Oct 11 07:40:10 crc systemd[1]: Starting Kubernetes Kubelet...
Oct 11 07:40:10 crc restorecon[4733]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Oct 11 07:40:10 crc restorecon[4733]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 
07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 11 07:40:11 crc 
restorecon[4733]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 
07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 11 07:40:11 crc restorecon[4733]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 11 07:40:11 crc restorecon[4733]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 11 07:40:11 crc restorecon[4733]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Oct 11 07:40:12 crc kubenswrapper[5016]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 11 07:40:12 crc kubenswrapper[5016]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Oct 11 07:40:12 crc kubenswrapper[5016]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 11 07:40:12 crc kubenswrapper[5016]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 11 07:40:12 crc kubenswrapper[5016]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 11 07:40:12 crc kubenswrapper[5016]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.773718 5016 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781108 5016 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781150 5016 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781155 5016 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781159 5016 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781163 5016 feature_gate.go:330] unrecognized feature gate: Example
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781167 5016 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781175 5016 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781179 5016 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781182 5016 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781186 5016 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781190 5016 feature_gate.go:330] unrecognized feature gate: PinnedImages
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781194 5016 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781198 5016 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781202 5016 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781207 5016 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781242 5016 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781246 5016 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781250 5016 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781254 5016 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781259 5016 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
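The deprecated-flag warnings above all point at the same fix: move the settings into the KubeletConfiguration file passed via --config. A minimal sketch of that migration follows, assuming a recent Kubernetes release where containerRuntimeEndpoint is a config-file field; the socket path, taint, and reserved-resource values are placeholders, not this node's real configuration.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletconfig "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Config-file equivalents of the deprecated flags seen in the log.
	// All concrete values below are placeholders, not taken from this node.
	cfg := kubeletconfig.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		// --container-runtime-endpoint
		ContainerRuntimeEndpoint: "unix:///var/run/crio/crio.sock",
		// --volume-plugin-dir
		VolumePluginDir: "/etc/kubernetes/kubelet-plugins/volume/exec",
		// --register-with-taints
		RegisterWithTaints: []corev1.Taint{
			{Key: "node-role.kubernetes.io/master", Effect: corev1.TaintEffectNoSchedule},
		},
		// --system-reserved
		SystemReserved: map[string]string{"cpu": "500m", "memory": "1Gi"},
		// --minimum-container-ttl-duration is superseded by eviction settings
		EvictionHard: map[string]string{"memory.available": "100Mi"},
		// --pod-infra-container-image has no config-file field: per the log,
		// the image garbage collector gets the sandbox image from the CRI runtime.
	}
	out, err := yaml.Marshal(&cfg)
	if err != nil {
		panic(err)
	}
	// Prints a YAML document suitable for the file referenced by --config.
	fmt.Print(string(out))
}
```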
W1011 07:40:12.781263 5016 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781269 5016 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781274 5016 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781279 5016 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781284 5016 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781288 5016 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781293 5016 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781299 5016 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781304 5016 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781309 5016 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781313 5016 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781317 5016 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781321 5016 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781326 5016 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781332 5016 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781338 5016 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
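Each "unrecognized feature gate" warning from feature_gate.go:330 names an OpenShift-defined gate that the kubelet's embedded upstream gate registry does not know about; as the timestamps show, the gate set is parsed more than once during startup, so the same names recur in the later passes below. A sketch that collapses the repeats into the distinct set, again assuming only the journal format shown here (function name hypothetical):

```python
import re

# Matches e.g. "feature_gate.go:330] unrecognized feature gate: GatewayAPI".
UNRECOGNIZED_RE = re.compile(r"unrecognized feature gate: (\w+)")

def unrecognized_gates(journal_text: str) -> list[str]:
    """Distinct gate names warned about, repeats across passes collapsed."""
    return sorted(set(UNRECOGNIZED_RE.findall(journal_text)))
```

Applied to this capture it would return gates such as AdminNetworkPolicy, GatewayAPI, and RouteAdvertisements exactly once each, however many parsing passes re-log them.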
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781343 5016 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781348 5016 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781353 5016 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781358 5016 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781362 5016 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781366 5016 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781370 5016 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781375 5016 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781378 5016 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781382 5016 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781386 5016 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781389 5016 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781393 5016 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781397 5016 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781401 5016 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781405 5016 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781409 5016 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781412 5016 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781416 5016 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781419 5016 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781423 5016 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781426 5016 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781429 5016 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781433 5016 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781437 5016 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781440 5016 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 
11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781444 5016 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781450 5016 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781454 5016 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781458 5016 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781461 5016 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781465 5016 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781468 5016 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781474 5016 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.781478 5016 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781588 5016 flags.go:64] FLAG: --address="0.0.0.0" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781601 5016 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781613 5016 flags.go:64] FLAG: --anonymous-auth="true" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781619 5016 flags.go:64] FLAG: --application-metrics-count-limit="100" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781625 5016 flags.go:64] FLAG: --authentication-token-webhook="false" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781630 5016 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781637 5016 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781644 5016 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781649 5016 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781672 5016 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781677 5016 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781682 5016 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781687 5016 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781692 5016 flags.go:64] FLAG: --cgroup-root="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781697 5016 flags.go:64] FLAG: --cgroups-per-qos="true" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781702 5016 flags.go:64] FLAG: --client-ca-file="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781706 5016 flags.go:64] FLAG: --cloud-config="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781710 5016 flags.go:64] FLAG: --cloud-provider="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781714 5016 flags.go:64] FLAG: --cluster-dns="[]" Oct 11 07:40:12 
crc kubenswrapper[5016]: I1011 07:40:12.781720 5016 flags.go:64] FLAG: --cluster-domain="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781724 5016 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781729 5016 flags.go:64] FLAG: --config-dir="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781733 5016 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781738 5016 flags.go:64] FLAG: --container-log-max-files="5" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781744 5016 flags.go:64] FLAG: --container-log-max-size="10Mi" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781749 5016 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781753 5016 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781758 5016 flags.go:64] FLAG: --containerd-namespace="k8s.io" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781763 5016 flags.go:64] FLAG: --contention-profiling="false" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781767 5016 flags.go:64] FLAG: --cpu-cfs-quota="true" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781779 5016 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781784 5016 flags.go:64] FLAG: --cpu-manager-policy="none" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781789 5016 flags.go:64] FLAG: --cpu-manager-policy-options="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781795 5016 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781799 5016 flags.go:64] FLAG: --enable-controller-attach-detach="true" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781803 5016 flags.go:64] FLAG: --enable-debugging-handlers="true" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781808 5016 flags.go:64] FLAG: --enable-load-reader="false" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781812 5016 flags.go:64] FLAG: --enable-server="true" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781817 5016 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781838 5016 flags.go:64] FLAG: --event-burst="100" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781843 5016 flags.go:64] FLAG: --event-qps="50" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781847 5016 flags.go:64] FLAG: --event-storage-age-limit="default=0" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781852 5016 flags.go:64] FLAG: --event-storage-event-limit="default=0" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781856 5016 flags.go:64] FLAG: --eviction-hard="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781862 5016 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781866 5016 flags.go:64] FLAG: --eviction-minimum-reclaim="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781871 5016 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781876 5016 flags.go:64] FLAG: --eviction-soft="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781880 5016 flags.go:64] FLAG: 
--eviction-soft-grace-period="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781884 5016 flags.go:64] FLAG: --exit-on-lock-contention="false" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781888 5016 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781893 5016 flags.go:64] FLAG: --experimental-mounter-path="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781897 5016 flags.go:64] FLAG: --fail-cgroupv1="false" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781902 5016 flags.go:64] FLAG: --fail-swap-on="true" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781906 5016 flags.go:64] FLAG: --feature-gates="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781911 5016 flags.go:64] FLAG: --file-check-frequency="20s" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781915 5016 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781920 5016 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781924 5016 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781929 5016 flags.go:64] FLAG: --healthz-port="10248" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781934 5016 flags.go:64] FLAG: --help="false" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781938 5016 flags.go:64] FLAG: --hostname-override="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781946 5016 flags.go:64] FLAG: --housekeeping-interval="10s" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781951 5016 flags.go:64] FLAG: --http-check-frequency="20s" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781956 5016 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781961 5016 flags.go:64] FLAG: --image-credential-provider-config="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781965 5016 flags.go:64] FLAG: --image-gc-high-threshold="85" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781970 5016 flags.go:64] FLAG: --image-gc-low-threshold="80" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781974 5016 flags.go:64] FLAG: --image-service-endpoint="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781978 5016 flags.go:64] FLAG: --kernel-memcg-notification="false" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781982 5016 flags.go:64] FLAG: --kube-api-burst="100" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781987 5016 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781992 5016 flags.go:64] FLAG: --kube-api-qps="50" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.781996 5016 flags.go:64] FLAG: --kube-reserved="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782000 5016 flags.go:64] FLAG: --kube-reserved-cgroup="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782004 5016 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782008 5016 flags.go:64] FLAG: --kubelet-cgroups="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782013 5016 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782017 5016 flags.go:64] FLAG: --lock-file="" 
Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782022 5016 flags.go:64] FLAG: --log-cadvisor-usage="false" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782026 5016 flags.go:64] FLAG: --log-flush-frequency="5s" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782030 5016 flags.go:64] FLAG: --log-json-info-buffer-size="0" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782037 5016 flags.go:64] FLAG: --log-json-split-stream="false" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782042 5016 flags.go:64] FLAG: --log-text-info-buffer-size="0" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782046 5016 flags.go:64] FLAG: --log-text-split-stream="false" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782051 5016 flags.go:64] FLAG: --logging-format="text" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782055 5016 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782059 5016 flags.go:64] FLAG: --make-iptables-util-chains="true" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782064 5016 flags.go:64] FLAG: --manifest-url="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782068 5016 flags.go:64] FLAG: --manifest-url-header="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782074 5016 flags.go:64] FLAG: --max-housekeeping-interval="15s" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782078 5016 flags.go:64] FLAG: --max-open-files="1000000" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782084 5016 flags.go:64] FLAG: --max-pods="110" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782088 5016 flags.go:64] FLAG: --maximum-dead-containers="-1" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782096 5016 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782101 5016 flags.go:64] FLAG: --memory-manager-policy="None" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782105 5016 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782110 5016 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782114 5016 flags.go:64] FLAG: --node-ip="192.168.126.11" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782119 5016 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782132 5016 flags.go:64] FLAG: --node-status-max-images="50" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782137 5016 flags.go:64] FLAG: --node-status-update-frequency="10s" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782142 5016 flags.go:64] FLAG: --oom-score-adj="-999" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782147 5016 flags.go:64] FLAG: --pod-cidr="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782164 5016 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782173 5016 flags.go:64] FLAG: --pod-manifest-path="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782177 5016 flags.go:64] FLAG: --pod-max-pids="-1" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782181 5016 
flags.go:64] FLAG: --pods-per-core="0" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782186 5016 flags.go:64] FLAG: --port="10250" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782190 5016 flags.go:64] FLAG: --protect-kernel-defaults="false" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782194 5016 flags.go:64] FLAG: --provider-id="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782198 5016 flags.go:64] FLAG: --qos-reserved="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782203 5016 flags.go:64] FLAG: --read-only-port="10255" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782207 5016 flags.go:64] FLAG: --register-node="true" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782211 5016 flags.go:64] FLAG: --register-schedulable="true" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782216 5016 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782225 5016 flags.go:64] FLAG: --registry-burst="10" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782229 5016 flags.go:64] FLAG: --registry-qps="5" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782233 5016 flags.go:64] FLAG: --reserved-cpus="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782238 5016 flags.go:64] FLAG: --reserved-memory="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782244 5016 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782248 5016 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782254 5016 flags.go:64] FLAG: --rotate-certificates="false" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782258 5016 flags.go:64] FLAG: --rotate-server-certificates="false" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782263 5016 flags.go:64] FLAG: --runonce="false" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782267 5016 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782274 5016 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782279 5016 flags.go:64] FLAG: --seccomp-default="false" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782283 5016 flags.go:64] FLAG: --serialize-image-pulls="true" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782288 5016 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782292 5016 flags.go:64] FLAG: --storage-driver-db="cadvisor" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782297 5016 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782302 5016 flags.go:64] FLAG: --storage-driver-password="root" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782306 5016 flags.go:64] FLAG: --storage-driver-secure="false" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782311 5016 flags.go:64] FLAG: --storage-driver-table="stats" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782315 5016 flags.go:64] FLAG: --storage-driver-user="root" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782319 5016 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782324 5016 flags.go:64] FLAG: --sync-frequency="1m0s" Oct 
11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782328 5016 flags.go:64] FLAG: --system-cgroups="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782334 5016 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782341 5016 flags.go:64] FLAG: --system-reserved-cgroup="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782345 5016 flags.go:64] FLAG: --tls-cert-file="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782349 5016 flags.go:64] FLAG: --tls-cipher-suites="[]" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782354 5016 flags.go:64] FLAG: --tls-min-version="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782358 5016 flags.go:64] FLAG: --tls-private-key-file="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782363 5016 flags.go:64] FLAG: --topology-manager-policy="none" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782367 5016 flags.go:64] FLAG: --topology-manager-policy-options="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782371 5016 flags.go:64] FLAG: --topology-manager-scope="container" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782376 5016 flags.go:64] FLAG: --v="2" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782385 5016 flags.go:64] FLAG: --version="false" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782392 5016 flags.go:64] FLAG: --vmodule="" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782398 5016 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782403 5016 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782518 5016 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782524 5016 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782528 5016 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782532 5016 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782536 5016 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782542 5016 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782551 5016 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782555 5016 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782559 5016 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782563 5016 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782566 5016 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782570 5016 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782573 5016 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization 
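The flags.go:64 entries above dump every kubelet flag together with its effective value, one entry per flag, with the value double-quoted (FLAG: --config="/etc/kubernetes/kubelet.conf", FLAG: --node-ip="192.168.126.11", and so on). A sketch that turns that dump into a lookup table, assuming the quoting shown here (function name hypothetical):

```python
import re

# Matches e.g. 'flags.go:64] FLAG: --node-ip="192.168.126.11"'.
FLAG_RE = re.compile(r'FLAG: (--[\w-]+)="([^"]*)"')

def flag_values(journal_text: str) -> dict[str, str]:
    """Map each dumped kubelet flag to its effective string value."""
    return dict(FLAG_RE.findall(journal_text))

# e.g. flag_values(text)["--config"]  -> "/etc/kubernetes/kubelet.conf"
#      flag_values(text)["--node-ip"] -> "192.168.126.11"
```

Values are kept as strings because the dump mixes scalars, durations ("2m0s"), lists ("[]"), and key=value maps (--system-reserved) in one notation.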
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782577 5016 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782581 5016 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782584 5016 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782588 5016 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782592 5016 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782595 5016 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782599 5016 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782602 5016 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782606 5016 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782609 5016 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782613 5016 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782616 5016 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782620 5016 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782623 5016 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782627 5016 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782630 5016 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782634 5016 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782637 5016 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782641 5016 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782645 5016 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782664 5016 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782670 5016 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782674 5016 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782678 5016 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782685 5016 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782691 5016 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782696 5016 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782700 5016 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782704 5016 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782709 5016 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782713 5016 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782719 5016 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782725 5016 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782730 5016 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782735 5016 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782740 5016 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782745 5016 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782750 5016 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782755 5016 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782760 5016 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782764 5016 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782769 5016 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782773 5016 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782777 5016 feature_gate.go:330] unrecognized feature gate: Example Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782781 5016 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782786 5016 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782790 5016 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782794 5016 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782798 5016 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782802 5016 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782806 5016 feature_gate.go:330] 
unrecognized feature gate: RouteAdvertisements Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782810 5016 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782815 5016 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782819 5016 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782825 5016 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782830 5016 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782838 5016 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.782847 5016 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.782865 5016 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.797775 5016 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.797828 5016 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.797976 5016 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.797990 5016 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798000 5016 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798008 5016 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798016 5016 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798024 5016 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798032 5016 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798041 5016 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798048 5016 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798056 5016 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798065 5016 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798073 5016 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 
11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798080 5016 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798089 5016 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798096 5016 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798104 5016 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798115 5016 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798126 5016 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798135 5016 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798143 5016 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798154 5016 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798166 5016 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798176 5016 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798186 5016 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798194 5016 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798204 5016 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798212 5016 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798220 5016 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798228 5016 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798237 5016 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798245 5016 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798255 5016 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798263 5016 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798271 5016 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798279 5016 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798287 5016 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798297 5016 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. 
It will be removed in a future release. Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798309 5016 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798317 5016 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798326 5016 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798334 5016 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798342 5016 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798349 5016 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798357 5016 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798365 5016 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798373 5016 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798381 5016 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798389 5016 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798397 5016 feature_gate.go:330] unrecognized feature gate: Example Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798405 5016 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798413 5016 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798420 5016 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798428 5016 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798436 5016 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798445 5016 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798453 5016 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798461 5016 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798469 5016 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798477 5016 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798487 5016 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798496 5016 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798504 5016 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798513 5016 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798522 5016 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798529 5016 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798538 5016 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798546 5016 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798554 5016 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798562 5016 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798569 5016 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798578 5016 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.798591 5016 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798937 5016 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798955 5016 feature_gate.go:330] unrecognized feature gate: Example Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798965 5016 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798975 5016 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798983 5016 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.798992 5016 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799000 5016 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799008 5016 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799017 5016 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799025 5016 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799033 5016 feature_gate.go:330] unrecognized feature gate: 
ClusterMonitoringConfig Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799041 5016 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799049 5016 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799057 5016 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799065 5016 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799072 5016 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799080 5016 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799087 5016 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799095 5016 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799104 5016 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799111 5016 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799119 5016 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799127 5016 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799136 5016 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799144 5016 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799152 5016 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799159 5016 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799170 5016 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
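Once each pass of warnings completes, feature_gate.go:386 logs the effective gate map, {map[CloudDualStackNodeIPs:true ... ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}, first at 07:40:12.782865 and then again after each later pass. A sketch that parses one such summary entry into booleans, assuming the Go map formatting shown here (function name hypothetical):

```python
import re

GATES_RE = re.compile(r"feature gates: \{map\[([^\]]*)\]\}")
PAIR_RE = re.compile(r"(\w+):(true|false)")

def effective_gates(journal_entry: str) -> dict[str, bool]:
    """Parse a 'feature gates: {map[Name:bool ...]}' summary entry."""
    body = GATES_RE.search(journal_entry)
    if body is None:
        return {}
    return {name: value == "true" for name, value in PAIR_RE.findall(body.group(1))}

# For this capture: effective_gates(entry)["KMSv1"] is True while
# effective_gates(entry)["VolumeAttributesClass"] is False.
```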
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799181 5016 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799190 5016 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799199 5016 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799208 5016 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799216 5016 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799225 5016 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799234 5016 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799243 5016 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799251 5016 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799268 5016 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799276 5016 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799285 5016 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799292 5016 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799301 5016 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799309 5016 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799317 5016 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799325 5016 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799332 5016 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799340 5016 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799348 5016 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799356 5016 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799363 5016 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799371 5016 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799381 5016 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799389 5016 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799397 5016 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799405 5016 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799414 5016 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799421 5016 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799429 5016 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799437 5016 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799444 5016 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799456 5016 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799466 5016 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799474 5016 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799483 5016 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799491 5016 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799502 5016 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799512 5016 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799521 5016 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799530 5016 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799538 5016 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Oct 11 07:40:12 crc kubenswrapper[5016]: W1011 07:40:12.799546 5016 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.799558 5016 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.799856 5016 server.go:940] "Client rotation is on, will bootstrap in background"
Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.808592 5016 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.808825 5016 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.810689 5016 server.go:997] "Starting client certificate rotation"
Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.810736 5016 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.816028 5016 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-05 11:08:21.191001966 +0000 UTC
Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.816130 5016 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 1323h28m8.374874641s for next certificate rotation
Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.888135 5016 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.894718 5016 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.925423 5016 log.go:25] "Validated CRI v1 runtime API"
Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.988351 5016 log.go:25] "Validated CRI v1 image API"
Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.990559 5016 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.998182 5016 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-10-11-07-28-45-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Oct 11 07:40:12 crc kubenswrapper[5016]: I1011 07:40:12.998248 5016 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.030622 5016 manager.go:217] Machine: {Timestamp:2025-10-11 07:40:13.026945581 +0000 UTC m=+0.927401597 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:08126ab1-62b0-4804-a043-8168875482af BootID:f4122745-3248-41a5-a5a4-4bfabc330a61 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:a9:8a:0d Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:a9:8a:0d Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:99:85:8a Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:b4:28:87 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:c6:09:d9 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:d6:2e:8f Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:ce:a6:56 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ca:db:00:3a:db:45 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:f6:5a:26:e3:a4:51 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.031210 5016 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.031497 5016 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.033189 5016 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.033522 5016 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.033572 5016 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.034145 5016 topology_manager.go:138] "Creating topology manager with none policy"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.034167 5016 container_manager_linux.go:303] "Creating device plugin manager"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.036151 5016 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.036211 5016 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.037142 5016 state_mem.go:36] "Initialized new in-memory state store"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.037289 5016 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.041060 5016 kubelet.go:418] "Attempting to sync node with API server"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.041096 5016 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.041124 5016 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.041145 5016 kubelet.go:324] "Adding apiserver pod source"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.041164 5016 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.046133 5016 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.047378 5016 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Oct 11 07:40:13 crc kubenswrapper[5016]: W1011 07:40:13.056243 5016 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Oct 11 07:40:13 crc kubenswrapper[5016]: E1011 07:40:13.056408 5016 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError"
Oct 11 07:40:13 crc kubenswrapper[5016]: W1011 07:40:13.056421 5016 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Oct 11 07:40:13 crc kubenswrapper[5016]: E1011 07:40:13.056523 5016 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.059371 5016 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.061281 5016 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.061325 5016 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.061340 5016 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.061354 5016 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.061377 5016 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.061391 5016 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.061411 5016 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.061432 5016 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.061448 5016 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.061465 5016 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.061484 5016 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.061498 5016 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.062618 5016 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.063461 5016 server.go:1280] "Started kubelet"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.063895 5016 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Oct 11 07:40:13 crc systemd[1]: Started Kubernetes Kubelet.
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.065661 5016 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.065789 5016 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.066539 5016 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.068083 5016 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.068122 5016 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.068512 5016 volume_manager.go:287] "The desired_state_of_world populator starts"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.068556 5016 volume_manager.go:289] "Starting Kubelet Volume Manager"
Oct 11 07:40:13 crc kubenswrapper[5016]: E1011 07:40:13.068556 5016 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.068618 5016 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.068805 5016 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 06:44:55.679161432 +0000 UTC
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.068867 5016 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 1007h4m42.610302798s for next certificate rotation
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.069998 5016 factory.go:55] Registering systemd factory
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.070044 5016 factory.go:221] Registration of the systemd container factory successfully
Oct 11 07:40:13 crc kubenswrapper[5016]: W1011 07:40:13.070267 5016 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Oct 11 07:40:13 crc kubenswrapper[5016]: E1011 07:40:13.070523 5016 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.070645 5016 factory.go:153] Registering CRI-O factory
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.070918 5016 factory.go:221] Registration of the crio container factory successfully
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.071181 5016 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.071370 5016 factory.go:103] Registering Raw factory
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.071539 5016 manager.go:1196] Started watching for new ooms in manager
Oct 11 07:40:13 crc kubenswrapper[5016]: E1011 07:40:13.072561 5016 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="200ms"
Oct 11 07:40:13 crc kubenswrapper[5016]: E1011 07:40:13.072121 5016 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.200:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186d5fd88d821041 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-10-11 07:40:13.063417921 +0000 UTC m=+0.963873907,LastTimestamp:2025-10-11 07:40:13.063417921 +0000 UTC m=+0.963873907,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.078819 5016 manager.go:319] Starting recovery of all containers
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.079089 5016 server.go:460] "Adding debug handlers to kubelet server"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.092541 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.092621 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.092795 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.092832 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.092856 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.092880 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.092908 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.092935 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.092964 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.092988 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.093016 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.093042 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.093067 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.093094 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.093120 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.093148 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.093175 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.093198 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.093225 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.093253 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.093276 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.093301 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.093327 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.093353 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.093378 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.093403 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.093444 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.093692 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.093721 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.093747 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.093817 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.093847 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.093920 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.093945 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.093969 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.093994 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.094019 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.094046 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.094069 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.094094 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.094119 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.094143 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.094168 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.095429 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.095473 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.095499 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.095523 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.095551 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.095575 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.095600 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.095625 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.096300 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.096441 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.096480 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.096508 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.096532 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.096558 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.096581 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.096603 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.097261 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.097326 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.097350 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.097370 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.097391 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.097410 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.097429 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.097448 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.097498 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.097523 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.097546 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.097571 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.097597 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.097621 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.097648 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.097859 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.097889 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.100933 5016 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101003 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101041 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101073 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101099 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101124 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101152 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101178 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101202 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101228 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101256 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101282 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101310 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101335 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101358 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101384 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101409 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101433 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101458 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101485 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101508 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101532 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101555 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101581 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101605 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101627 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101716 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101747 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101811 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101850 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101878 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101903 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101930 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101961 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.101991 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102017 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102042 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102069 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102094 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102117 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102138 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102157 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102175 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102212 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102231 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102250 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102271 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102291 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102310 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102329 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102347 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102366 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102384 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102403 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102423 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102447 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102471 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102495 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102519 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102543 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102561 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102580 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102599 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102618 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102636 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102751 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102781 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102812 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102832 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102852 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f"
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102870 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102887 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102906 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102924 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102942 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102961 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102979 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.102996 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103018 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103035 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103053 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103071 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103088 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103108 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103126 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103143 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103161 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103179 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103197 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103214 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103232 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103251 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103269 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103288 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103318 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103336 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103353 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103372 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103389 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103409 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103431 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103455 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103478 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103502 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103519 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103538 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103555 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103573 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103593 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103613 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103637 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103697 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103723 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103750 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103774 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103804 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103831 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103866 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103891 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103919 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103943 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103968 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.103991 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.104015 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.104054 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.104073 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.104098 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.104117 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.104136 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.104156 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.104173 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.104202 5016 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.104220 5016 reconstruct.go:97] "Volume reconstruction finished" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.104233 5016 reconciler.go:26] "Reconciler: start to sync state" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.116442 5016 manager.go:324] Recovery completed Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.129285 5016 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.131882 5016 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.131962 5016 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.132002 5016 kubelet.go:2335] "Starting kubelet main sync loop" Oct 11 07:40:13 crc kubenswrapper[5016]: E1011 07:40:13.132090 5016 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.132683 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.134871 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.134922 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.134934 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:13 crc kubenswrapper[5016]: W1011 07:40:13.135100 5016 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Oct 11 07:40:13 crc kubenswrapper[5016]: E1011 07:40:13.135193 5016 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.135944 5016 cpu_manager.go:225] "Starting CPU manager" policy="none" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.135962 5016 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.135982 5016 state_mem.go:36] "Initialized new in-memory state store" Oct 11 07:40:13 crc kubenswrapper[5016]: E1011 07:40:13.168708 5016 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.174138 5016 policy_none.go:49] "None policy: Start" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.175356 5016 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.175395 5016 state_mem.go:35] "Initializing new in-memory state store" Oct 11 07:40:13 crc kubenswrapper[5016]: E1011 07:40:13.233117 5016 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.245324 5016 manager.go:334] "Starting Device Plugin manager" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.245417 5016 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.245439 5016 server.go:79] "Starting device plugin registration server" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.246209 5016 eviction_manager.go:189] "Eviction manager: 
starting control loop" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.246243 5016 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.246441 5016 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.246604 5016 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.246628 5016 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 11 07:40:13 crc kubenswrapper[5016]: E1011 07:40:13.254410 5016 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 11 07:40:13 crc kubenswrapper[5016]: E1011 07:40:13.277552 5016 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="400ms" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.346992 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.348170 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.348221 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.348230 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.348262 5016 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 11 07:40:13 crc kubenswrapper[5016]: E1011 07:40:13.348814 5016 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.433340 5016 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.433548 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.435333 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.435402 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.435414 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.435707 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.436180 5016 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.436282 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.437232 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.437293 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.437315 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.437736 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.437930 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.437968 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.438062 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.438105 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.438115 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.439161 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.439214 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.439257 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.439176 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.439442 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.439461 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.439708 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.439808 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.439846 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.440947 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.440976 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.440985 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.440987 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.441009 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.441022 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.441203 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.441331 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.441384 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.442149 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.442212 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.442231 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.442562 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.442612 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.442627 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.442713 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.442629 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.444092 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.444134 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.444152 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.509603 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.509687 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.509769 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.509835 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.509932 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.509966 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.509998 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.510030 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.510059 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.510088 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.510117 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.510149 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.510181 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.510210 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.510272 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.549372 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.551008 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:13 crc kubenswrapper[5016]: 
I1011 07:40:13.551070 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.551085 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.551118 5016 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 11 07:40:13 crc kubenswrapper[5016]: E1011 07:40:13.551817 5016 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.612406 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.612513 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.612547 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.612609 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.612700 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.612735 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.612789 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.612818 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.612784 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.612881 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.612728 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.612871 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.612801 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.612693 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.612993 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.613078 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.613209 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.613256 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" 
(UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.613280 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.613163 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.613210 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.613313 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.613347 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.613427 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.613484 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.613485 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.613524 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.613573 5016 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.613584 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.613711 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: E1011 07:40:13.679374 5016 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="800ms" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.789323 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.824716 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.846538 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: W1011 07:40:13.858468 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-8bad2d6aa009005c918e430be4183c6d36376166021b613d2741c91fecd7d014 WatchSource:0}: Error finding container 8bad2d6aa009005c918e430be4183c6d36376166021b613d2741c91fecd7d014: Status 404 returned error can't find the container with id 8bad2d6aa009005c918e430be4183c6d36376166021b613d2741c91fecd7d014 Oct 11 07:40:13 crc kubenswrapper[5016]: W1011 07:40:13.870768 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-aae0c63fe405d440ea54ef43cdf385506069d5cac292d1c6cde423dac3645014 WatchSource:0}: Error finding container aae0c63fe405d440ea54ef43cdf385506069d5cac292d1c6cde423dac3645014: Status 404 returned error can't find the container with id aae0c63fe405d440ea54ef43cdf385506069d5cac292d1c6cde423dac3645014 Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.877714 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: W1011 07:40:13.878251 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-bc4d319a9eedb139be82c6b1cd6a8b69e9ef7b32ee1c3d2e8f39871ed161f7ba WatchSource:0}: Error finding container bc4d319a9eedb139be82c6b1cd6a8b69e9ef7b32ee1c3d2e8f39871ed161f7ba: Status 404 returned error can't find the container with id bc4d319a9eedb139be82c6b1cd6a8b69e9ef7b32ee1c3d2e8f39871ed161f7ba Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.882387 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 11 07:40:13 crc kubenswrapper[5016]: W1011 07:40:13.898086 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-cee5bab173a3f234006ad453a15686eb7bd584781985a0a8fa6a31d913c309cd WatchSource:0}: Error finding container cee5bab173a3f234006ad453a15686eb7bd584781985a0a8fa6a31d913c309cd: Status 404 returned error can't find the container with id cee5bab173a3f234006ad453a15686eb7bd584781985a0a8fa6a31d913c309cd Oct 11 07:40:13 crc kubenswrapper[5016]: W1011 07:40:13.903453 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-ec502e41a12a4cba83af0d6be58d00814b71e903b6eec90a28d8140864fc55a2 WatchSource:0}: Error finding container ec502e41a12a4cba83af0d6be58d00814b71e903b6eec90a28d8140864fc55a2: Status 404 returned error can't find the container with id ec502e41a12a4cba83af0d6be58d00814b71e903b6eec90a28d8140864fc55a2 Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.952361 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.953695 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.953748 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.953761 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:13 crc kubenswrapper[5016]: I1011 07:40:13.953795 5016 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 11 07:40:13 crc kubenswrapper[5016]: E1011 07:40:13.954243 5016 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc" Oct 11 07:40:13 crc kubenswrapper[5016]: W1011 07:40:13.992792 5016 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Oct 11 07:40:13 crc kubenswrapper[5016]: E1011 07:40:13.992878 5016 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Oct 11 07:40:14 crc kubenswrapper[5016]: I1011 07:40:14.065581 5016 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Oct 11 07:40:14 crc kubenswrapper[5016]: W1011 07:40:14.093055 5016 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Oct 11 07:40:14 crc kubenswrapper[5016]: E1011 07:40:14.093194 5016 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Oct 11 07:40:14 crc kubenswrapper[5016]: W1011 07:40:14.118542 5016 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Oct 11 07:40:14 crc kubenswrapper[5016]: E1011 07:40:14.118622 5016 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Oct 11 07:40:14 crc kubenswrapper[5016]: I1011 07:40:14.139934 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"aae0c63fe405d440ea54ef43cdf385506069d5cac292d1c6cde423dac3645014"} Oct 11 07:40:14 crc kubenswrapper[5016]: I1011 07:40:14.141764 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8bad2d6aa009005c918e430be4183c6d36376166021b613d2741c91fecd7d014"} Oct 11 07:40:14 crc kubenswrapper[5016]: I1011 07:40:14.143590 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ec502e41a12a4cba83af0d6be58d00814b71e903b6eec90a28d8140864fc55a2"} Oct 11 07:40:14 crc kubenswrapper[5016]: I1011 07:40:14.145214 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cee5bab173a3f234006ad453a15686eb7bd584781985a0a8fa6a31d913c309cd"} Oct 11 07:40:14 crc kubenswrapper[5016]: I1011 07:40:14.146974 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"bc4d319a9eedb139be82c6b1cd6a8b69e9ef7b32ee1c3d2e8f39871ed161f7ba"} Oct 11 07:40:14 crc kubenswrapper[5016]: W1011 07:40:14.172461 5016 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Oct 11 07:40:14 crc kubenswrapper[5016]: E1011 07:40:14.172598 5016 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Oct 11 07:40:14 crc kubenswrapper[5016]: E1011 07:40:14.480632 5016 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="1.6s" Oct 11 07:40:14 crc kubenswrapper[5016]: I1011 07:40:14.754825 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 07:40:14 crc kubenswrapper[5016]: I1011 07:40:14.758056 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:14 crc kubenswrapper[5016]: I1011 07:40:14.758130 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:14 crc kubenswrapper[5016]: I1011 07:40:14.758156 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:14 crc kubenswrapper[5016]: I1011 07:40:14.758200 5016 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 11 07:40:14 crc kubenswrapper[5016]: E1011 07:40:14.758852 5016 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc" Oct 11 07:40:15 crc kubenswrapper[5016]: I1011 07:40:15.065347 5016 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Oct 11 07:40:15 crc kubenswrapper[5016]: W1011 07:40:15.802518 5016 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Oct 11 07:40:15 crc kubenswrapper[5016]: E1011 07:40:15.802920 5016 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Oct 11 07:40:15 crc kubenswrapper[5016]: W1011 07:40:15.812901 5016 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Oct 11 07:40:15 crc kubenswrapper[5016]: E1011 07:40:15.812929 5016 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Oct 11 07:40:16 crc kubenswrapper[5016]: E1011 07:40:16.021513 5016 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.200:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186d5fd88d821041 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-10-11 07:40:13.063417921 +0000 UTC m=+0.963873907,LastTimestamp:2025-10-11 07:40:13.063417921 +0000 UTC m=+0.963873907,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.065110 5016 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Oct 11 07:40:16 crc kubenswrapper[5016]: E1011 07:40:16.081982 5016 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="3.2s" Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.154618 5016 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8c1012749b9a46e2d9c5c8d5c68341565417d0775c1eef26c49868833025b9f4" exitCode=0 Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.154723 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8c1012749b9a46e2d9c5c8d5c68341565417d0775c1eef26c49868833025b9f4"} Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.154780 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.156252 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.156291 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.156306 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.157265 5016 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e" exitCode=0 Oct 11 
Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.157313 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e"}
Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.157411 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.158352 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.158403 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.158423 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.159945 5016 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="de2507ca9c487e1b435773006b5f21fbebe10d357449235d97aee3a26e44b545" exitCode=0
Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.160045 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"de2507ca9c487e1b435773006b5f21fbebe10d357449235d97aee3a26e44b545"}
Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.160055 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.160578 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.161724 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.161767 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.161784 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.161903 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.161942 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.161966 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.164100 5016 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="1ba5ca7a8267df0d4a27aee36399c69320463dd65eb9841d5ce11a25fe8ba7e0" exitCode=0
Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.164155 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"1ba5ca7a8267df0d4a27aee36399c69320463dd65eb9841d5ce11a25fe8ba7e0"}
Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.164213 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.165017 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.165045 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.165054 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.169687 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb"}
Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.169744 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff"}
Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.169807 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b"}
Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.359115 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.360074 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.360109 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.360123 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:16 crc kubenswrapper[5016]: I1011 07:40:16.360145 5016 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Oct 11 07:40:16 crc kubenswrapper[5016]: E1011 07:40:16.360535 5016 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc"
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.065269 5016 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Oct 11 07:40:17 crc kubenswrapper[5016]: W1011 07:40:17.097470 5016 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Oct 11 07:40:17 crc kubenswrapper[5016]: E1011 07:40:17.097541 5016 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError"
Oct 11 07:40:17 crc kubenswrapper[5016]: W1011 07:40:17.131276 5016 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Oct 11 07:40:17 crc kubenswrapper[5016]: E1011 07:40:17.131353 5016 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError"
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.173041 5016 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="53680e6940fd960b9d2c9a869a22e5ff924278d9568b85c43bd75efc132ea41b" exitCode=0
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.173108 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"53680e6940fd960b9d2c9a869a22e5ff924278d9568b85c43bd75efc132ea41b"}
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.173252 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.174113 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.174139 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.174151 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.178431 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ea6711eda1cb5ec3df8165e408d1f2109137c22c462c889e4abc01260ea44774"}
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.178462 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962"}
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.178477 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95"}
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.178491 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a"}
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.178502 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad"}
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.180146 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"516f350b59aaf3bf09a0f57f5e320b94e2d31b696055b3f1095b16fb6ca62bf6"}
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.180278 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.181226 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.181249 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.181261 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.184646 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a5052a1db05bf3dcb763f3a9e0cf9f74a9d7ad74c5a7a0baf52ec94281c67f51"}
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.184749 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2d6c454ac0533b0f690a8d17eaec62d8fb26b02233b00f87e7fc5c03ef3790eb"}
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.184772 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"052cf969ede798da08d5b097daaf1424d6ccad8eaa6699500e1d0cfe15a5625e"}
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.184922 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.186421 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.186468 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.186486 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.197561 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792"}
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.197616 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.198577 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.198621 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.198637 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.710811 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.722139 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 11 07:40:17 crc kubenswrapper[5016]: I1011 07:40:17.853372 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 11 07:40:18 crc kubenswrapper[5016]: I1011 07:40:18.065331 5016 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Oct 11 07:40:18 crc kubenswrapper[5016]: I1011 07:40:18.203474 5016 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="38f0ae441a39775e3374a96f8e4b67a984106d15e26131e2497005e2a943cfa0" exitCode=0
Oct 11 07:40:18 crc kubenswrapper[5016]: I1011 07:40:18.203678 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 07:40:18 crc kubenswrapper[5016]: I1011 07:40:18.203708 5016 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 11 07:40:18 crc kubenswrapper[5016]: I1011 07:40:18.203774 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 07:40:18 crc kubenswrapper[5016]: I1011 07:40:18.203858 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 07:40:18 crc kubenswrapper[5016]: I1011 07:40:18.203862 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 07:40:18 crc kubenswrapper[5016]: I1011 07:40:18.204285 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"38f0ae441a39775e3374a96f8e4b67a984106d15e26131e2497005e2a943cfa0"}
Oct 11 07:40:18 crc kubenswrapper[5016]: I1011 07:40:18.203782 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 07:40:18 crc kubenswrapper[5016]: I1011 07:40:18.204613 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:18 crc kubenswrapper[5016]: I1011 07:40:18.204642 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:18 crc kubenswrapper[5016]: I1011 07:40:18.204672 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:18 crc kubenswrapper[5016]: I1011 07:40:18.205537 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:18 crc kubenswrapper[5016]: I1011 07:40:18.205563 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:18 crc kubenswrapper[5016]: I1011 07:40:18.205571 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:18 crc kubenswrapper[5016]: I1011 07:40:18.205953 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:18 crc kubenswrapper[5016]: I1011 07:40:18.206015 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:18 crc kubenswrapper[5016]: I1011 07:40:18.206040 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:18 crc kubenswrapper[5016]: I1011 07:40:18.206170 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:18 crc kubenswrapper[5016]: I1011 07:40:18.206208 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:18 crc kubenswrapper[5016]: I1011 07:40:18.206222 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:18 crc kubenswrapper[5016]: I1011 07:40:18.206240 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:18 crc kubenswrapper[5016]: I1011 07:40:18.206227 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:18 crc kubenswrapper[5016]: I1011 07:40:18.206249 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:18 crc kubenswrapper[5016]: I1011 07:40:18.405518 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 11 07:40:19 crc kubenswrapper[5016]: I1011 07:40:19.209731 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9a254c0c9bba5134be365b302387aa5e8e8b967bd7a1de6296ff65794008b619"}
Oct 11 07:40:19 crc kubenswrapper[5016]: I1011 07:40:19.209781 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"68f761e91aacc2169b31fc00efa2560bc5a0666c78e26ef690f0823b38be23ea"}
Oct 11 07:40:19 crc kubenswrapper[5016]: I1011 07:40:19.209790 5016 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 11 07:40:19 crc kubenswrapper[5016]: I1011 07:40:19.209831 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 07:40:19 crc kubenswrapper[5016]: I1011 07:40:19.209799 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"abc8fd67e5193fa13acfb8c28921b9eca13de5e41e9171d5b4da2fe6bdb03af7"}
Oct 11 07:40:19 crc kubenswrapper[5016]: I1011 07:40:19.209879 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f92569074feebfa8ab70d6d422ca504f22c437e758a644feccce55ca35b0183d"}
Oct 11 07:40:19 crc kubenswrapper[5016]: I1011 07:40:19.209898 5016 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 11 07:40:19 crc kubenswrapper[5016]: I1011 07:40:19.209959 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 07:40:19 crc kubenswrapper[5016]: I1011 07:40:19.210451 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:19 crc kubenswrapper[5016]: I1011 07:40:19.210474 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:19 crc kubenswrapper[5016]: I1011 07:40:19.210484 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:19 crc kubenswrapper[5016]: I1011 07:40:19.211457 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:19 crc kubenswrapper[5016]: I1011 07:40:19.211484 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:19 crc kubenswrapper[5016]: I1011 07:40:19.211496 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:19 crc kubenswrapper[5016]: I1011 07:40:19.561341 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 07:40:19 crc kubenswrapper[5016]: I1011 07:40:19.562563 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:19 crc kubenswrapper[5016]: I1011 07:40:19.562608 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:19 crc kubenswrapper[5016]: I1011 07:40:19.562624 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:19 crc kubenswrapper[5016]: I1011 07:40:19.562681 5016 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Oct 11 07:40:20 crc kubenswrapper[5016]: I1011 07:40:20.217085 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9be3ac3f7b3ed23a620ff7aa8d90d78641b23e4fa9590cbdda8d9cb888711619"}
Oct 11 07:40:20 crc kubenswrapper[5016]: I1011 07:40:20.217112 5016 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 11 07:40:20 crc kubenswrapper[5016]: I1011 07:40:20.217199 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 07:40:20 crc kubenswrapper[5016]: I1011 07:40:20.217227 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 07:40:20 crc kubenswrapper[5016]: I1011 07:40:20.218355 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:20 crc kubenswrapper[5016]: I1011 07:40:20.218427 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:20 crc kubenswrapper[5016]: I1011 07:40:20.218444 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:20 crc kubenswrapper[5016]: I1011 07:40:20.218487 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:20 crc kubenswrapper[5016]: I1011 07:40:20.218505 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:20 crc kubenswrapper[5016]: I1011 07:40:20.218454 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:20 crc kubenswrapper[5016]: I1011 07:40:20.405637 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 11 07:40:20 crc kubenswrapper[5016]: I1011 07:40:20.405925 5016 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 11 07:40:20 crc kubenswrapper[5016]: I1011 07:40:20.405986 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 07:40:20 crc kubenswrapper[5016]: I1011 07:40:20.407561 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:20 crc kubenswrapper[5016]: I1011 07:40:20.407633 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:20 crc kubenswrapper[5016]: I1011 07:40:20.407707 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:21 crc kubenswrapper[5016]: I1011 07:40:21.141386 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Oct 11 07:40:21 crc kubenswrapper[5016]: I1011 07:40:21.220733 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 07:40:21 crc kubenswrapper[5016]: I1011 07:40:21.222115 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:21 crc kubenswrapper[5016]: I1011 07:40:21.222199 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:21 crc kubenswrapper[5016]: I1011 07:40:21.222218 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:21 crc kubenswrapper[5016]: I1011 07:40:21.278085 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 11 07:40:21 crc kubenswrapper[5016]: I1011 07:40:21.278282 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 07:40:21 crc kubenswrapper[5016]: I1011 07:40:21.279917 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:21 crc kubenswrapper[5016]: I1011 07:40:21.279968 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:21 crc kubenswrapper[5016]: I1011 07:40:21.279987 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:21 crc kubenswrapper[5016]: I1011 07:40:21.406450 5016 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 11 07:40:21 crc kubenswrapper[5016]: I1011 07:40:21.406550 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Oct 11 07:40:22 crc kubenswrapper[5016]: I1011 07:40:22.223369 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 07:40:22 crc kubenswrapper[5016]: I1011 07:40:22.224843 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:22 crc kubenswrapper[5016]: I1011 07:40:22.224909 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:22 crc kubenswrapper[5016]: I1011 07:40:22.224930 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:22 crc kubenswrapper[5016]: I1011 07:40:22.279212 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 11 07:40:22 crc kubenswrapper[5016]: I1011 07:40:22.279444 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 07:40:22 crc kubenswrapper[5016]: I1011 07:40:22.279638 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 11 07:40:22 crc kubenswrapper[5016]: I1011 07:40:22.279941 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 07:40:22 crc kubenswrapper[5016]: I1011 07:40:22.281093 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:22 crc kubenswrapper[5016]: I1011 07:40:22.281175 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:22 crc kubenswrapper[5016]: I1011 07:40:22.281201 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:22 crc kubenswrapper[5016]: I1011 07:40:22.281788 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:22 crc kubenswrapper[5016]: I1011 07:40:22.281842 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:22 crc kubenswrapper[5016]: I1011 07:40:22.281871 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:22 crc kubenswrapper[5016]: I1011 07:40:22.926629 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 11 07:40:22 crc kubenswrapper[5016]: I1011 07:40:22.926927 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 07:40:22 crc kubenswrapper[5016]: I1011 07:40:22.928449 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:22 crc kubenswrapper[5016]: I1011 07:40:22.928497 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:22 crc kubenswrapper[5016]: I1011 07:40:22.928505 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:23 crc kubenswrapper[5016]: E1011 07:40:23.254548 5016 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Oct 11 07:40:28 crc kubenswrapper[5016]: I1011 07:40:28.714871 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Oct 11 07:40:28 crc kubenswrapper[5016]: I1011 07:40:28.715317 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 07:40:28 crc kubenswrapper[5016]: I1011 07:40:28.717229 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:28 crc kubenswrapper[5016]: I1011 07:40:28.717272 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:28 crc kubenswrapper[5016]: I1011 07:40:28.717281 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:28 crc kubenswrapper[5016]: I1011 07:40:28.763714 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Oct 11 07:40:29 crc kubenswrapper[5016]: I1011 07:40:29.065695 5016 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Oct 11 07:40:29 crc kubenswrapper[5016]: I1011 07:40:29.143018 5016 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:38736->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Oct 11 07:40:29 crc kubenswrapper[5016]: I1011 07:40:29.143099 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:38736->192.168.126.11:17697: read: connection reset by peer"
Oct 11 07:40:29 crc kubenswrapper[5016]: I1011 07:40:29.243022 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Oct 11 07:40:29 crc kubenswrapper[5016]: I1011 07:40:29.244883 5016 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ea6711eda1cb5ec3df8165e408d1f2109137c22c462c889e4abc01260ea44774" exitCode=255
Oct 11 07:40:29 crc kubenswrapper[5016]: I1011 07:40:29.244969 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ea6711eda1cb5ec3df8165e408d1f2109137c22c462c889e4abc01260ea44774"}
Oct 11 07:40:29 crc kubenswrapper[5016]: I1011 07:40:29.245023 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 07:40:29 crc kubenswrapper[5016]: I1011 07:40:29.245095 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 11 07:40:29 crc kubenswrapper[5016]: I1011 07:40:29.246249 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:29 crc kubenswrapper[5016]: I1011 07:40:29.246301 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:29 crc kubenswrapper[5016]: I1011 07:40:29.246249 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:29 crc kubenswrapper[5016]: I1011 07:40:29.246317 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:29 crc kubenswrapper[5016]: I1011 07:40:29.246402 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:29 crc kubenswrapper[5016]: I1011 07:40:29.246439 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:29 crc kubenswrapper[5016]: I1011 07:40:29.247323 5016 scope.go:117] "RemoveContainer" containerID="ea6711eda1cb5ec3df8165e408d1f2109137c22c462c889e4abc01260ea44774"
Oct 11 07:40:29 crc kubenswrapper[5016]: I1011 07:40:29.257550 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Oct 11 07:40:29 crc kubenswrapper[5016]: E1011 07:40:29.282979 5016 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s"
Oct 11 07:40:29 crc kubenswrapper[5016]: E1011 07:40:29.563978 5016 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Oct 11 07:40:29 crc kubenswrapper[5016]: I1011 07:40:29.629886 5016 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Oct 11 07:40:29 crc kubenswrapper[5016]: I1011 07:40:29.629992 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Oct 11 07:40:29 crc kubenswrapper[5016]: I1011 07:40:29.634279 5016 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403}
probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Oct 11 07:40:30 crc kubenswrapper[5016]: I1011 07:40:30.250137 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 11 07:40:30 crc kubenswrapper[5016]: I1011 07:40:30.253212 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 07:40:30 crc kubenswrapper[5016]: I1011 07:40:30.253485 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00"} Oct 11 07:40:30 crc kubenswrapper[5016]: I1011 07:40:30.253700 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 07:40:30 crc kubenswrapper[5016]: I1011 07:40:30.254618 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:30 crc kubenswrapper[5016]: I1011 07:40:30.254694 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:30 crc kubenswrapper[5016]: I1011 07:40:30.254713 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:30 crc kubenswrapper[5016]: I1011 07:40:30.255824 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:30 crc kubenswrapper[5016]: I1011 07:40:30.255892 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:30 crc kubenswrapper[5016]: I1011 07:40:30.255912 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:30 crc kubenswrapper[5016]: I1011 07:40:30.413506 5016 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]log ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]etcd ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/openshift.io-api-request-count-filter ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/openshift.io-startkubeinformers ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/generic-apiserver-start-informers ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/priority-and-fairness-config-consumer ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/priority-and-fairness-filter ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 11 07:40:30 crc kubenswrapper[5016]: 
[+]poststarthook/start-apiextensions-informers ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/start-apiextensions-controllers ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/crd-informer-synced ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/start-system-namespaces-controller ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/start-cluster-authentication-info-controller ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/start-legacy-token-tracking-controller ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/start-service-ip-repair-controllers ok Oct 11 07:40:30 crc kubenswrapper[5016]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Oct 11 07:40:30 crc kubenswrapper[5016]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/priority-and-fairness-config-producer ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/bootstrap-controller ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/start-kube-aggregator-informers ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/apiservice-status-local-available-controller ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/apiservice-status-remote-available-controller ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/apiservice-registration-controller ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/apiservice-wait-for-first-sync ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/apiservice-discovery-controller ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/kube-apiserver-autoregistration ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]autoregister-completion ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/apiservice-openapi-controller ok Oct 11 07:40:30 crc kubenswrapper[5016]: [+]poststarthook/apiservice-openapiv3-controller ok Oct 11 07:40:30 crc kubenswrapper[5016]: livez check failed Oct 11 07:40:30 crc kubenswrapper[5016]: I1011 07:40:30.413566 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 07:40:31 crc kubenswrapper[5016]: I1011 07:40:31.278943 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 11 07:40:31 crc kubenswrapper[5016]: I1011 07:40:31.279121 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 07:40:31 crc kubenswrapper[5016]: I1011 07:40:31.280401 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:31 crc kubenswrapper[5016]: I1011 07:40:31.280442 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:31 crc kubenswrapper[5016]: I1011 07:40:31.280456 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:31 crc 
kubenswrapper[5016]: I1011 07:40:31.406564 5016 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 07:40:31 crc kubenswrapper[5016]: I1011 07:40:31.406633 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Oct 11 07:40:32 crc kubenswrapper[5016]: I1011 07:40:32.285285 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 11 07:40:32 crc kubenswrapper[5016]: I1011 07:40:32.285513 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 07:40:32 crc kubenswrapper[5016]: I1011 07:40:32.286804 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:32 crc kubenswrapper[5016]: I1011 07:40:32.286855 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:32 crc kubenswrapper[5016]: I1011 07:40:32.286874 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:33 crc kubenswrapper[5016]: E1011 07:40:33.254730 5016 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 11 07:40:34 crc kubenswrapper[5016]: I1011 07:40:34.624841 5016 trace.go:236] Trace[651817794]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Oct-2025 07:40:21.901) (total time: 12723ms): Oct 11 07:40:34 crc kubenswrapper[5016]: Trace[651817794]: ---"Objects listed" error: 12723ms (07:40:34.624) Oct 11 07:40:34 crc kubenswrapper[5016]: Trace[651817794]: [12.723051013s] [12.723051013s] END Oct 11 07:40:34 crc kubenswrapper[5016]: I1011 07:40:34.624880 5016 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Oct 11 07:40:34 crc kubenswrapper[5016]: I1011 07:40:34.626504 5016 trace.go:236] Trace[262813485]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Oct-2025 07:40:20.985) (total time: 13640ms): Oct 11 07:40:34 crc kubenswrapper[5016]: Trace[262813485]: ---"Objects listed" error: 13640ms (07:40:34.626) Oct 11 07:40:34 crc kubenswrapper[5016]: Trace[262813485]: [13.64076017s] [13.64076017s] END Oct 11 07:40:34 crc kubenswrapper[5016]: I1011 07:40:34.626525 5016 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Oct 11 07:40:34 crc kubenswrapper[5016]: I1011 07:40:34.627448 5016 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Oct 11 07:40:34 crc kubenswrapper[5016]: I1011 07:40:34.627458 5016 trace.go:236] Trace[599691228]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Oct-2025 07:40:21.731) (total time: 12896ms): Oct 11 07:40:34 crc kubenswrapper[5016]: Trace[599691228]: ---"Objects listed" 
error: 12896ms (07:40:34.627) Oct 11 07:40:34 crc kubenswrapper[5016]: Trace[599691228]: [12.896078081s] [12.896078081s] END Oct 11 07:40:34 crc kubenswrapper[5016]: I1011 07:40:34.627487 5016 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Oct 11 07:40:34 crc kubenswrapper[5016]: I1011 07:40:34.627947 5016 trace.go:236] Trace[384394549]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Oct-2025 07:40:22.100) (total time: 12527ms): Oct 11 07:40:34 crc kubenswrapper[5016]: Trace[384394549]: ---"Objects listed" error: 12527ms (07:40:34.627) Oct 11 07:40:34 crc kubenswrapper[5016]: Trace[384394549]: [12.527211411s] [12.527211411s] END Oct 11 07:40:34 crc kubenswrapper[5016]: I1011 07:40:34.627968 5016 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.057718 5016 apiserver.go:52] "Watching apiserver" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.063511 5016 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.063896 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.064282 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.064541 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.064616 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.064681 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.064298 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.064890 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.065597 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.064914 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.065752 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.066480 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.066555 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.066735 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.067102 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.067585 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.067669 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.067805 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.067931 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.067816 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.069752 5016 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.098169 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.122701 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.129982 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.130177 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.130265 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.130347 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.130452 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.130900 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.131000 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 11 07:40:35 crc 
kubenswrapper[5016]: I1011 07:40:35.131103 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.131279 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.131470 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.131572 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.131683 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.131791 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.131942 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.132036 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.132131 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.132203 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: 
\"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.132344 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.132440 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.132533 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.132629 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.132742 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.130624 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.133371 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.133808 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.133996 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.134334 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.134526 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.134730 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.134898 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.134946 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.135054 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.135186 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.135239 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.135270 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.135399 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.133715 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.130829 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.130896 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.131366 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.131513 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.131529 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.131721 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.131779 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.131803 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.131992 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.132176 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.132238 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.132541 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.135587 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.137897 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.138096 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.138205 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.132792 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.133310 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.133834 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.133897 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.134017 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.134356 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.134802 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.135045 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.135889 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.135950 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.136098 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). 
InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.136262 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.136490 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.136805 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.136875 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.137278 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.137408 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.137997 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.138124 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.138585 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.139144 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.139280 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.139263 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.138420 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.139532 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.139567 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.139941 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.140437 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.140921 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.140998 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.141069 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.141140 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.141254 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.141276 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.140977 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.141312 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.141409 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.140255 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.141474 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143009 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143115 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.143181 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:40:35.643157007 +0000 UTC m=+23.543613023 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143220 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143257 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143283 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143333 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143356 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143376 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143400 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143423 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143447 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143467 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143491 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143514 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143536 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143558 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143583 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143606 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143629 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143668 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143717 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143745 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143771 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143797 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143824 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143850 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143874 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143892 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143917 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.143986 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144075 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144107 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144134 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144160 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144188 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144203 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144215 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144247 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144275 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144299 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144350 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144379 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144406 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144448 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144475 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144499 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144525 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144550 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144572 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144593 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144614 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144632 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144635 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144710 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144734 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144756 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144779 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144796 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144813 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144831 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144847 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144864 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: 
\"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144883 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144899 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144916 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.144933 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145115 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145137 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145154 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145171 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145186 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145201 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: 
\"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145218 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145234 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145251 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145267 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145282 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145298 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145313 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145328 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145344 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145360 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145377 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145392 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145408 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145424 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145439 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145455 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145506 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145532 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145550 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145567 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145583 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145599 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145617 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145612 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145633 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145680 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145712 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145731 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145749 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") 
" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145779 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145804 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145823 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145844 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145862 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145879 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145899 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145916 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145933 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145951 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: 
\"44663579-783b-4372-86d6-acf235a62d72\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145968 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.145992 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146013 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146029 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146045 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146064 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146081 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146098 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146115 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146132 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod 
\"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146152 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146170 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146189 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146206 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146225 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146244 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146260 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146279 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146299 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146316 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146332 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146350 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146367 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146383 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146399 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146442 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146464 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146481 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146498 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146518 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146537 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146554 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146572 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146587 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146606 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146627 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146669 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146696 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146720 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146775 5016 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146806 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146836 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146863 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146884 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146905 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146924 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146946 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146968 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.146986 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.147005 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.147029 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.147047 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.147066 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.147170 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.147413 5016 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.147540 5016 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.147626 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.148676 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.148879 5016 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.149413 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.149464 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.149872 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.149957 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.150331 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.150370 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.150460 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.150543 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.150941 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.150839 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.150844 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.151287 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.151448 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.151702 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.151782 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.152102 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.152144 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.152265 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.152733 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.152767 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.152752 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.152922 5016 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.153104 5016 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.153264 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.153366 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.153523 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.153606 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.153980 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.154073 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.154500 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.154570 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.154629 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.154643 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.154679 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.154718 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.154789 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.155263 5016 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.155363 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-11 07:40:35.655339611 +0000 UTC m=+23.555795557 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.155991 5016 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.157000 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.157053 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.157124 5016 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.157192 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.157206 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.157512 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.157774 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.158377 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.158570 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.158631 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-11 07:40:35.658583263 +0000 UTC m=+23.559039209 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.159568 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.159603 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.160596 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.160606 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.160808 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.160856 5016 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.160882 5016 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.160899 5016 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.160948 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.160966 5016 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.160981 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161027 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161042 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161059 5016 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161104 5016 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161120 5016 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161135 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161149 5016 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161199 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161216 5016 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161230 5016 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161279 5016 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161296 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161311 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161356 5016 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161373 5016 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161387 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161403 5016 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161453 5016 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161468 5016 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161483 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161526 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161543 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161557 5016 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161603 5016 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161617 5016 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161592 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161632 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161646 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161716 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161733 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161748 5016 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161761 5016 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161775 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161818 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161825 5016 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161144 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161751 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.161874 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.162246 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.162412 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.162685 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.163025 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.163196 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.163479 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.163520 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.164188 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.164399 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.164535 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.164610 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.164709 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.164990 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.165054 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.165134 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.165231 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.165296 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.165600 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.165897 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.166017 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.166209 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.166280 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.166623 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.166704 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.167009 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.167038 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.167380 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.167416 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.167447 5016 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.167531 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-11 07:40:35.667497025 +0000 UTC m=+23.567952971 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.167701 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.168014 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.168316 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.168629 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.168404 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.168977 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.169096 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.169114 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.169122 5016 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.169156 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-11 07:40:35.669146072 +0000 UTC m=+23.569602018 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.169338 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.169489 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.169856 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.170470 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.170687 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.170961 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.171574 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.171856 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.172211 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.173465 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.173610 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.173714 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.173789 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.173716 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.174027 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.174339 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.174358 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.174425 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.174815 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.174822 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.175016 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.175545 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.175802 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.175813 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.176627 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.178178 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.178351 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.178546 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.178677 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.179018 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.181507 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.182804 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.185062 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.185073 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.186110 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.186154 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.186301 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.186554 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.186570 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.189014 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.189158 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.189287 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.189897 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.190267 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.190739 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.190804 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.190914 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.190965 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.191002 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.191030 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.191393 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.191414 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.191525 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.191669 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.191921 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.192131 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.192323 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.192370 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.192421 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.192784 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.192809 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.192781 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.192920 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.192928 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.193010 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.193488 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.193731 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.193811 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.193888 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.191840 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.194961 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.205588 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.227998 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.246489 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.247760 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.248822 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.266209 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.266464 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.266611 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.266713 5016 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.266829 5016 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.266905 5016 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.266982 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.267054 5016 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.267132 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.267195 5016 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.267263 5016 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.267333 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc 
kubenswrapper[5016]: I1011 07:40:35.267425 5016 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.267499 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.267568 5016 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.267636 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.267755 5016 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.267830 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.267898 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.268490 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.268573 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.268647 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.268733 5016 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.268817 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.268884 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath 
\"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.268122 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.268951 5016 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269025 5016 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269043 5016 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269058 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269073 5016 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269087 5016 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269100 5016 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269115 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269129 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269145 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269160 5016 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269176 5016 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269187 5016 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269198 5016 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269211 5016 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269223 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269236 5016 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269248 5016 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269261 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269274 5016 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269288 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269302 5016 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269314 5016 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269326 5016 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269338 5016 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269351 5016 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269363 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269376 5016 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269389 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269401 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269413 5016 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269426 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269439 5016 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269453 5016 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269466 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269480 5016 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269494 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269507 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269520 5016 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269533 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269546 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269558 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269570 5016 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269594 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269608 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269624 5016 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269637 5016 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269669 5016 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269683 5016 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269697 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269710 5016 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269723 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269735 5016 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269748 5016 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269760 5016 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269772 5016 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269783 5016 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269795 5016 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269806 5016 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269820 5016 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269832 5016 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269843 5016 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269857 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269867 5016 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269878 5016 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269891 5016 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269902 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269914 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269926 5016 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269938 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269950 5016 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269961 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269974 5016 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.269988 5016 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270000 5016 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270012 5016 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270025 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270036 5016 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270048 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270061 5016 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270073 5016 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270085 5016 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270098 5016 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270112 5016 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270125 5016 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270139 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270153 5016 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270167 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270181 5016 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270194 5016 reconciler_common.go:293] "Volume detached for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270206 5016 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270219 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270231 5016 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270242 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270254 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270267 5016 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270280 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270294 5016 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270310 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270325 5016 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270338 5016 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270352 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270365 5016 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270378 5016 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270391 5016 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270406 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270420 5016 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270435 5016 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270449 5016 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270463 5016 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270478 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270493 5016 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270506 5016 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270520 5016 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270532 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270545 5016 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270559 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270574 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270589 5016 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270605 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270622 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270637 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270665 5016 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270680 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.270694 5016 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.268333 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.380705 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.388786 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.394426 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 11 07:40:35 crc kubenswrapper[5016]: W1011 07:40:35.403410 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-b9cd272c402cbd216618037cfa00db9f1ebcfe165fe041ec35b69d29e82eded3 WatchSource:0}: Error finding container b9cd272c402cbd216618037cfa00db9f1ebcfe165fe041ec35b69d29e82eded3: Status 404 returned error can't find the container with id b9cd272c402cbd216618037cfa00db9f1ebcfe165fe041ec35b69d29e82eded3 Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.411274 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.433147 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.446029 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.463990 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.476967 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.488605 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.503418 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.513631 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.527775 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.546849 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.565580 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.578197 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.586771 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.595743 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.607124 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.674980 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.675065 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.675120 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.675153 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:40:36.675128356 +0000 UTC m=+24.575584312 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.675200 5016 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.675240 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-11 07:40:36.675232079 +0000 UTC m=+24.575688025 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.675251 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.675320 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.675419 5016 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.675429 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.675470 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.675487 5016 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.675472 5016 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-11 07:40:36.675455946 +0000 UTC m=+24.575911892 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.675605 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-11 07:40:36.675563809 +0000 UTC m=+24.576019775 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.675791 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.675811 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.675822 5016 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.675886 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-11 07:40:36.675874897 +0000 UTC m=+24.576330933 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.964597 5016 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.965994 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.966031 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.966040 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.966098 5016 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.973588 5016 kubelet_node_status.go:115] "Node was previously registered" node="crc" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.973722 5016 kubelet_node_status.go:79] "Successfully registered node" node="crc" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.974797 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.974822 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.974830 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.974843 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.974852 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:35Z","lastTransitionTime":"2025-10-11T07:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:35 crc kubenswrapper[5016]: E1011 07:40:35.987488 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.991770 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.991806 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.991817 5016 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.991833 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:35 crc kubenswrapper[5016]: I1011 07:40:35.991845 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:35Z","lastTransitionTime":"2025-10-11T07:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:36 crc kubenswrapper[5016]: E1011 07:40:36.001354 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.004435 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.004497 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.004510 5016 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.004528 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.004543 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:36Z","lastTransitionTime":"2025-10-11T07:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:36 crc kubenswrapper[5016]: E1011 07:40:36.015568 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.019091 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.019119 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.019130 5016 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.019145 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.019156 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:36Z","lastTransitionTime":"2025-10-11T07:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:36 crc kubenswrapper[5016]: E1011 07:40:36.027891 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.032600 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.032644 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.032671 5016 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.032686 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.032700 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:36Z","lastTransitionTime":"2025-10-11T07:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:36 crc kubenswrapper[5016]: E1011 07:40:36.044083 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: E1011 07:40:36.044207 5016 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.045582 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.045609 5016 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.045618 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.045630 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.045641 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:36Z","lastTransitionTime":"2025-10-11T07:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.148453 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.148497 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.148508 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.148524 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.148541 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:36Z","lastTransitionTime":"2025-10-11T07:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.186911 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-xcmjb"] Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.187450 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-lbbb2"] Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.187675 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.187733 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.188007 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-d7sp7"] Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.193820 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-49bvc"] Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.193929 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-d7sp7" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.194309 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.193967 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.194742 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.198538 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.198782 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.198971 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.199211 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.199338 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.199550 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.199744 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.200189 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.200194 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.200368 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.200442 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.200808 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.201251 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.211274 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.225150 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.235113 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.244851 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6711eda1cb5ec3df8165e408d1f2109137c22c462c889e4abc01260ea44774\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:29Z\\\",\\\"message\\\":\\\"W1011 07:40:17.672177 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1011 
07:40:17.672734 1 crypto.go:601] Generating new CA for check-endpoints-signer@1760168417 cert, and key in /tmp/serving-cert-3024763147/serving-signer.crt, /tmp/serving-cert-3024763147/serving-signer.key\\\\nI1011 07:40:18.306960 1 observer_polling.go:159] Starting file observer\\\\nW1011 07:40:18.309635 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1011 07:40:18.309947 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:18.311046 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3024763147/tls.crt::/tmp/serving-cert-3024763147/tls.key\\\\\\\"\\\\nF1011 07:40:29.138922 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.251531 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.251562 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.251572 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.251586 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.251597 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:36Z","lastTransitionTime":"2025-10-11T07:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.254882 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.265842 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.276415 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.277740 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311"} Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.277777 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091"} Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.277790 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c8fb1516bc76fb26cd34f80818b66e6d99691db5b0b16258369cddc684d8558d"} Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.278860 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b9cd272c402cbd216618037cfa00db9f1ebcfe165fe041ec35b69d29e82eded3"} Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.279887 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/917a6581-31ec-4abc-9543-652c8295144f-cnibin\") pod \"multus-additional-cni-plugins-xcmjb\" (UID: \"917a6581-31ec-4abc-9543-652c8295144f\") " pod="openshift-multus/multus-additional-cni-plugins-xcmjb" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.279916 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/917a6581-31ec-4abc-9543-652c8295144f-system-cni-dir\") pod \"multus-additional-cni-plugins-xcmjb\" (UID: \"917a6581-31ec-4abc-9543-652c8295144f\") " pod="openshift-multus/multus-additional-cni-plugins-xcmjb" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.279933 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0633ed26-7b6a-4a20-92ba-569891d9faff-proxy-tls\") pod \"machine-config-daemon-49bvc\" (UID: \"0633ed26-7b6a-4a20-92ba-569891d9faff\") " 
pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.279952 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m72h\" (UniqueName: \"kubernetes.io/projected/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-kube-api-access-5m72h\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.279969 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-host-var-lib-cni-multus\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.280048 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-host-run-multus-certs\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.280132 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-os-release\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.280154 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2khg4\" (UniqueName: \"kubernetes.io/projected/917a6581-31ec-4abc-9543-652c8295144f-kube-api-access-2khg4\") pod \"multus-additional-cni-plugins-xcmjb\" (UID: \"917a6581-31ec-4abc-9543-652c8295144f\") " pod="openshift-multus/multus-additional-cni-plugins-xcmjb" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.280184 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-host-run-k8s-cni-cncf-io\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.280440 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-multus-cni-dir\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.280463 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-host-run-netns\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.280483 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-multus-conf-dir\") pod \"multus-lbbb2\" (UID: 
\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.280499 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/917a6581-31ec-4abc-9543-652c8295144f-cni-binary-copy\") pod \"multus-additional-cni-plugins-xcmjb\" (UID: \"917a6581-31ec-4abc-9543-652c8295144f\") " pod="openshift-multus/multus-additional-cni-plugins-xcmjb" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.280517 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-system-cni-dir\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.280534 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-multus-daemon-config\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.280554 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/917a6581-31ec-4abc-9543-652c8295144f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xcmjb\" (UID: \"917a6581-31ec-4abc-9543-652c8295144f\") " pod="openshift-multus/multus-additional-cni-plugins-xcmjb" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.280586 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-host-var-lib-cni-bin\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.280603 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-host-var-lib-kubelet\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.280628 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/917a6581-31ec-4abc-9543-652c8295144f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xcmjb\" (UID: \"917a6581-31ec-4abc-9543-652c8295144f\") " pod="openshift-multus/multus-additional-cni-plugins-xcmjb" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.280696 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-cnibin\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.280719 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/0633ed26-7b6a-4a20-92ba-569891d9faff-rootfs\") pod \"machine-config-daemon-49bvc\" (UID: \"0633ed26-7b6a-4a20-92ba-569891d9faff\") " pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.280736 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-hostroot\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.280751 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-etc-kubernetes\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.280768 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/917a6581-31ec-4abc-9543-652c8295144f-os-release\") pod \"multus-additional-cni-plugins-xcmjb\" (UID: \"917a6581-31ec-4abc-9543-652c8295144f\") " pod="openshift-multus/multus-additional-cni-plugins-xcmjb" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.280892 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-cni-binary-copy\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.280919 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-multus-socket-dir-parent\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.281161 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88"} Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.281189 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e0ad4320510253e769ca1f8fc0bedfad6ff5bb4f27437b7e1a4dc9db9bada778"} Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.282559 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.283217 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.285233 5016 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00" exitCode=255 Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.285278 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00"} Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.285363 5016 scope.go:117] "RemoveContainer" containerID="ea6711eda1cb5ec3df8165e408d1f2109137c22c462c889e4abc01260ea44774" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.285849 5016 scope.go:117] "RemoveContainer" containerID="b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00" Oct 11 07:40:36 crc kubenswrapper[5016]: E1011 07:40:36.286067 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.286697 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.296319 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.308193 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.320016 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6711eda1cb5ec3df8165e408d1f2109137c22c462c889e4abc01260ea44774\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:29Z\\\",\\\"message\\\":\\\"W1011 07:40:17.672177 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1011 
07:40:17.672734 1 crypto.go:601] Generating new CA for check-endpoints-signer@1760168417 cert, and key in /tmp/serving-cert-3024763147/serving-signer.crt, /tmp/serving-cert-3024763147/serving-signer.key\\\\nI1011 07:40:18.306960 1 observer_polling.go:159] Starting file observer\\\\nW1011 07:40:18.309635 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1011 07:40:18.309947 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:18.311046 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3024763147/tls.crt::/tmp/serving-cert-3024763147/tls.key\\\\\\\"\\\\nF1011 07:40:29.138922 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 
07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.330375 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.338933 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.346903 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.354376 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.354447 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.354457 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.354471 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.354479 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:36Z","lastTransitionTime":"2025-10-11T07:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.356495 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.363816 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.370033 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.378265 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 
07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381133 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-host-var-lib-kubelet\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381181 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0633ed26-7b6a-4a20-92ba-569891d9faff-mcd-auth-proxy-config\") pod \"machine-config-daemon-49bvc\" (UID: \"0633ed26-7b6a-4a20-92ba-569891d9faff\") " pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381199 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-cnibin\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381217 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/917a6581-31ec-4abc-9543-652c8295144f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xcmjb\" (UID: \"917a6581-31ec-4abc-9543-652c8295144f\") " pod="openshift-multus/multus-additional-cni-plugins-xcmjb" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381232 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0633ed26-7b6a-4a20-92ba-569891d9faff-rootfs\") pod \"machine-config-daemon-49bvc\" (UID: \"0633ed26-7b6a-4a20-92ba-569891d9faff\") " pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381247 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-cni-binary-copy\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381295 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-multus-socket-dir-parent\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381285 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-host-var-lib-kubelet\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381325 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-hostroot\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381324 5016 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0633ed26-7b6a-4a20-92ba-569891d9faff-rootfs\") pod \"machine-config-daemon-49bvc\" (UID: \"0633ed26-7b6a-4a20-92ba-569891d9faff\") " pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381342 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-etc-kubernetes\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381374 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-etc-kubernetes\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381392 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/917a6581-31ec-4abc-9543-652c8295144f-os-release\") pod \"multus-additional-cni-plugins-xcmjb\" (UID: \"917a6581-31ec-4abc-9543-652c8295144f\") " pod="openshift-multus/multus-additional-cni-plugins-xcmjb" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381406 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-hostroot\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381405 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-multus-socket-dir-parent\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381413 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/917a6581-31ec-4abc-9543-652c8295144f-cnibin\") pod \"multus-additional-cni-plugins-xcmjb\" (UID: \"917a6581-31ec-4abc-9543-652c8295144f\") " pod="openshift-multus/multus-additional-cni-plugins-xcmjb" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381456 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/917a6581-31ec-4abc-9543-652c8295144f-system-cni-dir\") pod \"multus-additional-cni-plugins-xcmjb\" (UID: \"917a6581-31ec-4abc-9543-652c8295144f\") " pod="openshift-multus/multus-additional-cni-plugins-xcmjb" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381462 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/917a6581-31ec-4abc-9543-652c8295144f-os-release\") pod \"multus-additional-cni-plugins-xcmjb\" (UID: \"917a6581-31ec-4abc-9543-652c8295144f\") " pod="openshift-multus/multus-additional-cni-plugins-xcmjb" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381484 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/917a6581-31ec-4abc-9543-652c8295144f-cnibin\") pod \"multus-additional-cni-plugins-xcmjb\" (UID: \"917a6581-31ec-4abc-9543-652c8295144f\") " pod="openshift-multus/multus-additional-cni-plugins-xcmjb" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381492 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0633ed26-7b6a-4a20-92ba-569891d9faff-proxy-tls\") pod \"machine-config-daemon-49bvc\" (UID: \"0633ed26-7b6a-4a20-92ba-569891d9faff\") " pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381506 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/917a6581-31ec-4abc-9543-652c8295144f-system-cni-dir\") pod \"multus-additional-cni-plugins-xcmjb\" (UID: \"917a6581-31ec-4abc-9543-652c8295144f\") " pod="openshift-multus/multus-additional-cni-plugins-xcmjb" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381529 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m72h\" (UniqueName: \"kubernetes.io/projected/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-kube-api-access-5m72h\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381543 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-host-run-multus-certs\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381561 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-host-var-lib-cni-multus\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381576 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2khg4\" (UniqueName: \"kubernetes.io/projected/917a6581-31ec-4abc-9543-652c8295144f-kube-api-access-2khg4\") pod \"multus-additional-cni-plugins-xcmjb\" (UID: \"917a6581-31ec-4abc-9543-652c8295144f\") " pod="openshift-multus/multus-additional-cni-plugins-xcmjb" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381591 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-os-release\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381607 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-host-run-k8s-cni-cncf-io\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381643 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-multus-cni-dir\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381689 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-system-cni-dir\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381705 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-host-run-netns\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381722 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-multus-conf-dir\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381741 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/917a6581-31ec-4abc-9543-652c8295144f-cni-binary-copy\") pod \"multus-additional-cni-plugins-xcmjb\" (UID: \"917a6581-31ec-4abc-9543-652c8295144f\") " pod="openshift-multus/multus-additional-cni-plugins-xcmjb" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381758 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3bf81fbe-9695-4fc9-a46d-d7700f56e894-hosts-file\") pod \"node-resolver-d7sp7\" (UID: \"3bf81fbe-9695-4fc9-a46d-d7700f56e894\") " pod="openshift-dns/node-resolver-d7sp7" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381787 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-host-var-lib-cni-bin\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381801 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-multus-daemon-config\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381816 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/917a6581-31ec-4abc-9543-652c8295144f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xcmjb\" (UID: \"917a6581-31ec-4abc-9543-652c8295144f\") " pod="openshift-multus/multus-additional-cni-plugins-xcmjb" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381833 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm9zd\" (UniqueName: \"kubernetes.io/projected/0633ed26-7b6a-4a20-92ba-569891d9faff-kube-api-access-rm9zd\") pod 
\"machine-config-daemon-49bvc\" (UID: \"0633ed26-7b6a-4a20-92ba-569891d9faff\") " pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381848 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x962l\" (UniqueName: \"kubernetes.io/projected/3bf81fbe-9695-4fc9-a46d-d7700f56e894-kube-api-access-x962l\") pod \"node-resolver-d7sp7\" (UID: \"3bf81fbe-9695-4fc9-a46d-d7700f56e894\") " pod="openshift-dns/node-resolver-d7sp7" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381942 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/917a6581-31ec-4abc-9543-652c8295144f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xcmjb\" (UID: \"917a6581-31ec-4abc-9543-652c8295144f\") " pod="openshift-multus/multus-additional-cni-plugins-xcmjb" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.381986 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-host-var-lib-cni-bin\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.382014 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-system-cni-dir\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.382004 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-cnibin\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.382029 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-host-run-netns\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.382050 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-host-run-multus-certs\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.382118 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-multus-cni-dir\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.382138 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-host-run-k8s-cni-cncf-io\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 
07:40:36.382209 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-cni-binary-copy\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.382380 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-os-release\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.382482 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-multus-daemon-config\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.382510 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-host-var-lib-cni-multus\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.382538 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-multus-conf-dir\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.382689 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/917a6581-31ec-4abc-9543-652c8295144f-cni-binary-copy\") pod \"multus-additional-cni-plugins-xcmjb\" (UID: \"917a6581-31ec-4abc-9543-652c8295144f\") " pod="openshift-multus/multus-additional-cni-plugins-xcmjb" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.382827 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/917a6581-31ec-4abc-9543-652c8295144f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xcmjb\" (UID: \"917a6581-31ec-4abc-9543-652c8295144f\") " pod="openshift-multus/multus-additional-cni-plugins-xcmjb" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.384885 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0633ed26-7b6a-4a20-92ba-569891d9faff-proxy-tls\") pod \"machine-config-daemon-49bvc\" (UID: \"0633ed26-7b6a-4a20-92ba-569891d9faff\") " pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.388990 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.396369 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2khg4\" (UniqueName: \"kubernetes.io/projected/917a6581-31ec-4abc-9543-652c8295144f-kube-api-access-2khg4\") pod \"multus-additional-cni-plugins-xcmjb\" (UID: \"917a6581-31ec-4abc-9543-652c8295144f\") " pod="openshift-multus/multus-additional-cni-plugins-xcmjb" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.398540 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.399216 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m72h\" (UniqueName: \"kubernetes.io/projected/48e55d9a-f690-40ae-ba16-e91c4d9d3a95-kube-api-access-5m72h\") pod \"multus-lbbb2\" (UID: \"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\") " pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.456634 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:36 crc 
kubenswrapper[5016]: I1011 07:40:36.456688 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.456702 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.456719 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.456731 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:36Z","lastTransitionTime":"2025-10-11T07:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.482159 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3bf81fbe-9695-4fc9-a46d-d7700f56e894-hosts-file\") pod \"node-resolver-d7sp7\" (UID: \"3bf81fbe-9695-4fc9-a46d-d7700f56e894\") " pod="openshift-dns/node-resolver-d7sp7" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.482208 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm9zd\" (UniqueName: \"kubernetes.io/projected/0633ed26-7b6a-4a20-92ba-569891d9faff-kube-api-access-rm9zd\") pod \"machine-config-daemon-49bvc\" (UID: \"0633ed26-7b6a-4a20-92ba-569891d9faff\") " pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.482249 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x962l\" (UniqueName: \"kubernetes.io/projected/3bf81fbe-9695-4fc9-a46d-d7700f56e894-kube-api-access-x962l\") pod \"node-resolver-d7sp7\" (UID: \"3bf81fbe-9695-4fc9-a46d-d7700f56e894\") " pod="openshift-dns/node-resolver-d7sp7" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.482273 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0633ed26-7b6a-4a20-92ba-569891d9faff-mcd-auth-proxy-config\") pod \"machine-config-daemon-49bvc\" (UID: \"0633ed26-7b6a-4a20-92ba-569891d9faff\") " pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.482328 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/3bf81fbe-9695-4fc9-a46d-d7700f56e894-hosts-file\") pod \"node-resolver-d7sp7\" (UID: \"3bf81fbe-9695-4fc9-a46d-d7700f56e894\") " pod="openshift-dns/node-resolver-d7sp7" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.483048 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0633ed26-7b6a-4a20-92ba-569891d9faff-mcd-auth-proxy-config\") pod \"machine-config-daemon-49bvc\" (UID: \"0633ed26-7b6a-4a20-92ba-569891d9faff\") " pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.501468 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm9zd\" (UniqueName: 
\"kubernetes.io/projected/0633ed26-7b6a-4a20-92ba-569891d9faff-kube-api-access-rm9zd\") pod \"machine-config-daemon-49bvc\" (UID: \"0633ed26-7b6a-4a20-92ba-569891d9faff\") " pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.501959 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x962l\" (UniqueName: \"kubernetes.io/projected/3bf81fbe-9695-4fc9-a46d-d7700f56e894-kube-api-access-x962l\") pod \"node-resolver-d7sp7\" (UID: \"3bf81fbe-9695-4fc9-a46d-d7700f56e894\") " pod="openshift-dns/node-resolver-d7sp7" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.528681 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lbbb2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.535769 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.542997 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-d7sp7" Oct 11 07:40:36 crc kubenswrapper[5016]: W1011 07:40:36.544833 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48e55d9a_f690_40ae_ba16_e91c4d9d3a95.slice/crio-0a7f4acffe410836070f5d78c8a6887ad732284a471ffc2eedf3ed86212ede29 WatchSource:0}: Error finding container 0a7f4acffe410836070f5d78c8a6887ad732284a471ffc2eedf3ed86212ede29: Status 404 returned error can't find the container with id 0a7f4acffe410836070f5d78c8a6887ad732284a471ffc2eedf3ed86212ede29 Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.546496 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-79nv2"] Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.548999 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.551050 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.551761 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.552598 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.552868 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.553003 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.553092 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.554230 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.554551 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Oct 11 07:40:36 crc kubenswrapper[5016]: W1011 07:40:36.559160 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bf81fbe_9695_4fc9_a46d_d7700f56e894.slice/crio-884c4baca140932cb01e8afec2a0530cf92e1eebf32d28792079f4e8e964bfed WatchSource:0}: Error finding container 884c4baca140932cb01e8afec2a0530cf92e1eebf32d28792079f4e8e964bfed: Status 404 returned error can't find the container with id 884c4baca140932cb01e8afec2a0530cf92e1eebf32d28792079f4e8e964bfed Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.559341 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.559371 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.559379 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.559392 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.559401 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:36Z","lastTransitionTime":"2025-10-11T07:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.563293 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6711eda1cb5ec3df8165e408d1f2109137c22c462c889e4abc01260ea44774\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:29Z\\\",\\\"message\\\":\\\"W1011 07:40:17.672177 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1011 07:40:17.672734 1 crypto.go:601] Generating new CA for check-endpoints-signer@1760168417 cert, and key in /tmp/serving-cert-3024763147/serving-signer.crt, /tmp/serving-cert-3024763147/serving-signer.key\\\\nI1011 07:40:18.306960 1 observer_polling.go:159] Starting file observer\\\\nW1011 07:40:18.309635 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1011 07:40:18.309947 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:18.311046 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3024763147/tls.crt::/tmp/serving-cert-3024763147/tls.key\\\\\\\"\\\\nF1011 07:40:29.138922 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] 
\\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: W1011 07:40:36.570786 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0633ed26_7b6a_4a20_92ba_569891d9faff.slice/crio-3d0a63400b900e3593f48ac8bbd7ea633be02e2f85713e37551e11b53c687a85 WatchSource:0}: Error finding container 3d0a63400b900e3593f48ac8bbd7ea633be02e2f85713e37551e11b53c687a85: Status 404 returned error can't find the container with id 3d0a63400b900e3593f48ac8bbd7ea633be02e2f85713e37551e11b53c687a85 Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.572212 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.582520 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.582805 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-slash\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.582880 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-run-ovn\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.582955 5016 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.583022 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/68e9f942-5043-4fc3-9133-b608e8cd4ac0-ovnkube-config\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.583091 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-run-netns\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.583180 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-run-ovn-kubernetes\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.583248 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-run-systemd\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.583462 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/68e9f942-5043-4fc3-9133-b608e8cd4ac0-ovn-node-metrics-cert\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.583555 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-etc-openvswitch\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.583714 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-var-lib-openvswitch\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.583782 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-run-openvswitch\") pod \"ovnkube-node-79nv2\" (UID: 
\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.583854 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-log-socket\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.583963 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-systemd-units\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.584038 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-cni-netd\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.584112 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-cni-bin\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.584183 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/68e9f942-5043-4fc3-9133-b608e8cd4ac0-env-overrides\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.584255 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/68e9f942-5043-4fc3-9133-b608e8cd4ac0-ovnkube-script-lib\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.584323 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg9zg\" (UniqueName: \"kubernetes.io/projected/68e9f942-5043-4fc3-9133-b608e8cd4ac0-kube-api-access-sg9zg\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.584393 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-kubelet\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.584490 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-node-log\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.594917 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.607456 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 
11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.627613 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.648939 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.659478 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.662034 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.662063 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.662072 5016 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.662086 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.662096 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:36Z","lastTransitionTime":"2025-10-11T07:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.667453 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.676831 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.685274 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.685400 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-cni-bin\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.685424 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/68e9f942-5043-4fc3-9133-b608e8cd4ac0-env-overrides\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.685440 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/68e9f942-5043-4fc3-9133-b608e8cd4ac0-ovnkube-script-lib\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.685455 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg9zg\" (UniqueName: \"kubernetes.io/projected/68e9f942-5043-4fc3-9133-b608e8cd4ac0-kube-api-access-sg9zg\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.685490 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-kubelet\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.685506 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-node-log\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.685522 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.685540 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.685575 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-slash\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.685589 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-run-ovn\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.685604 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/68e9f942-5043-4fc3-9133-b608e8cd4ac0-ovnkube-config\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.685637 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.685664 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-run-netns\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.685681 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.685698 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.685738 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-run-ovn-kubernetes\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.685757 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-run-systemd\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.685771 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/68e9f942-5043-4fc3-9133-b608e8cd4ac0-ovn-node-metrics-cert\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.685809 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-etc-openvswitch\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.685825 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-var-lib-openvswitch\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.685840 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-run-openvswitch\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.685854 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-log-socket\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.685870 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-systemd-units\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.685885 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-cni-netd\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: E1011 07:40:36.685884 5016 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 11 
07:40:36 crc kubenswrapper[5016]: E1011 07:40:36.685946 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:40:38.685932173 +0000 UTC m=+26.586388119 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.685946 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-slash\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.685964 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-run-ovn\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.685984 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-cni-bin\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.686012 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.685760 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.686402 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-run-netns\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.686445 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/68e9f942-5043-4fc3-9133-b608e8cd4ac0-env-overrides\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.686478 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-systemd-units\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: E1011 07:40:36.686510 5016 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 11 07:40:36 crc kubenswrapper[5016]: E1011 07:40:36.686533 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 07:40:36 crc kubenswrapper[5016]: E1011 07:40:36.686541 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-11 07:40:38.68653396 +0000 UTC m=+26.586989906 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 11 07:40:36 crc kubenswrapper[5016]: E1011 07:40:36.686549 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 07:40:36 crc kubenswrapper[5016]: E1011 07:40:36.686560 5016 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:40:36 crc kubenswrapper[5016]: E1011 07:40:36.686625 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-11 07:40:38.686611422 +0000 UTC m=+26.587067368 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.686646 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-cni-netd\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.686702 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-node-log\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.686764 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-run-ovn-kubernetes\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.686786 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-run-systemd\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: E1011 07:40:36.687180 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-10-11 07:40:38.687168358 +0000 UTC m=+26.587624304 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.687222 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-kubelet\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.687322 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-run-openvswitch\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.687372 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-var-lib-openvswitch\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: E1011 07:40:36.687478 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.687442 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-log-socket\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: E1011 07:40:36.687497 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.687566 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-etc-openvswitch\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: E1011 07:40:36.687574 5016 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:40:36 crc kubenswrapper[5016]: E1011 07:40:36.687672 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-10-11 07:40:38.687638362 +0000 UTC m=+26.588094428 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.688105 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/68e9f942-5043-4fc3-9133-b608e8cd4ac0-ovnkube-script-lib\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.688866 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/68e9f942-5043-4fc3-9133-b608e8cd4ac0-ovnkube-config\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.692328 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/68e9f942-5043-4fc3-9133-b608e8cd4ac0-ovn-node-metrics-cert\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.695163 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.703955 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg9zg\" (UniqueName: \"kubernetes.io/projected/68e9f942-5043-4fc3-9133-b608e8cd4ac0-kube-api-access-sg9zg\") pod \"ovnkube-node-79nv2\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.763894 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.763935 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.763945 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.763958 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.763968 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:36Z","lastTransitionTime":"2025-10-11T07:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.863019 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.866162 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.866226 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.866239 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.866255 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.866264 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:36Z","lastTransitionTime":"2025-10-11T07:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:36 crc kubenswrapper[5016]: W1011 07:40:36.873726 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68e9f942_5043_4fc3_9133_b608e8cd4ac0.slice/crio-1f6d475bbbabab2501dc990b8ffde0a4dc42e20e3ea6c299608cfe052b770f83 WatchSource:0}: Error finding container 1f6d475bbbabab2501dc990b8ffde0a4dc42e20e3ea6c299608cfe052b770f83: Status 404 returned error can't find the container with id 1f6d475bbbabab2501dc990b8ffde0a4dc42e20e3ea6c299608cfe052b770f83 Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.969125 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.969175 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.969185 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.969199 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:36 crc kubenswrapper[5016]: I1011 07:40:36.969208 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:36Z","lastTransitionTime":"2025-10-11T07:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.077413 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.077465 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.077475 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.077492 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.077501 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:37Z","lastTransitionTime":"2025-10-11T07:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.132751 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.132793 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:40:37 crc kubenswrapper[5016]: E1011 07:40:37.132917 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.132751 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:40:37 crc kubenswrapper[5016]: E1011 07:40:37.133028 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:40:37 crc kubenswrapper[5016]: E1011 07:40:37.133093 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.137041 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.137681 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.139128 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.139900 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.141102 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.141706 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.142394 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.143567 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.144324 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.145458 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.146051 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.147144 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.147621 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.148255 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.149176 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.149683 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.150760 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.151158 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.151707 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.152780 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.153399 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.154364 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.154924 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.155906 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.156296 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.157008 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.158045 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.158491 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.159570 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.160077 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.161054 5016 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.161181 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.162817 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.164004 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.164574 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.166030 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.166959 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.167635 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.168386 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.169859 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.170337 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.170964 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.171601 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.172194 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.172636 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.173211 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.173763 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.174450 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.174973 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.175405 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.175915 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.176621 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.177212 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.177682 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.179792 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.179839 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.179850 5016 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.179866 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.179878 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:37Z","lastTransitionTime":"2025-10-11T07:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.282353 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.282390 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.282398 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.282412 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.282423 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:37Z","lastTransitionTime":"2025-10-11T07:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.288422 5016 generic.go:334] "Generic (PLEG): container finished" podID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerID="3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38" exitCode=0 Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.288482 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" event={"ID":"68e9f942-5043-4fc3-9133-b608e8cd4ac0","Type":"ContainerDied","Data":"3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38"} Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.288504 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" event={"ID":"68e9f942-5043-4fc3-9133-b608e8cd4ac0","Type":"ContainerStarted","Data":"1f6d475bbbabab2501dc990b8ffde0a4dc42e20e3ea6c299608cfe052b770f83"} Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.291036 5016 generic.go:334] "Generic (PLEG): container finished" podID="917a6581-31ec-4abc-9543-652c8295144f" containerID="eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d" exitCode=0 Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.291129 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" event={"ID":"917a6581-31ec-4abc-9543-652c8295144f","Type":"ContainerDied","Data":"eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d"} Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.291195 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" event={"ID":"917a6581-31ec-4abc-9543-652c8295144f","Type":"ContainerStarted","Data":"7152b39416be797b99f72cea0688c4370c17f17f7f62b71c7de1f7a4214217da"} Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.293736 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333"} Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.293790 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a"} Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.293806 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"3d0a63400b900e3593f48ac8bbd7ea633be02e2f85713e37551e11b53c687a85"} Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.295015 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lbbb2" event={"ID":"48e55d9a-f690-40ae-ba16-e91c4d9d3a95","Type":"ContainerStarted","Data":"39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428"} Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.295050 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lbbb2" event={"ID":"48e55d9a-f690-40ae-ba16-e91c4d9d3a95","Type":"ContainerStarted","Data":"0a7f4acffe410836070f5d78c8a6887ad732284a471ffc2eedf3ed86212ede29"} Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.296035 5016 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-d7sp7" event={"ID":"3bf81fbe-9695-4fc9-a46d-d7700f56e894","Type":"ContainerStarted","Data":"6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7"} Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.296074 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-d7sp7" event={"ID":"3bf81fbe-9695-4fc9-a46d-d7700f56e894","Type":"ContainerStarted","Data":"884c4baca140932cb01e8afec2a0530cf92e1eebf32d28792079f4e8e964bfed"} Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.297980 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.300996 5016 scope.go:117] "RemoveContainer" containerID="b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00" Oct 11 07:40:37 crc kubenswrapper[5016]: E1011 07:40:37.301209 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.302513 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:37Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.317108 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:37Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.329153 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:37Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.351919 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:37Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.366046 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-11T07:40:37Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.377535 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:37Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.385963 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.385996 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.386007 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.386022 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.386032 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:37Z","lastTransitionTime":"2025-10-11T07:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.393800 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:37Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.408145 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:37Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.419902 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:37Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.433437 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6711eda1cb5ec3df8165e408d1f2109137c22c462c889e4abc01260ea44774\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:29Z\\\",\\\"message\\\":\\\"W1011 07:40:17.672177 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1011 
07:40:17.672734 1 crypto.go:601] Generating new CA for check-endpoints-signer@1760168417 cert, and key in /tmp/serving-cert-3024763147/serving-signer.crt, /tmp/serving-cert-3024763147/serving-signer.key\\\\nI1011 07:40:18.306960 1 observer_polling.go:159] Starting file observer\\\\nW1011 07:40:18.309635 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1011 07:40:18.309947 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:18.311046 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3024763147/tls.crt::/tmp/serving-cert-3024763147/tls.key\\\\\\\"\\\\nF1011 07:40:29.138922 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 
07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:37Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.444638 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:37Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.455258 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:37Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.465091 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:37Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.468587 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.477539 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:37Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:37 crc 
kubenswrapper[5016]: I1011 07:40:37.486546 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-10-11T07:40:37Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.488021 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.488066 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.488080 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.488098 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.488112 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:37Z","lastTransitionTime":"2025-10-11T07:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.505575 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:37Z 
is after 2025-08-24T17:21:41Z" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.519367 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:37Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.530171 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:37Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.542410 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:37Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.552338 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:37Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.563256 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:37Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.575270 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:37Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.585086 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:37Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.590731 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.590766 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.590776 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.590793 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.590802 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:37Z","lastTransitionTime":"2025-10-11T07:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.599027 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:37Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.692176 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.692225 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.692239 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.692258 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.692272 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:37Z","lastTransitionTime":"2025-10-11T07:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.793999 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.794040 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.794053 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.794070 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.794082 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:37Z","lastTransitionTime":"2025-10-11T07:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.896240 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.896289 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.896303 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.896319 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.896333 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:37Z","lastTransitionTime":"2025-10-11T07:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.998234 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.998280 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.998290 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.998307 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:37 crc kubenswrapper[5016]: I1011 07:40:37.998319 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:37Z","lastTransitionTime":"2025-10-11T07:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.100293 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.100329 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.100339 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.100353 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.100363 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:38Z","lastTransitionTime":"2025-10-11T07:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.205518 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.205547 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.205554 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.205569 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.205578 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:38Z","lastTransitionTime":"2025-10-11T07:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.304691 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" event={"ID":"68e9f942-5043-4fc3-9133-b608e8cd4ac0","Type":"ContainerStarted","Data":"e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c"} Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.304734 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" event={"ID":"68e9f942-5043-4fc3-9133-b608e8cd4ac0","Type":"ContainerStarted","Data":"069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764"} Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.304744 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" event={"ID":"68e9f942-5043-4fc3-9133-b608e8cd4ac0","Type":"ContainerStarted","Data":"bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3"} Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.304755 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" event={"ID":"68e9f942-5043-4fc3-9133-b608e8cd4ac0","Type":"ContainerStarted","Data":"df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d"} Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.304764 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" event={"ID":"68e9f942-5043-4fc3-9133-b608e8cd4ac0","Type":"ContainerStarted","Data":"b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304"} Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.305995 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" event={"ID":"917a6581-31ec-4abc-9543-652c8295144f","Type":"ContainerStarted","Data":"3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df"} Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.306901 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.306939 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 
07:40:38.306950 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.306964 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.306974 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:38Z","lastTransitionTime":"2025-10-11T07:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.307932 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c"} Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.308589 5016 scope.go:117] "RemoveContainer" containerID="b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00" Oct 11 07:40:38 crc kubenswrapper[5016]: E1011 07:40:38.308768 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.325995 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:38Z 
is after 2025-08-24T17:21:41Z" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.342252 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:38Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.357911 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\
\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\
\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:38Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.367753 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\
\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:38Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.378687 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:38Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.390701 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:38Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.401541 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:38Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.408686 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 11 07:40:38 
crc kubenswrapper[5016]: I1011 07:40:38.409334 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.409360 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.409373 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.409388 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.409399 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:38Z","lastTransitionTime":"2025-10-11T07:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.412637 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.415538 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:38Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.417542 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.429486 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:38Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.442198 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:38Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.454362 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/v
ar/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:38Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.466455 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:38Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.478358 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:38Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.490587 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:38Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.501715 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:38Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.511392 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.511441 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.511452 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.511470 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.511482 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:38Z","lastTransitionTime":"2025-10-11T07:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.513263 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:38Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.526511 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\
\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\
\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:38Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.535673 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\
\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:38Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.555841 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:38Z 
is after 2025-08-24T17:21:41Z" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.566587 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:38Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.575433 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:38Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.589689 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:38Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.604451 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:38Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.613248 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.613279 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.613289 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.613303 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.613312 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:38Z","lastTransitionTime":"2025-10-11T07:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.616699 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:38Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.627407 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:38Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.703981 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.704111 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.704147 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:40:38 crc kubenswrapper[5016]: E1011 07:40:38.704175 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:40:42.70414273 +0000 UTC m=+30.604598736 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.704229 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:40:38 crc kubenswrapper[5016]: E1011 07:40:38.704255 5016 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 11 07:40:38 crc kubenswrapper[5016]: E1011 07:40:38.704270 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 07:40:38 crc kubenswrapper[5016]: E1011 07:40:38.704292 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 07:40:38 crc kubenswrapper[5016]: E1011 07:40:38.704304 5016 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:40:38 crc kubenswrapper[5016]: E1011 07:40:38.704316 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-11 07:40:42.704299335 +0000 UTC m=+30.604755281 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 11 07:40:38 crc kubenswrapper[5016]: E1011 07:40:38.704336 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-11 07:40:42.704327965 +0000 UTC m=+30.604784051 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.704276 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:40:38 crc kubenswrapper[5016]: E1011 07:40:38.704343 5016 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 11 07:40:38 crc kubenswrapper[5016]: E1011 07:40:38.704362 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 07:40:38 crc kubenswrapper[5016]: E1011 07:40:38.704373 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 07:40:38 crc kubenswrapper[5016]: E1011 07:40:38.704379 5016 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:40:38 crc kubenswrapper[5016]: E1011 07:40:38.704386 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-11 07:40:42.704376747 +0000 UTC m=+30.604832793 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 11 07:40:38 crc kubenswrapper[5016]: E1011 07:40:38.704403 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-11 07:40:42.704394987 +0000 UTC m=+30.604850933 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.714851 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.714884 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.714893 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.714905 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.714915 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:38Z","lastTransitionTime":"2025-10-11T07:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.817398 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.817772 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.817792 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.817816 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.817833 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:38Z","lastTransitionTime":"2025-10-11T07:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.920206 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.920249 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.920261 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.920277 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:38 crc kubenswrapper[5016]: I1011 07:40:38.920286 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:38Z","lastTransitionTime":"2025-10-11T07:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.026466 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.026496 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.026508 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.026525 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.026536 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:39Z","lastTransitionTime":"2025-10-11T07:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.129608 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.129689 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.129703 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.129721 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.129733 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:39Z","lastTransitionTime":"2025-10-11T07:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.132892 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.132931 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:40:39 crc kubenswrapper[5016]: E1011 07:40:39.132997 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.133107 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:40:39 crc kubenswrapper[5016]: E1011 07:40:39.133397 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:40:39 crc kubenswrapper[5016]: E1011 07:40:39.133479 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.232926 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.232984 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.232999 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.233022 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.233037 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:39Z","lastTransitionTime":"2025-10-11T07:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.316543 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" event={"ID":"68e9f942-5043-4fc3-9133-b608e8cd4ac0","Type":"ContainerStarted","Data":"b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d"} Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.319527 5016 generic.go:334] "Generic (PLEG): container finished" podID="917a6581-31ec-4abc-9543-652c8295144f" containerID="3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df" exitCode=0 Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.319588 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" event={"ID":"917a6581-31ec-4abc-9543-652c8295144f","Type":"ContainerDied","Data":"3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df"} Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.334749 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.334785 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.334794 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.334806 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.334815 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:39Z","lastTransitionTime":"2025-10-11T07:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.343462 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:39Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.357008 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:39Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.378468 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:39Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.397440 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:39Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.425457 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:39Z 
is after 2025-08-24T17:21:41Z" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.436523 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.436571 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.436580 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.436594 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.436604 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:39Z","lastTransitionTime":"2025-10-11T07:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.445059 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:39Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.462054 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-
11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:39Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.478502 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:39Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.492069 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:39Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.507858 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:39Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.518616 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:39Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.536743 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:39Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.539449 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.539505 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.539522 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.539553 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.539571 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:39Z","lastTransitionTime":"2025-10-11T07:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.549627 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:39Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.642079 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.642116 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.642124 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.642138 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.642148 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:39Z","lastTransitionTime":"2025-10-11T07:40:39Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.744535 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.744583 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.744598 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.744617 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.744632 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:39Z","lastTransitionTime":"2025-10-11T07:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.827211 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-jk9cl"] Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.827760 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-jk9cl" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.830418 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.831293 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.832905 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.846452 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.846512 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.846526 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.846547 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.846559 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:39Z","lastTransitionTime":"2025-10-11T07:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.850263 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:39Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.852235 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.871066 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:39Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.890378 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:39Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.909635 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:39Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.913219 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km9bc\" (UniqueName: \"kubernetes.io/projected/2a66833c-ffa6-4af6-9e15-90e24db9a290-kube-api-access-km9bc\") pod \"node-ca-jk9cl\" (UID: \"2a66833c-ffa6-4af6-9e15-90e24db9a290\") " pod="openshift-image-registry/node-ca-jk9cl" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.913349 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" 
(UniqueName: \"kubernetes.io/configmap/2a66833c-ffa6-4af6-9e15-90e24db9a290-serviceca\") pod \"node-ca-jk9cl\" (UID: \"2a66833c-ffa6-4af6-9e15-90e24db9a290\") " pod="openshift-image-registry/node-ca-jk9cl" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.913414 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2a66833c-ffa6-4af6-9e15-90e24db9a290-host\") pod \"node-ca-jk9cl\" (UID: \"2a66833c-ffa6-4af6-9e15-90e24db9a290\") " pod="openshift-image-registry/node-ca-jk9cl" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.928859 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:39Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.946917 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-
11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:39Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.948868 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.948906 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.948918 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.948937 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.948949 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:39Z","lastTransitionTime":"2025-10-11T07:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.958379 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:39Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.974059 5016 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:39Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:39 crc kubenswrapper[5016]: I1011 07:40:39.988276 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:39Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.000487 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:39Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.011051 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:40Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.013799 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2a66833c-ffa6-4af6-9e15-90e24db9a290-host\") pod \"node-ca-jk9cl\" (UID: \"2a66833c-ffa6-4af6-9e15-90e24db9a290\") " pod="openshift-image-registry/node-ca-jk9cl" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.013856 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km9bc\" (UniqueName: \"kubernetes.io/projected/2a66833c-ffa6-4af6-9e15-90e24db9a290-kube-api-access-km9bc\") pod \"node-ca-jk9cl\" (UID: \"2a66833c-ffa6-4af6-9e15-90e24db9a290\") " pod="openshift-image-registry/node-ca-jk9cl" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.013909 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2a66833c-ffa6-4af6-9e15-90e24db9a290-serviceca\") pod \"node-ca-jk9cl\" (UID: \"2a66833c-ffa6-4af6-9e15-90e24db9a290\") " pod="openshift-image-registry/node-ca-jk9cl" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.013939 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2a66833c-ffa6-4af6-9e15-90e24db9a290-host\") pod \"node-ca-jk9cl\" (UID: \"2a66833c-ffa6-4af6-9e15-90e24db9a290\") " pod="openshift-image-registry/node-ca-jk9cl" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.024024 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:40Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.034383 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:40Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.035711 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km9bc\" (UniqueName: \"kubernetes.io/projected/2a66833c-ffa6-4af6-9e15-90e24db9a290-kube-api-access-km9bc\") pod \"node-ca-jk9cl\" (UID: \"2a66833c-ffa6-4af6-9e15-90e24db9a290\") " pod="openshift-image-registry/node-ca-jk9cl" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.039415 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2a66833c-ffa6-4af6-9e15-90e24db9a290-serviceca\") pod \"node-ca-jk9cl\" (UID: \"2a66833c-ffa6-4af6-9e15-90e24db9a290\") " pod="openshift-image-registry/node-ca-jk9cl" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.046876 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:40Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.051820 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.051916 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.051941 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.052400 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.052689 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:40Z","lastTransitionTime":"2025-10-11T07:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.146281 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-jk9cl" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.154943 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.154985 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.154998 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.155013 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.155025 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:40Z","lastTransitionTime":"2025-10-11T07:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:40 crc kubenswrapper[5016]: W1011 07:40:40.164297 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a66833c_ffa6_4af6_9e15_90e24db9a290.slice/crio-399a1b7b276a7b29f59c775f73de4e7ceb1849adbedc0755d0264cce5f0edfa7 WatchSource:0}: Error finding container 399a1b7b276a7b29f59c775f73de4e7ceb1849adbedc0755d0264cce5f0edfa7: Status 404 returned error can't find the container with id 399a1b7b276a7b29f59c775f73de4e7ceb1849adbedc0755d0264cce5f0edfa7 Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.258630 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.258994 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.259015 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.259044 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.259068 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:40Z","lastTransitionTime":"2025-10-11T07:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.323094 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jk9cl" event={"ID":"2a66833c-ffa6-4af6-9e15-90e24db9a290","Type":"ContainerStarted","Data":"399a1b7b276a7b29f59c775f73de4e7ceb1849adbedc0755d0264cce5f0edfa7"} Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.325628 5016 generic.go:334] "Generic (PLEG): container finished" podID="917a6581-31ec-4abc-9543-652c8295144f" containerID="dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950" exitCode=0 Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.325676 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" event={"ID":"917a6581-31ec-4abc-9543-652c8295144f","Type":"ContainerDied","Data":"dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950"} Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.338338 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2025-10-11T07:40:40Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.350967 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert
-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:40Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.365181 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:40Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.367170 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.367222 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.367236 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.367250 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.367261 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:40Z","lastTransitionTime":"2025-10-11T07:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.376850 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:40Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.390120 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:40Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.404009 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:40Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.414730 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
5-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:40Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.442847 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:40Z 
is after 2025-08-24T17:21:41Z" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.454397 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:40Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.467810 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:40Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.469325 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.469377 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.469397 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.469425 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.469441 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:40Z","lastTransitionTime":"2025-10-11T07:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.483839 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:40Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.499566 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:40Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.512535 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:40Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.525166 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:40Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.571719 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.571778 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.571795 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.571818 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.571832 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:40Z","lastTransitionTime":"2025-10-11T07:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.675287 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.675346 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.675360 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.675390 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.675616 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:40Z","lastTransitionTime":"2025-10-11T07:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.779396 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.779695 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.779718 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.779744 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.779757 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:40Z","lastTransitionTime":"2025-10-11T07:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.884007 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.884064 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.884083 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.884107 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.884124 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:40Z","lastTransitionTime":"2025-10-11T07:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.986576 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.986704 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.986729 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.986764 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:40 crc kubenswrapper[5016]: I1011 07:40:40.986790 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:40Z","lastTransitionTime":"2025-10-11T07:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.089198 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.089234 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.089243 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.089257 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.089267 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:41Z","lastTransitionTime":"2025-10-11T07:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.133205 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.133280 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:40:41 crc kubenswrapper[5016]: E1011 07:40:41.133318 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.133280 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:40:41 crc kubenswrapper[5016]: E1011 07:40:41.133468 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:40:41 crc kubenswrapper[5016]: E1011 07:40:41.133830 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.191605 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.191636 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.191643 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.191689 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.191704 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:41Z","lastTransitionTime":"2025-10-11T07:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.294276 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.294377 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.294423 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.294463 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.294487 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:41Z","lastTransitionTime":"2025-10-11T07:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.336616 5016 generic.go:334] "Generic (PLEG): container finished" podID="917a6581-31ec-4abc-9543-652c8295144f" containerID="6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6" exitCode=0 Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.336949 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" event={"ID":"917a6581-31ec-4abc-9543-652c8295144f","Type":"ContainerDied","Data":"6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6"} Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.339374 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jk9cl" event={"ID":"2a66833c-ffa6-4af6-9e15-90e24db9a290","Type":"ContainerStarted","Data":"f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985"} Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.345488 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" event={"ID":"68e9f942-5043-4fc3-9133-b608e8cd4ac0","Type":"ContainerStarted","Data":"1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8"} Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.358240 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:41Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.384555 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:41Z 
is after 2025-08-24T17:21:41Z" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.399910 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.399960 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.399980 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.400672 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.400808 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:41Z","lastTransitionTime":"2025-10-11T07:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.405828 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:41Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.425757 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:41Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.447280 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-11T07:40:41Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.465510 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:41Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.482413 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:41Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.495837 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:41Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.503427 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.503462 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.503472 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.503489 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.503497 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:41Z","lastTransitionTime":"2025-10-11T07:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.512669 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:41Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.528631 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:41Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.542084 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:41Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.556344 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:41Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.571285 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:41Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.586425 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:41Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.605054 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:41Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.606438 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.606474 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.606487 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.606538 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.606552 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:41Z","lastTransitionTime":"2025-10-11T07:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.620209 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:41Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.632003 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:41Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.642698 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:41Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.656150 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:41Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.668886 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:41Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.682532 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:41Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.695774 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:41Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.707931 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:41Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.709702 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.709741 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.709753 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.709771 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.709784 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:41Z","lastTransitionTime":"2025-10-11T07:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.718336 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:41Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.732418 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:41Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.752344 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:41Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.782031 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:41Z 
is after 2025-08-24T17:21:41Z" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.803379 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:41Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.811923 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.811966 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.811975 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.811990 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.812000 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:41Z","lastTransitionTime":"2025-10-11T07:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.914319 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.914357 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.914366 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.914379 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:41 crc kubenswrapper[5016]: I1011 07:40:41.914388 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:41Z","lastTransitionTime":"2025-10-11T07:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.017390 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.017428 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.017439 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.017452 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.017463 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:42Z","lastTransitionTime":"2025-10-11T07:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.120513 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.120565 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.120583 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.120643 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.120686 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:42Z","lastTransitionTime":"2025-10-11T07:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.223278 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.223327 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.223343 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.223366 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.223381 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:42Z","lastTransitionTime":"2025-10-11T07:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.326885 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.326958 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.326985 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.327019 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.327043 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:42Z","lastTransitionTime":"2025-10-11T07:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.352044 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" event={"ID":"917a6581-31ec-4abc-9543-652c8295144f","Type":"ContainerStarted","Data":"507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe"} Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.373438 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:42Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.388253 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api
-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:42Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.416500 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:42Z 
is after 2025-08-24T17:21:41Z" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.429726 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.430001 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.430365 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.430989 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.431019 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:42Z","lastTransitionTime":"2025-10-11T07:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.432611 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:42Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.450012 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:42Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.466181 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:42Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.481335 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:42Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.493371 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:42Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.505051 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:42Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.515844 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:42Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.526832 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:42Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.533814 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.534194 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.534288 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.534364 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.534449 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:42Z","lastTransitionTime":"2025-10-11T07:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.537486 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:42Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.550164 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:42Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.562184 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:42Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.637550 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.637617 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.637639 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:42 crc 
kubenswrapper[5016]: I1011 07:40:42.637702 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.637725 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:42Z","lastTransitionTime":"2025-10-11T07:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.740351 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.740403 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.740420 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.740438 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.740448 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:42Z","lastTransitionTime":"2025-10-11T07:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.747040 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.747186 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.747233 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.747274 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.747314 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:40:42 crc kubenswrapper[5016]: E1011 07:40:42.747347 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 07:40:42 crc kubenswrapper[5016]: E1011 07:40:42.747367 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 07:40:42 crc kubenswrapper[5016]: E1011 07:40:42.747379 5016 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:40:42 crc kubenswrapper[5016]: E1011 07:40:42.747438 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:40:50.747384297 +0000 UTC m=+38.647840283 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:40:42 crc kubenswrapper[5016]: E1011 07:40:42.747478 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 07:40:42 crc kubenswrapper[5016]: E1011 07:40:42.747488 5016 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 11 07:40:42 crc kubenswrapper[5016]: E1011 07:40:42.747495 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-11 07:40:50.7474794 +0000 UTC m=+38.647935386 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:40:42 crc kubenswrapper[5016]: E1011 07:40:42.747550 5016 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 11 07:40:42 crc kubenswrapper[5016]: E1011 07:40:42.747577 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-11 07:40:50.747564263 +0000 UTC m=+38.648020249 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 11 07:40:42 crc kubenswrapper[5016]: E1011 07:40:42.747506 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 07:40:42 crc kubenswrapper[5016]: E1011 07:40:42.747628 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-11 07:40:50.747605814 +0000 UTC m=+38.648061800 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 11 07:40:42 crc kubenswrapper[5016]: E1011 07:40:42.747635 5016 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:40:42 crc kubenswrapper[5016]: E1011 07:40:42.747781 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-11 07:40:50.747760678 +0000 UTC m=+38.648216784 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.843195 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.843224 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.843233 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.843245 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.843253 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:42Z","lastTransitionTime":"2025-10-11T07:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.946066 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.946115 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.946127 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.946197 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:42 crc kubenswrapper[5016]: I1011 07:40:42.946212 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:42Z","lastTransitionTime":"2025-10-11T07:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.047666 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.047700 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.047711 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.047726 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.047737 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:43Z","lastTransitionTime":"2025-10-11T07:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.132985 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.133029 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:40:43 crc kubenswrapper[5016]: E1011 07:40:43.133458 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.133151 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:40:43 crc kubenswrapper[5016]: E1011 07:40:43.133462 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:40:43 crc kubenswrapper[5016]: E1011 07:40:43.133706 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.149920 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.149986 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.149998 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.150017 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.150031 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:43Z","lastTransitionTime":"2025-10-11T07:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.150979 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.165300 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-10-11T07:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.179765 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.204371 5016 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.221318 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47
ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.232888 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.245804 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedA
t\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.251560 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.251599 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.251608 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.251622 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.251631 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:43Z","lastTransitionTime":"2025-10-11T07:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.262511 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.275044 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.294481 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.308166 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.319115 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.331189 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:1
6Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.343722 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.354523 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.354559 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.354568 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.354582 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.354592 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:43Z","lastTransitionTime":"2025-10-11T07:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.358974 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" event={"ID":"68e9f942-5043-4fc3-9133-b608e8cd4ac0","Type":"ContainerStarted","Data":"131a07f8dd3701fb9b16ecc053dd954934f9c5c380a9a6d97099ebe1d7c570c8"} Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.359228 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.359375 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.359417 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.364665 5016 generic.go:334] "Generic (PLEG): container finished" podID="917a6581-31ec-4abc-9543-652c8295144f" containerID="507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe" exitCode=0 Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.364702 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" event={"ID":"917a6581-31ec-4abc-9543-652c8295144f","Type":"ContainerDied","Data":"507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe"} Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.376037 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.389063 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.391690 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.391825 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.401967 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.413884 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.424008 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.435932 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.448849 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.456711 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.456743 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.456752 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.456766 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.456776 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:43Z","lastTransitionTime":"2025-10-11T07:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.459449 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.472297 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.482828 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.494183 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.508108 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40
:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.518385 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.535454 5016 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a07f8dd3701fb9b16ecc053dd954934f9c5c380a9a6d97099ebe1d7c570c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuberne
tes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.549377 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.559830 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.560063 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.560081 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.560148 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.560161 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:43Z","lastTransitionTime":"2025-10-11T07:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.561062 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\
\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.578931 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a07f8dd3701fb9b16ecc053dd954934f9c5c3
80a9a6d97099ebe1d7c570c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.593899 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.606335 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.618212 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.630898 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.643647 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.655383 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.662419 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.662451 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.662464 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.662479 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.662490 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:43Z","lastTransitionTime":"2025-10-11T07:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.666711 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.678204 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.688543 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.699177 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.707284 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.765901 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.765947 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.765960 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:43 crc 
kubenswrapper[5016]: I1011 07:40:43.765978 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.765992 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:43Z","lastTransitionTime":"2025-10-11T07:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.868050 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.868093 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.868105 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.868123 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.868134 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:43Z","lastTransitionTime":"2025-10-11T07:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.971373 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.971439 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.971462 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.971493 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:43 crc kubenswrapper[5016]: I1011 07:40:43.971516 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:43Z","lastTransitionTime":"2025-10-11T07:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.074537 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.074602 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.074634 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.074687 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.074709 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:44Z","lastTransitionTime":"2025-10-11T07:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.178156 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.178208 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.178225 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.178248 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.178267 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:44Z","lastTransitionTime":"2025-10-11T07:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.280521 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.280573 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.280588 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.280614 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.280689 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:44Z","lastTransitionTime":"2025-10-11T07:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.375616 5016 generic.go:334] "Generic (PLEG): container finished" podID="917a6581-31ec-4abc-9543-652c8295144f" containerID="b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de" exitCode=0 Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.375707 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" event={"ID":"917a6581-31ec-4abc-9543-652c8295144f","Type":"ContainerDied","Data":"b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de"} Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.383572 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.383610 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.383668 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.383689 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.383721 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:44Z","lastTransitionTime":"2025-10-11T07:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.398512 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:44Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.414407 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:44Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.434761 5016 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a07f8dd3701fb9b16ecc053dd954934f9c5c380a9a6d97099ebe1d7c570c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:44Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.452137 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:44Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.467289 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:44Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.479949 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:44Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.485838 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.485871 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.485883 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.485898 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.485910 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:44Z","lastTransitionTime":"2025-10-11T07:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.497397 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:44Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.512099 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:44Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.531782 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:44Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.545243 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:44Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.562922 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:44Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.573927 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:44Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.586912 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:44Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.587793 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.587825 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.587835 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.587851 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.587864 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:44Z","lastTransitionTime":"2025-10-11T07:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.597671 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:44Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.689967 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.690015 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.690025 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.690040 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.690049 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:44Z","lastTransitionTime":"2025-10-11T07:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.793000 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.793041 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.793051 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.793073 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.793085 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:44Z","lastTransitionTime":"2025-10-11T07:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.895336 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.895372 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.895383 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.895399 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.895412 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:44Z","lastTransitionTime":"2025-10-11T07:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.997279 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.997309 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.997317 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.997330 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:44 crc kubenswrapper[5016]: I1011 07:40:44.997338 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:44Z","lastTransitionTime":"2025-10-11T07:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.099409 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.099444 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.099455 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.099472 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.099484 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:45Z","lastTransitionTime":"2025-10-11T07:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.135042 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:40:45 crc kubenswrapper[5016]: E1011 07:40:45.135171 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.135538 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:40:45 crc kubenswrapper[5016]: E1011 07:40:45.135597 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.135685 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:40:45 crc kubenswrapper[5016]: E1011 07:40:45.135742 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.201754 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.201798 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.201806 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.201821 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.201831 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:45Z","lastTransitionTime":"2025-10-11T07:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.305041 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.305091 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.305105 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.305124 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.305137 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:45Z","lastTransitionTime":"2025-10-11T07:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.382436 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" event={"ID":"917a6581-31ec-4abc-9543-652c8295144f","Type":"ContainerStarted","Data":"db26be15889ed64b2e7425d6f0c404c277bd8638a0b18c0b541d44ef0853849a"} Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.397369 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:45Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.408232 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.408287 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.408296 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.408310 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.408320 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:45Z","lastTransitionTime":"2025-10-11T07:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.418001 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:45Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.432250 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:45Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.445402 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:45Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.461852 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:45Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.474060 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:45Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.494186 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db26be15889ed64b2e7425d6f0c404c277bd8638a0b18c0b541d44ef0853849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:45Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.503852 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:45Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.510212 5016 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.510249 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.510257 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.510271 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.510281 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:45Z","lastTransitionTime":"2025-10-11T07:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.523803 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a07f8dd3701fb9b16ecc053dd954934f9c5c3
80a9a6d97099ebe1d7c570c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:45Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.540911 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:45Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.553271 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:45Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.565411 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:45Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.576695 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:45Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.585300 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:45Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.613318 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.613354 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.613364 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.613380 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.613392 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:45Z","lastTransitionTime":"2025-10-11T07:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.715560 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.715594 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.715604 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.715617 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.715625 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:45Z","lastTransitionTime":"2025-10-11T07:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.818797 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.818915 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.818940 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.818969 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.818992 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:45Z","lastTransitionTime":"2025-10-11T07:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.921849 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.921907 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.921920 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.921941 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:45 crc kubenswrapper[5016]: I1011 07:40:45.921954 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:45Z","lastTransitionTime":"2025-10-11T07:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.025363 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.025421 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.025439 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.025461 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.025479 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:46Z","lastTransitionTime":"2025-10-11T07:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.127936 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.128000 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.128019 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.128045 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.128065 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:46Z","lastTransitionTime":"2025-10-11T07:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.185989 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.186071 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.186095 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.186124 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.186142 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:46Z","lastTransitionTime":"2025-10-11T07:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:46 crc kubenswrapper[5016]: E1011 07:40:46.203220 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:46Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.207730 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.207766 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.207777 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.207796 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.207808 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:46Z","lastTransitionTime":"2025-10-11T07:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:46 crc kubenswrapper[5016]: E1011 07:40:46.222407 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:46Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.226306 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.226351 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.226363 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.226378 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.226387 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:46Z","lastTransitionTime":"2025-10-11T07:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:46 crc kubenswrapper[5016]: E1011 07:40:46.238043 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:46Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.242164 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.242223 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.242236 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.242260 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.242277 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:46Z","lastTransitionTime":"2025-10-11T07:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:46 crc kubenswrapper[5016]: E1011 07:40:46.297965 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:46Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.303252 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.303298 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.303312 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.303331 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.303352 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:46Z","lastTransitionTime":"2025-10-11T07:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:46 crc kubenswrapper[5016]: E1011 07:40:46.321244 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:46Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:46 crc kubenswrapper[5016]: E1011 07:40:46.321403 5016 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.323223 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.323283 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.323301 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.323326 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.323344 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:46Z","lastTransitionTime":"2025-10-11T07:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.426286 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.426332 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.426342 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.426358 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.426370 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:46Z","lastTransitionTime":"2025-10-11T07:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.528923 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.529160 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.529234 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.529311 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.529371 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:46Z","lastTransitionTime":"2025-10-11T07:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.631233 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.631282 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.631295 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.631312 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.631324 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:46Z","lastTransitionTime":"2025-10-11T07:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.733403 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.733439 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.733450 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.733466 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.733478 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:46Z","lastTransitionTime":"2025-10-11T07:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.835446 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.835490 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.835501 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.835518 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.835531 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:46Z","lastTransitionTime":"2025-10-11T07:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.938351 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.938423 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.938438 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.938462 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:46 crc kubenswrapper[5016]: I1011 07:40:46.938478 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:46Z","lastTransitionTime":"2025-10-11T07:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.041159 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.041207 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.041218 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.041235 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.041248 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:47Z","lastTransitionTime":"2025-10-11T07:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.132572 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.132795 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:40:47 crc kubenswrapper[5016]: E1011 07:40:47.132870 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:40:47 crc kubenswrapper[5016]: E1011 07:40:47.132791 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.132919 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:40:47 crc kubenswrapper[5016]: E1011 07:40:47.132989 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.143016 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.143045 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.143053 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.143066 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.143074 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:47Z","lastTransitionTime":"2025-10-11T07:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.245285 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.245354 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.245372 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.245406 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.245423 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:47Z","lastTransitionTime":"2025-10-11T07:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.349306 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.349357 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.349369 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.349389 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.349400 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:47Z","lastTransitionTime":"2025-10-11T07:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.391008 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-79nv2_68e9f942-5043-4fc3-9133-b608e8cd4ac0/ovnkube-controller/0.log" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.396330 5016 generic.go:334] "Generic (PLEG): container finished" podID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerID="131a07f8dd3701fb9b16ecc053dd954934f9c5c380a9a6d97099ebe1d7c570c8" exitCode=1 Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.396405 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" event={"ID":"68e9f942-5043-4fc3-9133-b608e8cd4ac0","Type":"ContainerDied","Data":"131a07f8dd3701fb9b16ecc053dd954934f9c5c380a9a6d97099ebe1d7c570c8"} Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.398055 5016 scope.go:117] "RemoveContainer" containerID="131a07f8dd3701fb9b16ecc053dd954934f9c5c380a9a6d97099ebe1d7c570c8" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.418290 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:47Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.431871 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:47Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.452162 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.452215 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.452265 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.452285 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.452299 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:47Z","lastTransitionTime":"2025-10-11T07:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.454910 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:47Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.469703 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:47Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.489695 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:47Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.513405 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db26be15889ed64b2e7425d6f0c404c277bd8638a0b18c0b541d44ef0853849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:47Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.528342 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:47Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.551791 5016 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://131a07f8dd3701fb9b16ecc053dd954934f9c5c380a9a6d97099ebe1d7c570c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://131a07f8dd3701fb9b16ecc053dd954934f9c5c380a9a6d97099ebe1d7c570c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:40:47Z\\\",\\\"message\\\":\\\" 6304 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.151277 6304 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.151614 6304 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.151845 6304 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.151995 6304 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.152100 6304 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.152549 6304 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1011 07:40:47.152597 6304 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1011 07:40:47.152605 6304 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1011 07:40:47.152628 6304 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1011 07:40:47.152640 6304 factory.go:656] Stopping watch factory\\\\nI1011 07:40:47.152639 6304 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1011 07:40:47.152693 6304 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:47Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.554447 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.554474 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.554483 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.554495 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.554504 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:47Z","lastTransitionTime":"2025-10-11T07:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.569195 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:47Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.586424 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:47Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.602649 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:47Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.618772 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:47Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.635326 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:47Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.646309 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:47Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.657022 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.657072 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.657084 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.657102 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.657113 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:47Z","lastTransitionTime":"2025-10-11T07:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.759548 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.759598 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.759609 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.759627 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.759638 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:47Z","lastTransitionTime":"2025-10-11T07:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.862020 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.862051 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.862061 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.862075 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.862086 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:47Z","lastTransitionTime":"2025-10-11T07:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.964285 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.964318 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.964327 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.964339 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:40:47 crc kubenswrapper[5016]: I1011 07:40:47.964349 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:47Z","lastTransitionTime":"2025-10-11T07:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.066706 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.066748 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.066760 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.066776 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.066787 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:48Z","lastTransitionTime":"2025-10-11T07:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.169083 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.169327 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.169396 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.169461 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.169517 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:48Z","lastTransitionTime":"2025-10-11T07:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.271842 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.271873 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.271882 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.271894 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.271906 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:48Z","lastTransitionTime":"2025-10-11T07:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.374944 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.375183 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.375253 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.375335 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.375408 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:48Z","lastTransitionTime":"2025-10-11T07:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.403210 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-79nv2_68e9f942-5043-4fc3-9133-b608e8cd4ac0/ovnkube-controller/0.log"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.407390 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" event={"ID":"68e9f942-5043-4fc3-9133-b608e8cd4ac0","Type":"ContainerStarted","Data":"73c08d4ccd3f5cc499fa38dd5bb50072726165ec3944e64afb5aa76b26125fc3"}
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.407968 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.420544 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.435308 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f"]
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.436128 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.438030 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.439431 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.443551 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.454791 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.467038 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.477752 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.477920 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.478003 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.478105 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.478179 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:48Z","lastTransitionTime":"2025-10-11T07:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.481395 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.496106 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.501341 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnql6\" (UniqueName: \"kubernetes.io/projected/ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99-kube-api-access-gnql6\") pod \"ovnkube-control-plane-749d76644c-2r66f\" (UID: \"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.501453 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2r66f\" (UID: \"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.501560 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2r66f\" (UID: \"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.502186 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2r66f\" (UID: \"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f"
Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.512548 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db26be15889ed64b2e7425d6f0c404c277bd8638a0b18c0b541d44ef0853849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.523732 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.544963 5016 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73c08d4ccd3f5cc499fa38dd5bb50072726165ec3944e64afb5aa76b26125fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://131a07f8dd3701fb9b16ecc053dd954934f9c5c380a9a6d97099ebe1d7c570c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:40:47Z\\\",\\\"message\\\":\\\" 6304 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.151277 6304 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.151614 6304 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.151845 6304 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.151995 6304 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.152100 6304 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.152549 6304 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1011 07:40:47.152597 6304 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1011 07:40:47.152605 6304 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1011 07:40:47.152628 6304 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1011 07:40:47.152640 6304 factory.go:656] Stopping watch factory\\\\nI1011 07:40:47.152639 6304 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1011 07:40:47.152693 6304 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.556529 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.564628 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.575237 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.579843 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.579891 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.579902 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.579914 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.579922 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:48Z","lastTransitionTime":"2025-10-11T07:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.587308 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.597435 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.603730 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnql6\" (UniqueName: \"kubernetes.io/projected/ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99-kube-api-access-gnql6\") pod \"ovnkube-control-plane-749d76644c-2r66f\" (UID: \"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.603771 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2r66f\" (UID: \"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.603801 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2r66f\" (UID: \"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.603863 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2r66f\" (UID: \"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.604415 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2r66f\" (UID: \"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.604597 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2r66f\" (UID: \"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.606891 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2r66f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.611269 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2r66f\" (UID: \"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.618052 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnql6\" (UniqueName: \"kubernetes.io/projected/ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99-kube-api-access-gnql6\") pod \"ovnkube-control-plane-749d76644c-2r66f\" (UID: \"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.617958 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.629922 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.640266 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.648375 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.658113 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.669292 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db26be15889ed64b2e7425d6f0c404c277bd8638a0b18c0b541d44ef0853849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.678305 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.690859 5016 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.690891 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.690900 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.690914 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.690922 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:48Z","lastTransitionTime":"2025-10-11T07:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.693949 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73c08d4ccd3f5cc499fa38dd5bb50072726165ec
3944e64afb5aa76b26125fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://131a07f8dd3701fb9b16ecc053dd954934f9c5c380a9a6d97099ebe1d7c570c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:40:47Z\\\",\\\"message\\\":\\\" 6304 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.151277 6304 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.151614 6304 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.151845 6304 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.151995 6304 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.152100 6304 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.152549 6304 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1011 07:40:47.152597 6304 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1011 07:40:47.152605 6304 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1011 07:40:47.152628 6304 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1011 07:40:47.152640 6304 factory.go:656] Stopping watch factory\\\\nI1011 07:40:47.152639 6304 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1011 07:40:47.152693 6304 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.704127 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.716293 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.726802 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.739110 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.749991 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.754189 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.766053 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.793309 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.793361 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.793375 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.793395 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.793411 5016 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:48Z","lastTransitionTime":"2025-10-11T07:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.896098 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.896140 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.896150 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.896164 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.896177 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:48Z","lastTransitionTime":"2025-10-11T07:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.998876 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.998927 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.998936 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.998950 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:48 crc kubenswrapper[5016]: I1011 07:40:48.998960 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:48Z","lastTransitionTime":"2025-10-11T07:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.102032 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.102095 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.102115 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.102144 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.102163 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:49Z","lastTransitionTime":"2025-10-11T07:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.133442 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.133517 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:40:49 crc kubenswrapper[5016]: E1011 07:40:49.133626 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.133544 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:40:49 crc kubenswrapper[5016]: E1011 07:40:49.133774 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:40:49 crc kubenswrapper[5016]: E1011 07:40:49.133895 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.205525 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.205588 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.205604 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.205628 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.205640 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:49Z","lastTransitionTime":"2025-10-11T07:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.308619 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.308697 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.308711 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.308733 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.308753 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:49Z","lastTransitionTime":"2025-10-11T07:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.411469 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.411548 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.411570 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.411597 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.411617 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:49Z","lastTransitionTime":"2025-10-11T07:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.413101 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" event={"ID":"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99","Type":"ContainerStarted","Data":"1eed64d6afb62e7bcf106aa1e82dbf7b57164e0c8781d6ac418cdf8c375de9ba"} Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.415751 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-79nv2_68e9f942-5043-4fc3-9133-b608e8cd4ac0/ovnkube-controller/1.log" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.416579 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-79nv2_68e9f942-5043-4fc3-9133-b608e8cd4ac0/ovnkube-controller/0.log" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.420860 5016 generic.go:334] "Generic (PLEG): container finished" podID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerID="73c08d4ccd3f5cc499fa38dd5bb50072726165ec3944e64afb5aa76b26125fc3" exitCode=1 Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.420921 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" event={"ID":"68e9f942-5043-4fc3-9133-b608e8cd4ac0","Type":"ContainerDied","Data":"73c08d4ccd3f5cc499fa38dd5bb50072726165ec3944e64afb5aa76b26125fc3"} Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.420975 5016 scope.go:117] "RemoveContainer" containerID="131a07f8dd3701fb9b16ecc053dd954934f9c5c380a9a6d97099ebe1d7c570c8" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.422121 5016 scope.go:117] "RemoveContainer" containerID="73c08d4ccd3f5cc499fa38dd5bb50072726165ec3944e64afb5aa76b26125fc3" Oct 11 07:40:49 crc kubenswrapper[5016]: E1011 07:40:49.422368 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-79nv2_openshift-ovn-kubernetes(68e9f942-5043-4fc3-9133-b608e8cd4ac0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.438481 5016 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b8
9f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:49Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.456110 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:49Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.470375 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:49Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.485498 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:49Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.497931 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:49Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.515298 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.515359 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.515374 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.515397 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.515414 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:49Z","lastTransitionTime":"2025-10-11T07:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.517822 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:49Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.536794 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:49Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.551726 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:49Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.570688 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:49Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.585197 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:49Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.604772 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2r66f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:49Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.617698 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:49 crc 
kubenswrapper[5016]: I1011 07:40:49.617746 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.617757 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.617778 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.617792 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:49Z","lastTransitionTime":"2025-10-11T07:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.619126 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:49Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.636346 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db26be15889ed64b2e7425d6f0c404c277bd8638a0b18c0b541d44ef0853849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:49Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.652411 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootf
s\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:49Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.676103 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73c08d4ccd3f5cc499fa38dd5bb50072726165ec
3944e64afb5aa76b26125fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://131a07f8dd3701fb9b16ecc053dd954934f9c5c380a9a6d97099ebe1d7c570c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:40:47Z\\\",\\\"message\\\":\\\" 6304 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.151277 6304 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.151614 6304 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.151845 6304 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.151995 6304 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.152100 6304 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.152549 6304 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1011 07:40:47.152597 6304 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1011 07:40:47.152605 6304 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1011 07:40:47.152628 6304 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1011 07:40:47.152640 6304 factory.go:656] Stopping watch factory\\\\nI1011 07:40:47.152639 6304 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1011 07:40:47.152693 6304 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73c08d4ccd3f5cc499fa38dd5bb50072726165ec3944e64afb5aa76b26125fc3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"message\\\":\\\"icate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z]\\\\nI1011 07:40:48.176169 6488 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1011 07:40:48.175926 6488 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-lbbb2 in node crc\\\\nI1011 07:40:48.176223 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-lbbb2 after 0 failed attempt(s)\\\\nI1011 07:40:48.176247 6488 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-lbbb2\\\\nI1011 07:40:48.176011 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1011 07:40:48.176259 6488 default_network_controller.go:776] Recording success event on pod 
openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1011 07:40:48.175842 6488 services_controller.go:451] Built service openshift-machine-config-operator/machine-config-operator cluster-wide LB for network=d\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"
initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:49Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.720438 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.720475 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.720483 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.720499 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.720509 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:49Z","lastTransitionTime":"2025-10-11T07:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.823402 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.823441 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.823453 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.823472 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.823483 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:49Z","lastTransitionTime":"2025-10-11T07:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.900095 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-459lg"] Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.900596 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:40:49 crc kubenswrapper[5016]: E1011 07:40:49.900687 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.913521 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:49Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.926201 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.926249 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.926262 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.926279 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.926290 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:49Z","lastTransitionTime":"2025-10-11T07:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.927720 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:49Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.940311 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:49Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.953141 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:49Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.963249 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:49Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.974018 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2r66f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:49Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.985355 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:49Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:49 crc kubenswrapper[5016]: I1011 07:40:49.997984 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db26be15889ed64b2e7425d6f0c404c277bd8638a0b18c0b541d44ef0853849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:49Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.008318 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.018353 5016 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ceaf34e-81b3-457f-8f03-d807f795392b-metrics-certs\") pod \"network-metrics-daemon-459lg\" (UID: \"9ceaf34e-81b3-457f-8f03-d807f795392b\") " pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.018409 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc8pn\" (UniqueName: \"kubernetes.io/projected/9ceaf34e-81b3-457f-8f03-d807f795392b-kube-api-access-tc8pn\") pod \"network-metrics-daemon-459lg\" (UID: \"9ceaf34e-81b3-457f-8f03-d807f795392b\") " pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.026758 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73c08d4ccd3f5cc499fa38dd5bb50072726165ec
3944e64afb5aa76b26125fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://131a07f8dd3701fb9b16ecc053dd954934f9c5c380a9a6d97099ebe1d7c570c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:40:47Z\\\",\\\"message\\\":\\\" 6304 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.151277 6304 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.151614 6304 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.151845 6304 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.151995 6304 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.152100 6304 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1011 07:40:47.152549 6304 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1011 07:40:47.152597 6304 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1011 07:40:47.152605 6304 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1011 07:40:47.152628 6304 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1011 07:40:47.152640 6304 factory.go:656] Stopping watch factory\\\\nI1011 07:40:47.152639 6304 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1011 07:40:47.152693 6304 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73c08d4ccd3f5cc499fa38dd5bb50072726165ec3944e64afb5aa76b26125fc3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"message\\\":\\\"icate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z]\\\\nI1011 07:40:48.176169 6488 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1011 07:40:48.175926 6488 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-lbbb2 in node crc\\\\nI1011 07:40:48.176223 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-lbbb2 after 0 failed attempt(s)\\\\nI1011 07:40:48.176247 6488 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-lbbb2\\\\nI1011 07:40:48.176011 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1011 07:40:48.176259 6488 default_network_controller.go:776] Recording success event on pod 
openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1011 07:40:48.175842 6488 services_controller.go:451] Built service openshift-machine-config-operator/machine-config-operator cluster-wide LB for network=d\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"
initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.028271 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.028326 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.028341 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.028363 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.028380 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:50Z","lastTransitionTime":"2025-10-11T07:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.037978 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-459lg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ceaf34e-81b3-457f-8f03-d807f795392b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-459lg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.049259 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.062010 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.073814 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.084690 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.094756 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.119194 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ceaf34e-81b3-457f-8f03-d807f795392b-metrics-certs\") pod \"network-metrics-daemon-459lg\" (UID: \"9ceaf34e-81b3-457f-8f03-d807f795392b\") " pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.119246 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc8pn\" (UniqueName: \"kubernetes.io/projected/9ceaf34e-81b3-457f-8f03-d807f795392b-kube-api-access-tc8pn\") pod \"network-metrics-daemon-459lg\" (UID: \"9ceaf34e-81b3-457f-8f03-d807f795392b\") " pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:40:50 crc kubenswrapper[5016]: E1011 07:40:50.119328 5016 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 07:40:50 crc kubenswrapper[5016]: E1011 07:40:50.119405 5016 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ceaf34e-81b3-457f-8f03-d807f795392b-metrics-certs podName:9ceaf34e-81b3-457f-8f03-d807f795392b nodeName:}" failed. No retries permitted until 2025-10-11 07:40:50.619386407 +0000 UTC m=+38.519842453 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9ceaf34e-81b3-457f-8f03-d807f795392b-metrics-certs") pod "network-metrics-daemon-459lg" (UID: "9ceaf34e-81b3-457f-8f03-d807f795392b") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.135870 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.135919 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.135933 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.135949 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.135884 5016 scope.go:117] "RemoveContainer" containerID="b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.135963 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:50Z","lastTransitionTime":"2025-10-11T07:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.138408 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc8pn\" (UniqueName: \"kubernetes.io/projected/9ceaf34e-81b3-457f-8f03-d807f795392b-kube-api-access-tc8pn\") pod \"network-metrics-daemon-459lg\" (UID: \"9ceaf34e-81b3-457f-8f03-d807f795392b\") " pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.239402 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.239442 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.239451 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.239500 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.239509 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:50Z","lastTransitionTime":"2025-10-11T07:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.342221 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.342258 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.342268 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.342285 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.342297 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:50Z","lastTransitionTime":"2025-10-11T07:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.425797 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-79nv2_68e9f942-5043-4fc3-9133-b608e8cd4ac0/ovnkube-controller/1.log" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.431061 5016 scope.go:117] "RemoveContainer" containerID="73c08d4ccd3f5cc499fa38dd5bb50072726165ec3944e64afb5aa76b26125fc3" Oct 11 07:40:50 crc kubenswrapper[5016]: E1011 07:40:50.431410 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-79nv2_openshift-ovn-kubernetes(68e9f942-5043-4fc3-9133-b608e8cd4ac0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.431758 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" event={"ID":"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99","Type":"ContainerStarted","Data":"42f97ef603cbc091e8d644c74b1e87734defad234b450ed334ca79bacdcd772e"} Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.431801 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" event={"ID":"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99","Type":"ContainerStarted","Data":"73a11cae0088711f12e7156914900e7e3c3641a0332763bb03f9457e09826635"} Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.435747 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.438900 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"78849c3909228621690a66bd1e9a25eefa34107846399956602bf5ef04e9f86c"} Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.439461 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 
07:40:50.444496 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.444536 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.444553 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.444569 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.444580 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:50Z","lastTransitionTime":"2025-10-11T07:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.447439 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.465139 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.478306 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.490606 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.504630 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.519418 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.530733 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.541900 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.546998 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.547036 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.547048 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.547065 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.547078 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:50Z","lastTransitionTime":"2025-10-11T07:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.554069 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.566255 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2r66f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.580220 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.600774 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73c08d4ccd3f5cc499fa38dd5bb50072726165ec3944e64afb5aa76b26125fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73c08d4ccd3f5cc499fa38dd5bb50072726165ec3944e64afb5aa76b26125fc3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"message\\\":\\\"icate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z]\\\\nI1011 07:40:48.176169 6488 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1011 07:40:48.175926 6488 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-lbbb2 in node crc\\\\nI1011 07:40:48.176223 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-lbbb2 after 0 failed attempt(s)\\\\nI1011 07:40:48.176247 6488 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-lbbb2\\\\nI1011 07:40:48.176011 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1011 07:40:48.176259 6488 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1011 07:40:48.175842 6488 services_controller.go:451] Built service openshift-machine-config-operator/machine-config-operator cluster-wide LB for network=d\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-79nv2_openshift-ovn-kubernetes(68e9f942-5043-4fc3-9133-b608e8cd4ac0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.611869 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-459lg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ceaf34e-81b3-457f-8f03-d807f795392b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-459lg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.623647 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ceaf34e-81b3-457f-8f03-d807f795392b-metrics-certs\") pod \"network-metrics-daemon-459lg\" (UID: \"9ceaf34e-81b3-457f-8f03-d807f795392b\") " pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:40:50 crc kubenswrapper[5016]: E1011 07:40:50.623913 5016 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 07:40:50 crc kubenswrapper[5016]: E1011 07:40:50.624041 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ceaf34e-81b3-457f-8f03-d807f795392b-metrics-certs podName:9ceaf34e-81b3-457f-8f03-d807f795392b nodeName:}" failed. No retries permitted until 2025-10-11 07:40:51.624005673 +0000 UTC m=+39.524461659 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9ceaf34e-81b3-457f-8f03-d807f795392b-metrics-certs") pod "network-metrics-daemon-459lg" (UID: "9ceaf34e-81b3-457f-8f03-d807f795392b") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.624931 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.641974 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db26be15889ed64b2e7425d6f0c404c277bd8638a0b18c0b541d44ef0853849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.649974 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.650017 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:50 crc 
kubenswrapper[5016]: I1011 07:40:50.650035 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.650058 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.650076 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:50Z","lastTransitionTime":"2025-10-11T07:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.658517 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.679545 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78849c3909228621690a66bd1e9a25eefa34107846399956602bf5ef04e9f86c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.698705 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.723605 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.733932 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.745159 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a11cae0088711f12e7156914900e7e3c3641a0332763bb03f9457e09826635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f97ef603cbc091e8d644c74b1e87734defad234b450ed334ca79bacdcd772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2r66f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 
07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.752536 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.752615 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.752639 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.752713 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.752753 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:50Z","lastTransitionTime":"2025-10-11T07:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.763983 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db26be15889ed64b2e7425d6f0c404c277bd8638a0b18c0b541d44ef0853849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\
\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.774009 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.789286 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73c08d4ccd3f5cc499fa38dd5bb50072726165ec
3944e64afb5aa76b26125fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73c08d4ccd3f5cc499fa38dd5bb50072726165ec3944e64afb5aa76b26125fc3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"message\\\":\\\"icate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z]\\\\nI1011 07:40:48.176169 6488 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1011 07:40:48.175926 6488 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-lbbb2 in node crc\\\\nI1011 07:40:48.176223 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-lbbb2 after 0 failed attempt(s)\\\\nI1011 07:40:48.176247 6488 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-lbbb2\\\\nI1011 07:40:48.176011 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1011 07:40:48.176259 6488 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1011 07:40:48.175842 6488 services_controller.go:451] Built service openshift-machine-config-operator/machine-config-operator cluster-wide LB for network=d\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-79nv2_openshift-ovn-kubernetes(68e9f942-5043-4fc3-9133-b608e8cd4ac0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.799463 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-459lg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ceaf34e-81b3-457f-8f03-d807f795392b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-459lg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.814533 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.825124 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.825235 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.825271 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.825299 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:40:50 crc kubenswrapper[5016]: E1011 07:40:50.825360 5016 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 11 07:40:50 crc kubenswrapper[5016]: E1011 07:40:50.825381 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:41:06.825342881 +0000 UTC m=+54.725798867 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:40:50 crc kubenswrapper[5016]: E1011 07:40:50.825426 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-11 07:41:06.825412373 +0000 UTC m=+54.725868349 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 11 07:40:50 crc kubenswrapper[5016]: E1011 07:40:50.825475 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 07:40:50 crc kubenswrapper[5016]: E1011 07:40:50.825525 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 07:40:50 crc kubenswrapper[5016]: E1011 07:40:50.825546 5016 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:40:50 crc kubenswrapper[5016]: E1011 07:40:50.825568 5016 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 11 07:40:50 crc kubenswrapper[5016]: E1011 07:40:50.825620 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.825475 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:40:50 crc kubenswrapper[5016]: E1011 07:40:50.825646 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 07:40:50 crc kubenswrapper[5016]: E1011 07:40:50.825751 5016 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:40:50 crc kubenswrapper[5016]: 
E1011 07:40:50.825620 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-11 07:41:06.825593308 +0000 UTC m=+54.726049324 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:40:50 crc kubenswrapper[5016]: E1011 07:40:50.825812 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-11 07:41:06.825798104 +0000 UTC m=+54.726254050 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 11 07:40:50 crc kubenswrapper[5016]: E1011 07:40:50.825859 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-11 07:41:06.825853145 +0000 UTC m=+54.726309091 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.832715 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.849830 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.854777 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.854816 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.854827 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.854844 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.854857 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:50Z","lastTransitionTime":"2025-10-11T07:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.864053 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.876459 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.889845 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.903269 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:50Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.956363 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.956418 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.956434 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.956456 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:50 crc kubenswrapper[5016]: I1011 07:40:50.956471 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:50Z","lastTransitionTime":"2025-10-11T07:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.059096 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.059159 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.059176 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.059200 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.059216 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:51Z","lastTransitionTime":"2025-10-11T07:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.133039 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.133151 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.133174 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:40:51 crc kubenswrapper[5016]: E1011 07:40:51.133774 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:40:51 crc kubenswrapper[5016]: E1011 07:40:51.133579 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.133215 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:40:51 crc kubenswrapper[5016]: E1011 07:40:51.133890 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:40:51 crc kubenswrapper[5016]: E1011 07:40:51.133967 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.161962 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.161995 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.162003 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.162016 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.162024 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:51Z","lastTransitionTime":"2025-10-11T07:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.264476 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.264506 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.264513 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.264526 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.264534 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:51Z","lastTransitionTime":"2025-10-11T07:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.367057 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.367464 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.367591 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.367819 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.368014 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:51Z","lastTransitionTime":"2025-10-11T07:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.471524 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.471599 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.471616 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.471644 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.471697 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:51Z","lastTransitionTime":"2025-10-11T07:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.574804 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.574901 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.574921 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.574944 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.574961 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:51Z","lastTransitionTime":"2025-10-11T07:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.635309 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ceaf34e-81b3-457f-8f03-d807f795392b-metrics-certs\") pod \"network-metrics-daemon-459lg\" (UID: \"9ceaf34e-81b3-457f-8f03-d807f795392b\") " pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:40:51 crc kubenswrapper[5016]: E1011 07:40:51.635536 5016 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 07:40:51 crc kubenswrapper[5016]: E1011 07:40:51.635643 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ceaf34e-81b3-457f-8f03-d807f795392b-metrics-certs podName:9ceaf34e-81b3-457f-8f03-d807f795392b nodeName:}" failed. No retries permitted until 2025-10-11 07:40:53.635621503 +0000 UTC m=+41.536077459 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9ceaf34e-81b3-457f-8f03-d807f795392b-metrics-certs") pod "network-metrics-daemon-459lg" (UID: "9ceaf34e-81b3-457f-8f03-d807f795392b") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.678168 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.678246 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.678270 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.678301 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.678326 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:51Z","lastTransitionTime":"2025-10-11T07:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.780456 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.780496 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.780509 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.780525 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.780538 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:51Z","lastTransitionTime":"2025-10-11T07:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.883839 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.883877 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.883892 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.883907 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.883919 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:51Z","lastTransitionTime":"2025-10-11T07:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.986612 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.986686 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.986698 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.986715 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:51 crc kubenswrapper[5016]: I1011 07:40:51.986726 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:51Z","lastTransitionTime":"2025-10-11T07:40:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.089356 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.089410 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.089422 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.089441 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.089454 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:52Z","lastTransitionTime":"2025-10-11T07:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.192355 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.192422 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.192445 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.192469 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.192485 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:52Z","lastTransitionTime":"2025-10-11T07:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.295942 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.296015 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.296036 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.296066 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.296088 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:52Z","lastTransitionTime":"2025-10-11T07:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.398305 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.398356 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.398371 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.398413 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.398426 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:52Z","lastTransitionTime":"2025-10-11T07:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.501444 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.501495 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.501509 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.501525 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.501539 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:52Z","lastTransitionTime":"2025-10-11T07:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.603600 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.603650 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.603699 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.603720 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.603792 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:52Z","lastTransitionTime":"2025-10-11T07:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.706684 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.706744 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.706764 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.706791 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.706812 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:52Z","lastTransitionTime":"2025-10-11T07:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.810276 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.810313 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.810321 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.810335 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.810345 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:52Z","lastTransitionTime":"2025-10-11T07:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.913509 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.913584 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.913600 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.913627 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:52 crc kubenswrapper[5016]: I1011 07:40:52.913644 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:52Z","lastTransitionTime":"2025-10-11T07:40:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.016964 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.017014 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.017027 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.017047 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.017061 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:53Z","lastTransitionTime":"2025-10-11T07:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.120360 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.120440 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.120457 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.120482 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.120502 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:53Z","lastTransitionTime":"2025-10-11T07:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.132800 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.132879 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.132977 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:40:53 crc kubenswrapper[5016]: E1011 07:40:53.133110 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.133162 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:40:53 crc kubenswrapper[5016]: E1011 07:40:53.133254 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:40:53 crc kubenswrapper[5016]: E1011 07:40:53.133404 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:40:53 crc kubenswrapper[5016]: E1011 07:40:53.133525 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.158790 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db26be15889ed64b2e7425d6f0c404c277bd8638a0b18c0b541d44ef0853849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:53Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.178203 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:53Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.211007 5016 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73c08d4ccd3f5cc499fa38dd5bb50072726165ec3944e64afb5aa76b26125fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73c08d4ccd3f5cc499fa38dd5bb50072726165ec3944e64afb5aa76b26125fc3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"message\\\":\\\"icate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z]\\\\nI1011 07:40:48.176169 6488 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1011 07:40:48.175926 6488 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-lbbb2 in node crc\\\\nI1011 07:40:48.176223 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-lbbb2 after 0 failed attempt(s)\\\\nI1011 07:40:48.176247 6488 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-lbbb2\\\\nI1011 07:40:48.176011 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1011 07:40:48.176259 6488 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1011 07:40:48.175842 6488 services_controller.go:451] Built service openshift-machine-config-operator/machine-config-operator cluster-wide LB for network=d\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-79nv2_openshift-ovn-kubernetes(68e9f942-5043-4fc3-9133-b608e8cd4ac0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:53Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.224084 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.224135 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.224154 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.224176 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.224194 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:53Z","lastTransitionTime":"2025-10-11T07:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.229763 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-459lg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ceaf34e-81b3-457f-8f03-d807f795392b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-459lg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:53Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.250392 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:53Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.272022 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:53Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.294587 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:53Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.308932 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:53Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.319311 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:53Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.327569 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:53 crc 
kubenswrapper[5016]: I1011 07:40:53.327605 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.327614 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.327627 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.327636 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:53Z","lastTransitionTime":"2025-10-11T07:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.342199 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:53Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.353168 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:53Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.366119 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78849c3909228621690a66bd1e9a25eefa34107846399956602bf5ef04e9f86c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:53Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.377380 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:53Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.392234 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:53Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.401614 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:53Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.411574 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a11cae0088711f12e7156914900e7e3c3641a0332763bb03f9457e09826635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f97ef603cbc091e8d644c74b1e87734defad234b450ed334ca79bacdcd772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2r66f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:53Z is after 2025-08-24T17:21:41Z" Oct 11 
07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.429738 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.429794 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.429806 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.429828 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.429841 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:53Z","lastTransitionTime":"2025-10-11T07:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.532752 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.532783 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.532796 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.532815 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.532827 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:53Z","lastTransitionTime":"2025-10-11T07:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.635452 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.635485 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.635494 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.635511 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.635527 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:53Z","lastTransitionTime":"2025-10-11T07:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
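Every "Failed to update status for pod" entry above fails for the same reason: the network-node-identity webhook on 127.0.0.1:9743 serves a certificate whose NotAfter (2025-08-24T17:21:41Z) lies behind the node clock (2025-10-11T07:40:53Z), so Go's TLS verification rejects the handshake before any patch is delivered. A minimal standard-library sketch of the same validity-window check; the PEM path is hypothetical and stands in for wherever the webhook's serving certificate actually lives on the node:

```go
// Sketch: reproduce the NotBefore/NotAfter comparison behind
// "x509: certificate has expired or is not yet valid" in the entries above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/tmp/webhook-serving.crt") // hypothetical path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n",
			now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	case now.After(cert.NotAfter):
		// This branch matches the failures logged above.
		fmt.Printf("expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	default:
		fmt.Println("within validity window")
	}
}
```

Until that serving certificate is rotated (by hand, or by cert-recovery machinery such as the cert-regeneration and recovery controllers visible in the container lists above), these patch attempts will keep failing even though the pods themselves report Running.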
Has your network provider started?"} Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.656463 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ceaf34e-81b3-457f-8f03-d807f795392b-metrics-certs\") pod \"network-metrics-daemon-459lg\" (UID: \"9ceaf34e-81b3-457f-8f03-d807f795392b\") " pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:40:53 crc kubenswrapper[5016]: E1011 07:40:53.656640 5016 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 07:40:53 crc kubenswrapper[5016]: E1011 07:40:53.656719 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ceaf34e-81b3-457f-8f03-d807f795392b-metrics-certs podName:9ceaf34e-81b3-457f-8f03-d807f795392b nodeName:}" failed. No retries permitted until 2025-10-11 07:40:57.656702101 +0000 UTC m=+45.557158047 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9ceaf34e-81b3-457f-8f03-d807f795392b-metrics-certs") pod "network-metrics-daemon-459lg" (UID: "9ceaf34e-81b3-457f-8f03-d807f795392b") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.739054 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.739099 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.739109 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.739129 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.739146 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:53Z","lastTransitionTime":"2025-10-11T07:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
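The MountVolume failure just above ends with "No retries permitted until … (durationBeforeRetry 4s)": the volume manager backs off exponentially between attempts to mount metrics-certs while the metrics-daemon-secret object is not registered. A sketch of that doubling schedule, assuming a 500ms initial delay and a cap of roughly two minutes; both constants are inferred from kubelet behavior, not stated anywhere in this log:

```go
// Sketch of the exponential backoff implied by "durationBeforeRetry 4s".
// The initial delay and cap below are assumptions, not values from this log.
package main

import (
	"fmt"
	"time"
)

func durationBeforeRetry(failures int) time.Duration {
	const (
		initial = 500 * time.Millisecond // assumed first delay
		max     = 2*time.Minute + 2*time.Second
	)
	d := initial
	for i := 1; i < failures; i++ {
		d *= 2
		if d > max {
			return max
		}
	}
	return d
}

func main() {
	for n := 1; n <= 6; n++ {
		fmt.Printf("failure %d -> retry after %s\n", n, durationBeforeRetry(n))
	}
	// Under these assumptions, failure 4 -> 4s, matching the entry above.
}
```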
Has your network provider started?"} Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.841344 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.841383 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.841393 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.841410 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.841420 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:53Z","lastTransitionTime":"2025-10-11T07:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.945114 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.945198 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.945223 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.945254 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:53 crc kubenswrapper[5016]: I1011 07:40:53.945274 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:53Z","lastTransitionTime":"2025-10-11T07:40:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.048875 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.048952 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.048971 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.048997 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.049014 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:54Z","lastTransitionTime":"2025-10-11T07:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.151752 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.151806 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.151826 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.151849 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.151866 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:54Z","lastTransitionTime":"2025-10-11T07:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.254518 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.254571 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.254592 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.254613 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.254626 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:54Z","lastTransitionTime":"2025-10-11T07:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.357020 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.357093 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.357117 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.357145 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.357167 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:54Z","lastTransitionTime":"2025-10-11T07:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
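The condition={...} payload repeated by every "Node became not ready" entry is a single NodeCondition object of type Ready. A self-contained sketch that decodes it with plain structs rather than the k8s.io/api types, so it runs without any extra modules:

```go
// Decode the Ready condition JSON logged by setters.go above.
package main

import (
	"encoding/json"
	"fmt"
)

type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Verbatim payload from the log entries above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:54Z","lastTransitionTime":"2025-10-11T07:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s reason=%s\n", c.Type, c.Status, c.Reason)
}
```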
Has your network provider started?"} Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.459351 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.459441 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.459457 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.459481 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.459498 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:54Z","lastTransitionTime":"2025-10-11T07:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.563075 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.563129 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.563151 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.563181 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.563204 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:54Z","lastTransitionTime":"2025-10-11T07:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.666560 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.666606 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.666615 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.666630 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.666639 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:54Z","lastTransitionTime":"2025-10-11T07:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.770066 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.770112 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.770122 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.770141 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.770155 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:54Z","lastTransitionTime":"2025-10-11T07:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.873301 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.873356 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.873369 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.873389 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.873401 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:54Z","lastTransitionTime":"2025-10-11T07:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.976539 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.976611 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.976623 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.976694 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:54 crc kubenswrapper[5016]: I1011 07:40:54.976709 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:54Z","lastTransitionTime":"2025-10-11T07:40:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.079190 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.079228 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.079239 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.079254 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.079267 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:55Z","lastTransitionTime":"2025-10-11T07:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.132325 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.132396 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.132339 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.132396 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:40:55 crc kubenswrapper[5016]: E1011 07:40:55.132465 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:40:55 crc kubenswrapper[5016]: E1011 07:40:55.132528 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:40:55 crc kubenswrapper[5016]: E1011 07:40:55.132580 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:40:55 crc kubenswrapper[5016]: E1011 07:40:55.132672 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.181712 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.181753 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.181766 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.181783 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.181795 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:55Z","lastTransitionTime":"2025-10-11T07:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.284464 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.284497 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.284507 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.284523 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.284533 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:55Z","lastTransitionTime":"2025-10-11T07:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.387231 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.387323 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.387344 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.387373 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.387386 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:55Z","lastTransitionTime":"2025-10-11T07:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.490310 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.490353 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.490364 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.490379 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.490391 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:55Z","lastTransitionTime":"2025-10-11T07:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.592586 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.592617 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.592624 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.592636 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.592646 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:55Z","lastTransitionTime":"2025-10-11T07:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.695526 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.695589 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.695607 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.695631 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.695736 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:55Z","lastTransitionTime":"2025-10-11T07:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.798444 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.798499 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.798511 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.798532 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.798545 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:55Z","lastTransitionTime":"2025-10-11T07:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.901115 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.901171 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.901188 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.901210 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:40:55 crc kubenswrapper[5016]: I1011 07:40:55.901225 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:55Z","lastTransitionTime":"2025-10-11T07:40:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
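Every one of these conditions points at the same directory: /etc/kubernetes/cni/net.d/. An illustrative Go sketch of the directory scan the runtime's CNI manager performs; the extension list mirrors what libcni/ocicni accept, but the real check lives in CRI-O's network setup, not in this snippet:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Directory named in the kubelet error above.
const cniConfDir = "/etc/kubernetes/cni/net.d"

func main() {
	entries, err := os.ReadDir(cniConfDir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	found := false
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		// libcni/ocicni load configs with these extensions.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", e.Name())
			found = true
		}
	}
	if !found {
		// Matches the condition in the log: "no CNI configuration file in
		// /etc/kubernetes/cni/net.d/. Has your network provider started?"
		fmt.Println("no CNI configuration file in", cniConfDir)
	}
}
```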
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.003682 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.003734 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.003749 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.003774 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.003791 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:56Z","lastTransitionTime":"2025-10-11T07:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.106571 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.106622 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.106635 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.106672 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.106684 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:56Z","lastTransitionTime":"2025-10-11T07:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.209686 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.209789 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.209804 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.209843 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.209856 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:56Z","lastTransitionTime":"2025-10-11T07:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
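The loop cannot converge because, as the "Error updating node status, will retry" records below show, every status PATCH is rejected: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, so TLS verification fails before the patch is applied. A hedged Go sketch of the validity-window check that crypto/x509 applies during verification; the certificate path is hypothetical, supplied only for illustration:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path: point this at the webhook's serving certificate.
	pemBytes, err := os.ReadFile("/path/to/webhook-serving-cert.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}

	now := time.Now()
	// The same validity-window comparison that fails in the log:
	// "current time 2025-10-11T07:40:56Z is after 2025-08-24T17:21:41Z".
	switch {
	case now.After(cert.NotAfter):
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate is not yet valid until %s\n",
			cert.NotBefore.UTC().Format(time.RFC3339))
	default:
		fmt.Println("certificate is valid, expires",
			cert.NotAfter.UTC().Format(time.RFC3339))
	}
}
```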
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.312938 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.312985 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.312999 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.313016 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.313028 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:56Z","lastTransitionTime":"2025-10-11T07:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.415204 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.415462 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.415524 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.415590 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.415694 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:56Z","lastTransitionTime":"2025-10-11T07:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.488905 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.488944 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.488955 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.488970 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.488981 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:56Z","lastTransitionTime":"2025-10-11T07:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:56 crc kubenswrapper[5016]: E1011 07:40:56.507543 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:56Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.512273 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.512319 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.512330 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.512353 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.512366 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:56Z","lastTransitionTime":"2025-10-11T07:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:56 crc kubenswrapper[5016]: E1011 07:40:56.529311 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:56Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.534123 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.534173 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.534187 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.534208 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.534223 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:56Z","lastTransitionTime":"2025-10-11T07:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:56 crc kubenswrapper[5016]: E1011 07:40:56.549855 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:56Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.554605 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.554746 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.554826 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.554918 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.555014 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:56Z","lastTransitionTime":"2025-10-11T07:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:56 crc kubenswrapper[5016]: E1011 07:40:56.567162 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:56Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.577886 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.578023 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
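The TLS failure recorded above is self-contained enough to reproduce outside the kubelet. As a minimal sketch (not part of the log; it assumes Python 3 with the `cryptography` package, version 42 or later for the `*_utc` accessors, run on the node itself), one might fetch the webhook's serving certificate and print the validity window that the kubelet's x509 error is complaining about. The host and port (127.0.0.1:9743) are taken directly from the log line:

```python
# Sketch: inspect the serving certificate of the node.network-node-identity
# webhook endpoint named in the kubelet error above.
import socket
import ssl
from datetime import datetime, timezone

from cryptography import x509  # assumed installed (pip install cryptography)

HOST, PORT = "127.0.0.1", 9743  # webhook endpoint, taken from the log

# Verification is disabled on purpose: a verifying handshake would abort on
# the expired certificate, which is exactly what we want to look at.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)  # DER bytes; returned even with CERT_NONE

cert = x509.load_der_x509_certificate(der)
now = datetime.now(timezone.utc)
print("notBefore:", cert.not_valid_before_utc)
print("notAfter: ", cert.not_valid_after_utc)
print("expired:  ", now > cert.not_valid_after_utc)  # mirrors the kubelet's x509 check
```

With the timestamps shown in the log (current time 2025-10-11T07:40:56Z, notAfter 2025-08-24T17:21:41Z), this would print `expired: True`, matching the webhook call failure.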
event="NodeHasNoDiskPressure" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.578074 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.578163 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.578280 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:56Z","lastTransitionTime":"2025-10-11T07:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:56 crc kubenswrapper[5016]: E1011 07:40:56.595500 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:56Z is after 2025-08-24T17:21:41Z" Oct 11 07:40:56 crc kubenswrapper[5016]: E1011 07:40:56.595861 5016 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.597815 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.597912 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.598011 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.598111 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.598196 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:56Z","lastTransitionTime":"2025-10-11T07:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.700973 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.701295 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.701387 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.701471 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.701590 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:56Z","lastTransitionTime":"2025-10-11T07:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.803538 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.803612 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.803680 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.803704 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.803716 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:56Z","lastTransitionTime":"2025-10-11T07:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.906770 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.906845 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.906870 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.906900 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:56 crc kubenswrapper[5016]: I1011 07:40:56.906922 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:56Z","lastTransitionTime":"2025-10-11T07:40:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.010407 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.010444 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.010455 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.010468 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.010482 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:57Z","lastTransitionTime":"2025-10-11T07:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.112629 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.112697 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.112707 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.112722 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.112731 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:57Z","lastTransitionTime":"2025-10-11T07:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.133239 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.133273 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.133338 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:40:57 crc kubenswrapper[5016]: E1011 07:40:57.133437 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.133509 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:40:57 crc kubenswrapper[5016]: E1011 07:40:57.133526 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:40:57 crc kubenswrapper[5016]: E1011 07:40:57.133766 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:40:57 crc kubenswrapper[5016]: E1011 07:40:57.133940 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.215364 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.215405 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.215414 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.215429 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.215441 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:57Z","lastTransitionTime":"2025-10-11T07:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.318264 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.318327 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.318340 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.318357 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.318406 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:57Z","lastTransitionTime":"2025-10-11T07:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.420682 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.420709 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.420718 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.420735 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.420746 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:57Z","lastTransitionTime":"2025-10-11T07:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.523546 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.523588 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.523598 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.523611 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.523621 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:57Z","lastTransitionTime":"2025-10-11T07:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.627044 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.627138 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.627535 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.627560 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.627577 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:57Z","lastTransitionTime":"2025-10-11T07:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.700144 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ceaf34e-81b3-457f-8f03-d807f795392b-metrics-certs\") pod \"network-metrics-daemon-459lg\" (UID: \"9ceaf34e-81b3-457f-8f03-d807f795392b\") " pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:40:57 crc kubenswrapper[5016]: E1011 07:40:57.700363 5016 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 07:40:57 crc kubenswrapper[5016]: E1011 07:40:57.700485 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ceaf34e-81b3-457f-8f03-d807f795392b-metrics-certs podName:9ceaf34e-81b3-457f-8f03-d807f795392b nodeName:}" failed. No retries permitted until 2025-10-11 07:41:05.700449872 +0000 UTC m=+53.600905858 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9ceaf34e-81b3-457f-8f03-d807f795392b-metrics-certs") pod "network-metrics-daemon-459lg" (UID: "9ceaf34e-81b3-457f-8f03-d807f795392b") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.731647 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.731749 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.731770 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.731813 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.731835 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:57Z","lastTransitionTime":"2025-10-11T07:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.835108 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.835153 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.835161 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.835176 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.835185 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:57Z","lastTransitionTime":"2025-10-11T07:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.938350 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.938398 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.938407 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.938421 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:57 crc kubenswrapper[5016]: I1011 07:40:57.938437 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:57Z","lastTransitionTime":"2025-10-11T07:40:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.041669 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.041711 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.041720 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.041735 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.041743 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:58Z","lastTransitionTime":"2025-10-11T07:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.144703 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.144756 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.144768 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.144786 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.144797 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:58Z","lastTransitionTime":"2025-10-11T07:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.247552 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.247593 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.247604 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.247620 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.247631 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:58Z","lastTransitionTime":"2025-10-11T07:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.350805 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.350838 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.350846 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.350859 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.350867 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:58Z","lastTransitionTime":"2025-10-11T07:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.454154 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.454193 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.454207 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.454224 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.454236 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:58Z","lastTransitionTime":"2025-10-11T07:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.557472 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.557520 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.557537 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.557557 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.557570 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:58Z","lastTransitionTime":"2025-10-11T07:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.660006 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.660067 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.660106 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.660137 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.664399 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:58Z","lastTransitionTime":"2025-10-11T07:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.768857 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.768928 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.768945 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.768968 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.768985 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:58Z","lastTransitionTime":"2025-10-11T07:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.872275 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.872343 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.872360 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.872383 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.872399 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:58Z","lastTransitionTime":"2025-10-11T07:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.975469 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.975525 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.975537 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.975555 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:58 crc kubenswrapper[5016]: I1011 07:40:58.975568 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:58Z","lastTransitionTime":"2025-10-11T07:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:40:59 crc kubenswrapper[5016]: I1011 07:40:59.077894 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:59 crc kubenswrapper[5016]: I1011 07:40:59.077952 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:59 crc kubenswrapper[5016]: I1011 07:40:59.077968 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:59 crc kubenswrapper[5016]: I1011 07:40:59.077991 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:59 crc kubenswrapper[5016]: I1011 07:40:59.078004 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:59Z","lastTransitionTime":"2025-10-11T07:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:59 crc kubenswrapper[5016]: I1011 07:40:59.133277 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:40:59 crc kubenswrapper[5016]: I1011 07:40:59.133340 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:40:59 crc kubenswrapper[5016]: I1011 07:40:59.133286 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:40:59 crc kubenswrapper[5016]: I1011 07:40:59.133420 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:40:59 crc kubenswrapper[5016]: E1011 07:40:59.133578 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:40:59 crc kubenswrapper[5016]: E1011 07:40:59.133925 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:40:59 crc kubenswrapper[5016]: E1011 07:40:59.133801 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:40:59 crc kubenswrapper[5016]: E1011 07:40:59.134061 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:40:59 crc kubenswrapper[5016]: I1011 07:40:59.180615 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:59 crc kubenswrapper[5016]: I1011 07:40:59.180711 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:59 crc kubenswrapper[5016]: I1011 07:40:59.180733 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:59 crc kubenswrapper[5016]: I1011 07:40:59.180757 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:59 crc kubenswrapper[5016]: I1011 07:40:59.180775 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:59Z","lastTransitionTime":"2025-10-11T07:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:40:59 crc kubenswrapper[5016]: I1011 07:40:59.283262 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:40:59 crc kubenswrapper[5016]: I1011 07:40:59.283310 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:40:59 crc kubenswrapper[5016]: I1011 07:40:59.283325 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:40:59 crc kubenswrapper[5016]: I1011 07:40:59.283344 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:40:59 crc kubenswrapper[5016]: I1011 07:40:59.283358 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:40:59Z","lastTransitionTime":"2025-10-11T07:40:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[... the same five-entry cycle (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready" with an identical KubeletNotReady condition) repeats roughly every 100 ms from 07:40:59.385 through 07:41:01.040; only the timestamps differ ...]
Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.132883 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.132952 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.132997 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg"
Oct 11 07:41:01 crc kubenswrapper[5016]: E1011 07:41:01.133061 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
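The sandbox failures above and the status-patch failures recorded just below share one root cause, and the patch errors name it outright: every PATCH to the API server bounces off the pod.network-node-identity.openshift.io webhook because the webhook's serving certificate expired on 2025-08-24, well before the node's current clock of 2025-10-11. The error string carries both timestamps, so the staleness can be computed directly from the log. A small sketch, Python 3 stdlib only; the message literal is copied from the entries below, while the regex and function name are illustrative:

    import re
    from datetime import datetime

    # kubelet's webhook errors embed both clocks:
    #   "... current time <now> is after <notAfter>"
    X509 = re.compile(r'current time ([0-9TZ:-]+) is after ([0-9TZ:-]+)')

    def expiry_lag(message):
        """How long the certificate had been expired when the call failed."""
        m = X509.search(message)
        if m is None:
            return None
        now, not_after = (datetime.strptime(t, "%Y-%m-%dT%H:%M:%SZ") for t in m.groups())
        return now - not_after

    if __name__ == "__main__":
        msg = ("tls: failed to verify certificate: x509: certificate has expired "
               "or is not yet valid: current time 2025-10-11T07:41:01Z is after "
               "2025-08-24T17:21:41Z")
        print(expiry_lag(msg))  # 47 days, 14:19:20

A certificate that has been expired for over six weeks typically indicates a CRC VM started long after its baked-in certificates were minted, before the cluster has had a chance to rotate them.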
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:41:01 crc kubenswrapper[5016]: E1011 07:41:01.133287 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:41:01 crc kubenswrapper[5016]: E1011 07:41:01.133444 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.133540 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:41:01 crc kubenswrapper[5016]: E1011 07:41:01.133693 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.142521 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.142570 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.142578 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.142592 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.142602 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:01Z","lastTransitionTime":"2025-10-11T07:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.245426 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.245778 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.245981 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.246243 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.246726 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:01Z","lastTransitionTime":"2025-10-11T07:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.285630 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.301506 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-op
erator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78849c3909228621690a66bd1e9a25eefa34107846399956602bf5ef04e9f86c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:01Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.318289 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:01Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.329755 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:01Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.338277 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:01Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.349238 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.349306 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.349328 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:01 crc 
kubenswrapper[5016]: I1011 07:41:01.349348 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.349361 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:01Z","lastTransitionTime":"2025-10-11T07:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.351632 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a11cae0088711f12e7156914900e7e3c3641a0332763bb03f9457e09826635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f97ef603cbc091e8d644c74b1e87734defad234b450ed334ca79bacdcd772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env
-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2r66f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:01Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.367165 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:01Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.383418 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db26be15889ed64b2e7425d6f0c404c277bd8638a0b18c0b541d44ef0853849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:01Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.398925 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootf
s\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:01Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.415611 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73c08d4ccd3f5cc499fa38dd5bb50072726165ec
3944e64afb5aa76b26125fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73c08d4ccd3f5cc499fa38dd5bb50072726165ec3944e64afb5aa76b26125fc3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"message\\\":\\\"icate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z]\\\\nI1011 07:40:48.176169 6488 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1011 07:40:48.175926 6488 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-lbbb2 in node crc\\\\nI1011 07:40:48.176223 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-lbbb2 after 0 failed attempt(s)\\\\nI1011 07:40:48.176247 6488 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-lbbb2\\\\nI1011 07:40:48.176011 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1011 07:40:48.176259 6488 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1011 07:40:48.175842 6488 services_controller.go:451] Built service openshift-machine-config-operator/machine-config-operator cluster-wide LB for network=d\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-79nv2_openshift-ovn-kubernetes(68e9f942-5043-4fc3-9133-b608e8cd4ac0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:01Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.430512 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-459lg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ceaf34e-81b3-457f-8f03-d807f795392b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-459lg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:01Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.441896 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:01Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.451462 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.451506 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.451543 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.451561 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.451573 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:01Z","lastTransitionTime":"2025-10-11T07:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.454099 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:01Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.468574 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:01Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.481125 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:01Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.491886 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:01Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.505491 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:01Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.554379 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.554438 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.554447 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.554465 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 
07:41:01.554477 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:01Z","lastTransitionTime":"2025-10-11T07:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.657848 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.657889 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.657899 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.657912 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.657920 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:01Z","lastTransitionTime":"2025-10-11T07:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.760214 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.760255 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.760267 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.760283 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.760294 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:01Z","lastTransitionTime":"2025-10-11T07:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.863258 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.863311 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.863328 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.863350 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.863369 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:01Z","lastTransitionTime":"2025-10-11T07:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.965622 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.965684 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.965692 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.965704 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:01 crc kubenswrapper[5016]: I1011 07:41:01.965713 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:01Z","lastTransitionTime":"2025-10-11T07:41:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.068043 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.068098 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.068116 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.068138 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.068154 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:02Z","lastTransitionTime":"2025-10-11T07:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.174085 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.174125 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.174136 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.174153 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.174163 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:02Z","lastTransitionTime":"2025-10-11T07:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.277776 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.277859 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.277884 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.277918 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.277942 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:02Z","lastTransitionTime":"2025-10-11T07:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.380716 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.380755 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.380763 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.380778 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.380799 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:02Z","lastTransitionTime":"2025-10-11T07:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.484384 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.484424 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.484434 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.484451 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.484463 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:02Z","lastTransitionTime":"2025-10-11T07:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.586623 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.586682 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.586690 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.586704 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.586714 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:02Z","lastTransitionTime":"2025-10-11T07:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.688891 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.688926 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.688936 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.688950 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.688961 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:02Z","lastTransitionTime":"2025-10-11T07:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.791256 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.791311 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.791339 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.791413 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.791437 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:02Z","lastTransitionTime":"2025-10-11T07:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.894316 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.894348 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.894356 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.894367 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.894375 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:02Z","lastTransitionTime":"2025-10-11T07:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.933836 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.945010 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.951543 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92e
daf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:02Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.967825 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:02Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.983824 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:02Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.996505 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.996542 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.996552 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.996565 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.996575 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:02Z","lastTransitionTime":"2025-10-11T07:41:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:02 crc kubenswrapper[5016]: I1011 07:41:02.998298 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:02Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.008637 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.022301 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.037959 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78849c3909228621690a66bd1e9a25eefa34107846399956602bf5ef04e9f86c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 
07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 
07:41:03.049672 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.062200 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.072530 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.084511 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a11cae0088711f12e7156914900e7e3c3641a0332763bb03f9457e09826635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f97ef603cbc091e8d644c74b1e87734defad234b450ed334ca79bacdcd772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2r66f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 
07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.099048 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.099094 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.099106 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.099126 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.099138 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:03Z","lastTransitionTime":"2025-10-11T07:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.101446 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.119360 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db26be15889ed64b2e7425d6f0c404c277bd8638a0b18c0b541d44ef0853849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.132755 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.132763 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.132803 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.132907 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:41:03 crc kubenswrapper[5016]: E1011 07:41:03.132984 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:41:03 crc kubenswrapper[5016]: E1011 07:41:03.133086 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:41:03 crc kubenswrapper[5016]: E1011 07:41:03.133191 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:41:03 crc kubenswrapper[5016]: E1011 07:41:03.133285 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.134121 5016 scope.go:117] "RemoveContainer" containerID="73c08d4ccd3f5cc499fa38dd5bb50072726165ec3944e64afb5aa76b26125fc3" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.134478 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-1
1T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.156685 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-scr
ipt-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73c08d4ccd3f5cc499fa38dd5bb50072726165ec3944e64afb5aa76b26125fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73c08d4ccd3f5cc499fa38dd5bb50072726165ec3944e64afb5aa76b26125fc3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"message\\\":\\\"icate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z]\\\\nI1011 07:40:48.176169 6488 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1011 07:40:48.175926 6488 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-lbbb2 in node crc\\\\nI1011 07:40:48.176223 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-lbbb2 after 0 failed attempt(s)\\\\nI1011 07:40:48.176247 6488 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-lbbb2\\\\nI1011 07:40:48.176011 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1011 07:40:48.176259 6488 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1011 07:40:48.175842 6488 services_controller.go:451] Built service 
openshift-machine-config-operator/machine-config-operator cluster-wide LB for network=d\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-79nv2_openshift-ovn-kubernetes(68e9f942-5043-4fc3-9133-b608e8cd4ac0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.168616 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-459lg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ceaf34e-81b3-457f-8f03-d807f795392b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-459lg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.186294 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6c3028-a67b-441c-a7db-adb494840054\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052cf969ede798da08d5b097daaf1424d6ccad8eaa6699500e1d0cfe15a5625e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6c454ac0533b0f690a8d17eaec62d8fb26b02233b00f87e7fc5c03ef3790eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5052a1db05bf3dcb763f3a9e0cf9f74a9d7ad74c5a7a0baf52ec94281c67f51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba5ca7a8267df0d4a27aee36399c69320463dd65eb9841d5ce11a25fe8ba7e0\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ba5ca7a8267df0d4a27aee36399c69320463dd65eb9841d5ce11a25fe8ba7e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.200155 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78849c3909228621690a66bd1e9a25eefa34107846399956602bf5ef04e9f86c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.202724 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.202757 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 
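
Every "Failed to update status for pod" record in this window fails identically: the kubelet's status patch is rejected because the apiserver cannot call the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743/pod — the webhook's serving certificate expired 2025-08-24T17:21:41Z, well before the node clock (2025-10-11T07:41:03Z). A minimal Go sketch for confirming this from the node, assuming the listener is still serving on 127.0.0.1:9743; it dials with verification disabled purely to read the presented certificate's validity window, not to trust it:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"time"
    )

    func main() {
    	// Address taken from the failing webhook URL in the records above.
    	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
    		// Verification is skipped deliberately: the goal is to inspect
    		// the expired certificate, which strict verification rejects.
    		InsecureSkipVerify: true,
    	})
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()
    	for _, cert := range conn.ConnectionState().PeerCertificates {
    		fmt.Printf("subject=%q notBefore=%s notAfter=%s expired=%v\n",
    			cert.Subject.String(),
    			cert.NotBefore.Format(time.RFC3339),
    			cert.NotAfter.Format(time.RFC3339),
    			time.Now().After(cert.NotAfter))
    	}
    }

If the printed notAfter matches 2025-08-24T17:21:41Z, every webhook-gated API write from this node will keep failing until the certificate is rotated or the node clock issue is resolved.
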
07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.202765 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.202807 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.202817 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:03Z","lastTransitionTime":"2025-10-11T07:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.216803 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.231536 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.241673 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.255260 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a11cae0088711f12e7156914900e7e3c3641a0332763bb03f9457e09826635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f97ef603cbc091e8d644c74b1e87734defad234b450ed334ca79bacdcd772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2r66f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 
07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.270345 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.284781 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db26be15889ed64b2e7425d6f0c404c277bd8638a0b18c0b541d44ef0853849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.297833 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.305259 5016 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.305300 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.305312 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.305331 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.305342 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:03Z","lastTransitionTime":"2025-10-11T07:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.316204 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73c08d4ccd3f5cc499fa38dd5bb50072726165ec
3944e64afb5aa76b26125fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73c08d4ccd3f5cc499fa38dd5bb50072726165ec3944e64afb5aa76b26125fc3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"message\\\":\\\"icate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z]\\\\nI1011 07:40:48.176169 6488 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1011 07:40:48.175926 6488 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-lbbb2 in node crc\\\\nI1011 07:40:48.176223 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-lbbb2 after 0 failed attempt(s)\\\\nI1011 07:40:48.176247 6488 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-lbbb2\\\\nI1011 07:40:48.176011 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1011 07:40:48.176259 6488 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1011 07:40:48.175842 6488 services_controller.go:451] Built service openshift-machine-config-operator/machine-config-operator cluster-wide LB for network=d\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-79nv2_openshift-ovn-kubernetes(68e9f942-5043-4fc3-9133-b608e8cd4ac0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.329315 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-459lg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ceaf34e-81b3-457f-8f03-d807f795392b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-459lg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.345857 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.364326 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.382186 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.394995 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.405717 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.408378 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.408407 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.408416 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.408432 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.408442 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:03Z","lastTransitionTime":"2025-10-11T07:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.419507 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.480126 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-79nv2_68e9f942-5043-4fc3-9133-b608e8cd4ac0/ovnkube-controller/1.log" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.483518 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" event={"ID":"68e9f942-5043-4fc3-9133-b608e8cd4ac0","Type":"ContainerStarted","Data":"93085c3d68b77ab077ab7ad320e04ab22009c33f40e4f3676d2661e4d3455546"} Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.483951 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.503232 5016 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.510729 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.510793 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.510806 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.510829 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.510845 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:03Z","lastTransitionTime":"2025-10-11T07:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.514416 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.527351 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a11cae0088711f12e7156914900e7e3c3641a0332763bb03f9457e09826635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f97ef603cbc091e8d644c74b1e87734defad234b450ed334ca79bacdcd772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2r66f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 
07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.545645 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6c3028-a67b-441c-a7db-adb494840054\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052cf969ede798da08d5b097daaf1424d6ccad8eaa6699500e1d0cfe15a5625e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6c454ac0533b0f690a8d17eaec62d8fb26b02233b00f87e7fc5c03ef3790eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5052a1db05bf3dcb763f3a9e0cf9f74a9d7ad74c5a7a0baf52ec94281c67f51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba5ca7a8267df0d4a27aee36399c69320463dd65eb9841d5ce11a25fe8ba7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ba5ca7a8267df0d4a27aee36399c69320463dd65eb9841d5ce11a25fe8ba7e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.560306 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78849c3909228621690a66bd1e9a25eefa34107846399956602bf5ef04e9f86c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.577907 5016 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.596022 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.608822 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.613388 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.613426 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.613437 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.613452 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.613480 5016 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:03Z","lastTransitionTime":"2025-10-11T07:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.624983 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db26be15889ed64b2e7425d6f0c404c277bd8638a0b18c0b541d44ef0853849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.638359 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.656863 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\
\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93085c3d68b77ab077ab7ad320e04ab22009c33f40e4f3676d2661e4d3455546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73c08d4ccd3f5cc499fa38dd5bb50072726165ec3944e64afb5aa76b26125fc3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"message\\\":\\\"icate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z]\\\\nI1011 07:40:48.176169 6488 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1011 07:40:48.175926 6488 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-lbbb2 in node crc\\\\nI1011 07:40:48.176223 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-lbbb2 after 0 failed attempt(s)\\\\nI1011 07:40:48.176247 6488 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-lbbb2\\\\nI1011 07:40:48.176011 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1011 07:40:48.176259 6488 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1011 07:40:48.175842 6488 services_controller.go:451] Built service 
openshift-machine-config-operator/machine-config-operator cluster-wide LB for network=d\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.666391 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-459lg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ceaf34e-81b3-457f-8f03-d807f795392b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-459lg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.678804 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.693309 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.706029 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.715496 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.715535 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.715546 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.715563 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.715576 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:03Z","lastTransitionTime":"2025-10-11T07:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.725164 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.744019 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:03Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.818346 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.818412 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.818433 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.818453 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.818466 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:03Z","lastTransitionTime":"2025-10-11T07:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.921579 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.921663 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.921673 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.921689 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:03 crc kubenswrapper[5016]: I1011 07:41:03.921699 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:03Z","lastTransitionTime":"2025-10-11T07:41:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.024637 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.024725 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.024740 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.024761 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.024776 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:04Z","lastTransitionTime":"2025-10-11T07:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.128240 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.128287 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.128298 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.128316 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.128327 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:04Z","lastTransitionTime":"2025-10-11T07:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.231851 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.231884 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.231892 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.231906 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.231914 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:04Z","lastTransitionTime":"2025-10-11T07:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.333757 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.333999 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.334065 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.334129 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.334197 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:04Z","lastTransitionTime":"2025-10-11T07:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.437247 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.437289 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.437298 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.437312 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.437320 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:04Z","lastTransitionTime":"2025-10-11T07:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.490342 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-79nv2_68e9f942-5043-4fc3-9133-b608e8cd4ac0/ovnkube-controller/2.log" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.491499 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-79nv2_68e9f942-5043-4fc3-9133-b608e8cd4ac0/ovnkube-controller/1.log" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.495107 5016 generic.go:334] "Generic (PLEG): container finished" podID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerID="93085c3d68b77ab077ab7ad320e04ab22009c33f40e4f3676d2661e4d3455546" exitCode=1 Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.495155 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" event={"ID":"68e9f942-5043-4fc3-9133-b608e8cd4ac0","Type":"ContainerDied","Data":"93085c3d68b77ab077ab7ad320e04ab22009c33f40e4f3676d2661e4d3455546"} Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.495226 5016 scope.go:117] "RemoveContainer" containerID="73c08d4ccd3f5cc499fa38dd5bb50072726165ec3944e64afb5aa76b26125fc3" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.496854 5016 scope.go:117] "RemoveContainer" containerID="93085c3d68b77ab077ab7ad320e04ab22009c33f40e4f3676d2661e4d3455546" Oct 11 07:41:04 crc kubenswrapper[5016]: E1011 07:41:04.497284 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-79nv2_openshift-ovn-kubernetes(68e9f942-5043-4fc3-9133-b608e8cd4ac0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.518142 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a11cae0088711f12e7156914900e7e3c3641a0332763bb03f9457e09826635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f97ef603cbc091e8d644c74b1e87734defad234b450ed334ca79bacdcd772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2r66f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:04Z is after 2025-08-24T17:21:41Z" Oct 11 
07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.537070 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6c3028-a67b-441c-a7db-adb494840054\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052cf969ede798da08d5b097daaf1424d6ccad8eaa6699500e1d0cfe15a5625e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6c454ac0533b0f690a8d17eaec62d8fb26b02233b00f87e7fc5c03ef3790eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5052a1db05bf3dcb763f3a9e0cf9f74a9d7ad74c5a7a0baf52ec94281c67f51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba5ca7a8267df0d4a27aee36399c69320463dd65eb9841d5ce11a25fe8ba7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ba5ca7a8267df0d4a27aee36399c69320463dd65eb9841d5ce11a25fe8ba7e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:04Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.539775 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.539849 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.539865 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.539892 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.539907 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:04Z","lastTransitionTime":"2025-10-11T07:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.555394 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78849c3909228621690a66bd1e9a25eefa34107846399956602bf5ef04e9f86c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:04Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.569141 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:04Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.583108 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:04Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.596687 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:04Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.609640 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:04Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.626186 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db26be15889ed64b2e7425d6f0c404c277bd8638a0b18c0b541d44ef0853849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:04Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.640477 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:04Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.642293 5016 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.642339 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.642353 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.642373 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.642385 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:04Z","lastTransitionTime":"2025-10-11T07:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.662857 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93085c3d68b77ab077ab7ad320e04ab22009c33f
40e4f3676d2661e4d3455546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73c08d4ccd3f5cc499fa38dd5bb50072726165ec3944e64afb5aa76b26125fc3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"message\\\":\\\"icate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:40:48Z is after 2025-08-24T17:21:41Z]\\\\nI1011 07:40:48.176169 6488 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.4 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {43933d5e-3c3b-4ff8-8926-04ac25de450e}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1011 07:40:48.175926 6488 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-lbbb2 in node crc\\\\nI1011 07:40:48.176223 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-lbbb2 after 0 failed attempt(s)\\\\nI1011 07:40:48.176247 6488 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-lbbb2\\\\nI1011 07:40:48.176011 6488 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1011 07:40:48.176259 6488 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1011 07:40:48.175842 6488 services_controller.go:451] Built service openshift-machine-config-operator/machine-config-operator cluster-wide LB for network=d\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93085c3d68b77ab077ab7ad320e04ab22009c33f40e4f3676d2661e4d3455546\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:41:04Z\\\",\\\"message\\\":\\\"er\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1011 07:41:03.972761 6716 services_controller.go:452] Built service openshift-machine-config-operator/machine-config-operator per-node LB for network=default: []services.LB{}\\\\nI1011 07:41:03.972772 6716 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1011 07:41:03.972783 6716 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1011 07:41:03.972786 6716 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1011 07:41:03.972764 6716 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializatio\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:41:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:04Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.674173 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-459lg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ceaf34e-81b3-457f-8f03-d807f795392b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-459lg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:04Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.685833 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:04Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.697494 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:04Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.707685 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:04Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.720395 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:04Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.732298 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:04Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.741932 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:04Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.744630 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.744677 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.744689 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.744704 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.744714 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:04Z","lastTransitionTime":"2025-10-11T07:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.846463 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.846494 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.846502 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.846515 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.846525 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:04Z","lastTransitionTime":"2025-10-11T07:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.948741 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.948772 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.948780 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.948792 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:04 crc kubenswrapper[5016]: I1011 07:41:04.948801 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:04Z","lastTransitionTime":"2025-10-11T07:41:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.051286 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.051338 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.051348 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.051364 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.051375 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:05Z","lastTransitionTime":"2025-10-11T07:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.133199 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.133248 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.133244 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.133226 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:41:05 crc kubenswrapper[5016]: E1011 07:41:05.133435 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:41:05 crc kubenswrapper[5016]: E1011 07:41:05.133502 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:41:05 crc kubenswrapper[5016]: E1011 07:41:05.133645 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:41:05 crc kubenswrapper[5016]: E1011 07:41:05.133749 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.153461 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.153520 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.153532 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.153549 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.153560 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:05Z","lastTransitionTime":"2025-10-11T07:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.256843 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.256904 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.256921 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.256946 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.256964 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:05Z","lastTransitionTime":"2025-10-11T07:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.361171 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.361241 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.361254 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.361282 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.361297 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:05Z","lastTransitionTime":"2025-10-11T07:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.465389 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.465480 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.465509 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.465546 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.465574 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:05Z","lastTransitionTime":"2025-10-11T07:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.501119 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-79nv2_68e9f942-5043-4fc3-9133-b608e8cd4ac0/ovnkube-controller/2.log" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.506339 5016 scope.go:117] "RemoveContainer" containerID="93085c3d68b77ab077ab7ad320e04ab22009c33f40e4f3676d2661e4d3455546" Oct 11 07:41:05 crc kubenswrapper[5016]: E1011 07:41:05.506605 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-79nv2_openshift-ovn-kubernetes(68e9f942-5043-4fc3-9133-b608e8cd4ac0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.537909 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93085c3d68b77ab077ab7ad320e04ab22009c33f
40e4f3676d2661e4d3455546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93085c3d68b77ab077ab7ad320e04ab22009c33f40e4f3676d2661e4d3455546\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:41:04Z\\\",\\\"message\\\":\\\"er\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1011 07:41:03.972761 6716 services_controller.go:452] Built service openshift-machine-config-operator/machine-config-operator per-node LB for network=default: []services.LB{}\\\\nI1011 07:41:03.972772 6716 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1011 07:41:03.972783 6716 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1011 07:41:03.972786 6716 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1011 07:41:03.972764 6716 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializatio\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:41:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-79nv2_openshift-ovn-kubernetes(68e9f942-5043-4fc3-9133-b608e8cd4ac0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:05Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.552381 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-459lg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ceaf34e-81b3-457f-8f03-d807f795392b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-459lg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:05Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.567749 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:05Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.569253 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.569317 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.569342 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.569373 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.569398 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:05Z","lastTransitionTime":"2025-10-11T07:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.589319 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db26be15889ed64b2e7425d6f0c404c277bd8638a0b18c0b541d44ef0853849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:05Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.611457 5016 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:05Z is after 2025-08-24T17:21:41Z" Oct 11 
07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.636379 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:05Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.660545 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:05Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.672812 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.672888 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.672899 5016 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.672922 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.672938 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:05Z","lastTransitionTime":"2025-10-11T07:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.680689 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:05Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.700520 5016 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b8
9f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:05Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.719842 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:05Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.737204 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:05Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.752191 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:05Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.770620 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:05Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.775630 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.775691 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.775704 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.775719 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.775731 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:05Z","lastTransitionTime":"2025-10-11T07:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.785914 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:05Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.786457 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ceaf34e-81b3-457f-8f03-d807f795392b-metrics-certs\") pod \"network-metrics-daemon-459lg\" (UID: \"9ceaf34e-81b3-457f-8f03-d807f795392b\") " pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:41:05 crc kubenswrapper[5016]: E1011 07:41:05.786607 5016 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 07:41:05 crc kubenswrapper[5016]: E1011 07:41:05.786701 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ceaf34e-81b3-457f-8f03-d807f795392b-metrics-certs podName:9ceaf34e-81b3-457f-8f03-d807f795392b nodeName:}" failed. No retries permitted until 2025-10-11 07:41:21.786678242 +0000 UTC m=+69.687134188 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9ceaf34e-81b3-457f-8f03-d807f795392b-metrics-certs") pod "network-metrics-daemon-459lg" (UID: "9ceaf34e-81b3-457f-8f03-d807f795392b") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.801820 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a11cae0088711f12e7156914900e7e3c3641a0332763bb03f9457e09826635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f97ef603cbc091e8d644c74b1e87734defad234b450ed334ca79bacdcd772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2r66f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:05Z is after 2025-08-24T17:21:41Z" Oct 11 
07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.815085 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6c3028-a67b-441c-a7db-adb494840054\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052cf969ede798da08d5b097daaf1424d6ccad8eaa6699500e1d0cfe15a5625e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6c454ac0533b0f690a8d17eaec62d8fb26b02233b00f87e7fc5c03ef3790eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5052a1db05bf3dcb763f3a9e0cf9f74a9d7ad74c5a7a0baf52ec94281c67f51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba5ca7a8267df0d4a27aee36399c69320463dd65eb9841d5ce11a25fe8ba7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ba5ca7a8267df0d4a27aee36399c69320463dd65eb9841d5ce11a25fe8ba7e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:05Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.829453 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78849c3909228621690a66bd1e9a25eefa34107846399956602bf5ef04e9f86c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:05Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.877846 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.877881 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.877893 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.877915 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.877930 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:05Z","lastTransitionTime":"2025-10-11T07:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.981322 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.981414 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.981433 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.981464 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:05 crc kubenswrapper[5016]: I1011 07:41:05.981486 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:05Z","lastTransitionTime":"2025-10-11T07:41:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.087465 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.087516 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.087529 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.087550 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.087564 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:06Z","lastTransitionTime":"2025-10-11T07:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.190251 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.190320 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.190523 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.190550 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.190569 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:06Z","lastTransitionTime":"2025-10-11T07:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.293344 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.293385 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.293394 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.293409 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.293421 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:06Z","lastTransitionTime":"2025-10-11T07:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.395421 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.395486 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.395497 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.395515 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.395527 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:06Z","lastTransitionTime":"2025-10-11T07:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.497714 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.497783 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.497800 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.497825 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.497848 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:06Z","lastTransitionTime":"2025-10-11T07:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.599938 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.599988 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.600005 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.600023 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.600035 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:06Z","lastTransitionTime":"2025-10-11T07:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.703084 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.703122 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.703131 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.703147 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.703156 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:06Z","lastTransitionTime":"2025-10-11T07:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.805977 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.806033 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.806044 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.806061 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.806074 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:06Z","lastTransitionTime":"2025-10-11T07:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.898970 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:41:06 crc kubenswrapper[5016]: E1011 07:41:06.899119 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:41:38.899100777 +0000 UTC m=+86.799556723 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.899151 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.899175 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.899197 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.899216 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:41:06 crc kubenswrapper[5016]: E1011 07:41:06.899256 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 07:41:06 crc kubenswrapper[5016]: E1011 07:41:06.899269 5016 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 11 07:41:06 crc kubenswrapper[5016]: E1011 07:41:06.899272 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 07:41:06 crc kubenswrapper[5016]: E1011 07:41:06.899286 5016 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:41:06 crc kubenswrapper[5016]: E1011 07:41:06.899297 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" 
failed. No retries permitted until 2025-10-11 07:41:38.899290972 +0000 UTC m=+86.799746918 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 11 07:41:06 crc kubenswrapper[5016]: E1011 07:41:06.899320 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-11 07:41:38.899310843 +0000 UTC m=+86.799766789 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:41:06 crc kubenswrapper[5016]: E1011 07:41:06.899336 5016 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 11 07:41:06 crc kubenswrapper[5016]: E1011 07:41:06.899425 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-11 07:41:38.899404045 +0000 UTC m=+86.799860001 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 11 07:41:06 crc kubenswrapper[5016]: E1011 07:41:06.899461 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 07:41:06 crc kubenswrapper[5016]: E1011 07:41:06.899509 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 07:41:06 crc kubenswrapper[5016]: E1011 07:41:06.899532 5016 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:41:06 crc kubenswrapper[5016]: E1011 07:41:06.899625 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-11 07:41:38.89959702 +0000 UTC m=+86.800053006 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.908335 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.908528 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.908727 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.908855 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.908988 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:06Z","lastTransitionTime":"2025-10-11T07:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.933853 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.933885 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.933905 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.933920 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.933932 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:06Z","lastTransitionTime":"2025-10-11T07:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:06 crc kubenswrapper[5016]: E1011 07:41:06.949344 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:06Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.953929 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.953986 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.954007 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.954032 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.954050 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:06Z","lastTransitionTime":"2025-10-11T07:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:06 crc kubenswrapper[5016]: E1011 07:41:06.970204 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:06Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.974685 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.974734 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.974745 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.974765 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.974777 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:06Z","lastTransitionTime":"2025-10-11T07:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:06 crc kubenswrapper[5016]: E1011 07:41:06.992754 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:06Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.996904 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.997119 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.997259 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.997401 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:06 crc kubenswrapper[5016]: I1011 07:41:06.997550 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:06Z","lastTransitionTime":"2025-10-11T07:41:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:07 crc kubenswrapper[5016]: E1011 07:41:07.011280 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:07Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.015312 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.015363 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.015379 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.015406 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.015424 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:07Z","lastTransitionTime":"2025-10-11T07:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:07 crc kubenswrapper[5016]: E1011 07:41:07.029608 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:07Z is after 2025-08-24T17:21:41Z"
Oct 11 07:41:07 crc kubenswrapper[5016]: E1011 07:41:07.029758 5016 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.031556 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.031590 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.031604 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.031619 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.031629 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:07Z","lastTransitionTime":"2025-10-11T07:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.132307 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.132379 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.132312 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:41:07 crc kubenswrapper[5016]: E1011 07:41:07.132466 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:41:07 crc kubenswrapper[5016]: E1011 07:41:07.132552 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:41:07 crc kubenswrapper[5016]: E1011 07:41:07.132714 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.132830 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:41:07 crc kubenswrapper[5016]: E1011 07:41:07.132998 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.133537 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.133584 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.133601 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.133626 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.133643 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:07Z","lastTransitionTime":"2025-10-11T07:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.236640 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.236735 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.236752 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.237099 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.237126 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:07Z","lastTransitionTime":"2025-10-11T07:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.340896 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.341137 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.341199 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.341269 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.341373 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:07Z","lastTransitionTime":"2025-10-11T07:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.444788 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.444854 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.444881 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.444914 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.444938 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:07Z","lastTransitionTime":"2025-10-11T07:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.548568 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.548639 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.548751 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.548791 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.548814 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:07Z","lastTransitionTime":"2025-10-11T07:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.651853 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.651935 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.652031 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.652422 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.652484 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:07Z","lastTransitionTime":"2025-10-11T07:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.755946 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.756024 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.756052 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.756084 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.756107 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:07Z","lastTransitionTime":"2025-10-11T07:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.858853 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.858884 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.858892 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.858907 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.858915 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:07Z","lastTransitionTime":"2025-10-11T07:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.961463 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.961696 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.961767 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.961867 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:07 crc kubenswrapper[5016]: I1011 07:41:07.961931 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:07Z","lastTransitionTime":"2025-10-11T07:41:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.064751 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.064785 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.064795 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.064809 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.064818 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:08Z","lastTransitionTime":"2025-10-11T07:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.167016 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.167105 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.167117 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.167136 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.167148 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:08Z","lastTransitionTime":"2025-10-11T07:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.270250 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.270311 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.270328 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.270353 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.270369 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:08Z","lastTransitionTime":"2025-10-11T07:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.373442 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.373519 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.373537 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.374068 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.374144 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:08Z","lastTransitionTime":"2025-10-11T07:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.478208 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.478267 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.478288 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.478311 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.478329 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:08Z","lastTransitionTime":"2025-10-11T07:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.580852 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.580917 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.580935 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.580959 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.580976 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:08Z","lastTransitionTime":"2025-10-11T07:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.684078 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.684201 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.684223 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.684246 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.684262 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:08Z","lastTransitionTime":"2025-10-11T07:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.788050 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.788108 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.788125 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.788148 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.788164 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:08Z","lastTransitionTime":"2025-10-11T07:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.890461 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.890528 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.890546 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.890576 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.890619 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:08Z","lastTransitionTime":"2025-10-11T07:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.993965 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.994006 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.994013 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.994030 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:08 crc kubenswrapper[5016]: I1011 07:41:08.994039 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:08Z","lastTransitionTime":"2025-10-11T07:41:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.097097 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.097142 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.097150 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.097166 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.097178 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:09Z","lastTransitionTime":"2025-10-11T07:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.133198 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.133197 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.133267 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.133354 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:41:09 crc kubenswrapper[5016]: E1011 07:41:09.133464 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:41:09 crc kubenswrapper[5016]: E1011 07:41:09.133645 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:41:09 crc kubenswrapper[5016]: E1011 07:41:09.133842 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:41:09 crc kubenswrapper[5016]: E1011 07:41:09.133958 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.200222 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.200540 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.200555 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.200574 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.200588 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:09Z","lastTransitionTime":"2025-10-11T07:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.303294 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.303346 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.303362 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.303384 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.303402 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:09Z","lastTransitionTime":"2025-10-11T07:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.405407 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.405494 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.405514 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.405543 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.405564 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:09Z","lastTransitionTime":"2025-10-11T07:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.509268 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.509317 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.509329 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.509358 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.509370 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:09Z","lastTransitionTime":"2025-10-11T07:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.611806 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.611848 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.611858 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.611872 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.611881 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:09Z","lastTransitionTime":"2025-10-11T07:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.714201 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.714253 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.714265 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.714280 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.714290 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:09Z","lastTransitionTime":"2025-10-11T07:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.817254 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.817331 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.817353 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.817383 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.817405 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:09Z","lastTransitionTime":"2025-10-11T07:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.919547 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.919609 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.919628 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.919676 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:09 crc kubenswrapper[5016]: I1011 07:41:09.919692 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:09Z","lastTransitionTime":"2025-10-11T07:41:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 11 07:41:10 crc kubenswrapper[5016]: I1011 07:41:10.022932 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:10 crc kubenswrapper[5016]: I1011 07:41:10.022987 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:10 crc kubenswrapper[5016]: I1011 07:41:10.023014 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:10 crc kubenswrapper[5016]: I1011 07:41:10.023062 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:10 crc kubenswrapper[5016]: I1011 07:41:10.023079 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:10Z","lastTransitionTime":"2025-10-11T07:41:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:11 crc kubenswrapper[5016]: I1011 07:41:11.053383 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:11 crc kubenswrapper[5016]: I1011 07:41:11.053445 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:11 crc kubenswrapper[5016]: I1011 07:41:11.053459 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:11 crc kubenswrapper[5016]: I1011 07:41:11.053476 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:11 crc kubenswrapper[5016]: I1011 07:41:11.053488 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:11Z","lastTransitionTime":"2025-10-11T07:41:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:11 crc kubenswrapper[5016]: I1011 07:41:11.133244 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 11 07:41:11 crc kubenswrapper[5016]: I1011 07:41:11.133310 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg"
Oct 11 07:41:11 crc kubenswrapper[5016]: E1011 07:41:11.133364 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 11 07:41:11 crc kubenswrapper[5016]: I1011 07:41:11.133396 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 11 07:41:11 crc kubenswrapper[5016]: I1011 07:41:11.133407 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 11 07:41:11 crc kubenswrapper[5016]: E1011 07:41:11.133623 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b"
Oct 11 07:41:11 crc kubenswrapper[5016]: E1011 07:41:11.133694 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 11 07:41:11 crc kubenswrapper[5016]: E1011 07:41:11.133746 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 11 07:41:12 crc kubenswrapper[5016]: I1011 07:41:12.084057 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:12 crc kubenswrapper[5016]: I1011 07:41:12.084091 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:12 crc kubenswrapper[5016]: I1011 07:41:12.084100 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:12 crc kubenswrapper[5016]: I1011 07:41:12.084112 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:12 crc kubenswrapper[5016]: I1011 07:41:12.084121 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:12Z","lastTransitionTime":"2025-10-11T07:41:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.007805 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.007871 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.007881 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.007896 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.007904 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:13Z","lastTransitionTime":"2025-10-11T07:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.132588 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.132692 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.132700 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 11 07:41:13 crc kubenswrapper[5016]: E1011 07:41:13.132828 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.132878 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg"
Oct 11 07:41:13 crc kubenswrapper[5016]: E1011 07:41:13.133340 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 11 07:41:13 crc kubenswrapper[5016]: E1011 07:41:13.133539 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b"
pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:41:13 crc kubenswrapper[5016]: E1011 07:41:13.133895 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.156089 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6c3028-a67b-441c-a7db-adb494840054\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052cf969ede798da08d5b097daaf1424d6ccad8eaa6699500e1d0cfe15a5625e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6c454ac0533b0f690a8d17eaec62d8fb26b02233b00f87e7fc5c03ef3790eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5052a1db05bf3dcb763f3a9e0cf9f74a9d7ad74c5a7a0baf52ec94281c67f51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba5ca7a8267df0d4a27aee36399c69320463dd65eb9841d5ce11a25fe8ba7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ba5ca7a8267df0d4a27aee36399c69320463dd65eb9841d5ce11a25fe8ba7e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:13Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.177223 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78849c3909228621690a66bd1e9a25eefa34107846399956602bf5ef04e9f86c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:13Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.192968 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:13Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.206547 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:13Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.214233 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.214288 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.214302 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.214318 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.214329 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:13Z","lastTransitionTime":"2025-10-11T07:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.219909 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:13Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.235482 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a11cae0088711f12e7156914900e7e3c3641a0332763bb03f9457e09826635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f97ef603cbc091e8d644c74b1e87734defad234b450ed334ca79bacdcd772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2r66f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:13Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.256534 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:13Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.280983 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db26be15889ed64b2e7425d6f0c404c277bd8638a0b18c0b541d44ef0853849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:13Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.297085 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootf
s\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:13Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.315139 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93085c3d68b77ab077ab7ad320e04ab22009c33f
40e4f3676d2661e4d3455546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93085c3d68b77ab077ab7ad320e04ab22009c33f40e4f3676d2661e4d3455546\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:41:04Z\\\",\\\"message\\\":\\\"er\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1011 07:41:03.972761 6716 services_controller.go:452] Built service openshift-machine-config-operator/machine-config-operator per-node LB for network=default: []services.LB{}\\\\nI1011 07:41:03.972772 6716 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1011 07:41:03.972783 6716 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1011 07:41:03.972786 6716 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1011 07:41:03.972764 6716 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializatio\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:41:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-79nv2_openshift-ovn-kubernetes(68e9f942-5043-4fc3-9133-b608e8cd4ac0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:13Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.317794 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.317873 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.317893 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.317925 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.317944 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:13Z","lastTransitionTime":"2025-10-11T07:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.331872 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-459lg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ceaf34e-81b3-457f-8f03-d807f795392b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-459lg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:13Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.346892 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:13Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.368966 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:13Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.384705 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:13Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.408010 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:13Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.422988 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:13Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.423056 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.423257 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.423272 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.423293 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.423307 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:13Z","lastTransitionTime":"2025-10-11T07:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.443487 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:13Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.527008 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.527046 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.527054 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.527068 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.527078 5016 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:13Z","lastTransitionTime":"2025-10-11T07:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.629285 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.629327 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.629337 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.629351 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.629361 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:13Z","lastTransitionTime":"2025-10-11T07:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.731925 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.731989 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.731999 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.732013 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.732023 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:13Z","lastTransitionTime":"2025-10-11T07:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.835219 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.835338 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.835359 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.835381 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.835395 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:13Z","lastTransitionTime":"2025-10-11T07:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.938737 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.938824 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.938872 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.938897 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:13 crc kubenswrapper[5016]: I1011 07:41:13.938911 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:13Z","lastTransitionTime":"2025-10-11T07:41:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.041808 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.041883 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.041903 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.042427 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.042490 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:14Z","lastTransitionTime":"2025-10-11T07:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
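
Interleaved with this NotReady churn, every pod status patch above is rejected because the node-identity webhook at https://127.0.0.1:9743 serves a certificate that expired on 2025-08-24T17:21:41Z, well before the current time of 2025-10-11T07:41:13Z. A hedged sketch for confirming that from the node, reusing the address from the log; InsecureSkipVerify is deliberate here, since verification is exactly what fails:

package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

// Connects to the webhook endpoint named in the log and prints the
// validity window of the certificate it serves. Verification is disabled
// so the handshake succeeds even with an expired certificate; we only
// want to read its NotBefore/NotAfter fields.
func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial webhook: %v", err)
	}
	defer conn.Close()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
			cert.Subject, cert.NotBefore, cert.NotAfter)
	}
}

The same expiry shows up again below when the kubelet tries to patch the node object itself, so both pod-status and node-status traffic is gated on this one certificate.
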
Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.144899 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.144931 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.144939 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.144950 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.144958 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:14Z","lastTransitionTime":"2025-10-11T07:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.246511 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.246548 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.246558 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.246571 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.246579 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:14Z","lastTransitionTime":"2025-10-11T07:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.349992 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.350097 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.350117 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.350142 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.350160 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:14Z","lastTransitionTime":"2025-10-11T07:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
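
The failed patches earlier in the log are emitted as Go-quoted strings, which is why every quote appears as \" or \\\" depending on nesting. A small sketch that strips one level of quoting and pretty-prints the payload; `quoted` below holds a fragment trimmed from the node-resolver entry above, and depending on how a line was captured you may need to apply Unquote twice:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"strconv"
)

// Example payload trimmed from a "failed to patch status" line above;
// the full patches are much longer but decode the same way.
const quoted = `"{\"metadata\":{\"uid\":\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\"}}"`

func main() {
	// strconv.Unquote removes one level of Go string quoting.
	raw, err := strconv.Unquote(quoted)
	if err != nil {
		log.Fatalf("unquote: %v", err)
	}
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(raw), "", "  "); err != nil {
		log.Fatalf("indent: %v", err)
	}
	fmt.Println(pretty.String())
}
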
Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.453441 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.453481 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.453490 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.453504 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.453513 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:14Z","lastTransitionTime":"2025-10-11T07:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.556338 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.556399 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.556414 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.556432 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.556445 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:14Z","lastTransitionTime":"2025-10-11T07:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.659909 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.659973 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.659990 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.660012 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.660029 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:14Z","lastTransitionTime":"2025-10-11T07:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.763382 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.763966 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.763998 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.764025 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.764042 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:14Z","lastTransitionTime":"2025-10-11T07:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.867035 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.867082 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.867107 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.867126 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.867138 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:14Z","lastTransitionTime":"2025-10-11T07:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.970458 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.970512 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.970527 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.970549 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:14 crc kubenswrapper[5016]: I1011 07:41:14.970566 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:14Z","lastTransitionTime":"2025-10-11T07:41:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.073173 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.073222 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.073232 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.073250 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.073261 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:15Z","lastTransitionTime":"2025-10-11T07:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.133213 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:41:15 crc kubenswrapper[5016]: E1011 07:41:15.133386 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.133612 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:41:15 crc kubenswrapper[5016]: E1011 07:41:15.133734 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.133890 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.134026 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:41:15 crc kubenswrapper[5016]: E1011 07:41:15.134172 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b"
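
Four workloads are wedged in this loop: network-check-target, networking-console-plugin, network-metrics-daemon, and network-check-source, each skipped with "network is not ready". A throwaway sketch for tallying such entries out of a saved journal (for example the output of journalctl -u kubelet piped to it), with the regex tailored to the key=value format shown above and nothing more:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Scans a saved kubelet journal on stdin for the "Error syncing pod,
// skipping" entries above and counts failures per pod.
func main() {
	re := regexp.MustCompile(`"Error syncing pod, skipping".*?pod="([^"]+)" podUID="([^"]+)"`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]+" uid="+m[2]]++
		}
	}
	for pod, n := range counts {
		fmt.Printf("%4d %s\n", n, pod)
	}
}
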
pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:41:15 crc kubenswrapper[5016]: E1011 07:41:15.134420 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.176063 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.176113 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.176126 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.176144 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.176159 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:15Z","lastTransitionTime":"2025-10-11T07:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.279682 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.279731 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.279742 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.279759 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.279770 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:15Z","lastTransitionTime":"2025-10-11T07:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.382127 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.382168 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.382180 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.382200 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.382213 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:15Z","lastTransitionTime":"2025-10-11T07:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.485294 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.485373 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.485391 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.485416 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.485433 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:15Z","lastTransitionTime":"2025-10-11T07:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.587813 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.587871 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.587888 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.587921 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.587937 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:15Z","lastTransitionTime":"2025-10-11T07:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.690368 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.690404 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.690414 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.690429 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.690440 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:15Z","lastTransitionTime":"2025-10-11T07:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.793061 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.793145 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.793168 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.793188 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.793238 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:15Z","lastTransitionTime":"2025-10-11T07:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.897338 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.897389 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.897414 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.897436 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:15 crc kubenswrapper[5016]: I1011 07:41:15.897451 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:15Z","lastTransitionTime":"2025-10-11T07:41:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.000256 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.000337 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.000363 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.000393 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.000415 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:16Z","lastTransitionTime":"2025-10-11T07:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.102985 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.103052 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.103076 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.103108 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.103131 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:16Z","lastTransitionTime":"2025-10-11T07:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.133376 5016 scope.go:117] "RemoveContainer" containerID="93085c3d68b77ab077ab7ad320e04ab22009c33f40e4f3676d2661e4d3455546" Oct 11 07:41:16 crc kubenswrapper[5016]: E1011 07:41:16.133719 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-79nv2_openshift-ovn-kubernetes(68e9f942-5043-4fc3-9133-b608e8cd4ac0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.206233 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.206299 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.206317 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.206341 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.206359 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:16Z","lastTransitionTime":"2025-10-11T07:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.308988 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.309035 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.309049 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.309065 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.309077 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:16Z","lastTransitionTime":"2025-10-11T07:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
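
The "back-off 20s" above is the kubelet's standard CrashLoopBackOff delay for ovnkube-controller. As far as I know the defaults are a 10s initial back-off that doubles per restart up to a 5m cap, which would make 20s the second attempt; the constants in this sketch are that assumption, not values read from this cluster's configuration:

package main

import (
	"fmt"
	"time"
)

// Assumed kubelet defaults for the container restart back-off: 10s
// initial, doubled after each crash, capped at 5m. "back-off 20s" in the
// log matches the second step of this schedule.
func main() {
	backoff := 10 * time.Second
	const maxBackoff = 5 * time.Minute
	for restart := 1; restart <= 8; restart++ {
		fmt.Printf("restart %d: back-off %s\n", restart, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}
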
Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.411097 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.411160 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.411179 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.411202 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.411220 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:16Z","lastTransitionTime":"2025-10-11T07:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.513928 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.513999 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.514024 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.514056 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.514082 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:16Z","lastTransitionTime":"2025-10-11T07:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.617060 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.617109 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.617124 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.617148 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.617168 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:16Z","lastTransitionTime":"2025-10-11T07:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.720356 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.720423 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.720445 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.720473 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.720495 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:16Z","lastTransitionTime":"2025-10-11T07:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.823321 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.823364 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.823376 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.823391 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.823402 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:16Z","lastTransitionTime":"2025-10-11T07:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.926114 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.926488 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.926502 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.926518 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:16 crc kubenswrapper[5016]: I1011 07:41:16.926531 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:16Z","lastTransitionTime":"2025-10-11T07:41:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.029474 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.029510 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.029521 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.029537 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.029549 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:17Z","lastTransitionTime":"2025-10-11T07:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.070510 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.070557 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.070570 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.070592 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.070609 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:17Z","lastTransitionTime":"2025-10-11T07:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:17 crc kubenswrapper[5016]: E1011 07:41:17.089726 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:17Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.093840 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.093890 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.093903 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.093921 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.093932 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:17Z","lastTransitionTime":"2025-10-11T07:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:17 crc kubenswrapper[5016]: E1011 07:41:17.108779 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:17Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.112225 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.112252 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.112264 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.112280 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.112292 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:17Z","lastTransitionTime":"2025-10-11T07:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:17 crc kubenswrapper[5016]: E1011 07:41:17.128419 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:17Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.132323 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:41:17 crc kubenswrapper[5016]: E1011 07:41:17.132527 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.132986 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:41:17 crc kubenswrapper[5016]: E1011 07:41:17.133121 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.133309 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.133339 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.133353 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.133376 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.133393 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:17Z","lastTransitionTime":"2025-10-11T07:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.134936 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:41:17 crc kubenswrapper[5016]: E1011 07:41:17.135049 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.135095 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:41:17 crc kubenswrapper[5016]: E1011 07:41:17.135256 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:41:17 crc kubenswrapper[5016]: E1011 07:41:17.153597 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:17Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.158311 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.158385 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.158408 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.158437 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.158460 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:17Z","lastTransitionTime":"2025-10-11T07:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:17 crc kubenswrapper[5016]: E1011 07:41:17.174600 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:17Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:17 crc kubenswrapper[5016]: E1011 07:41:17.174867 5016 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.177150 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.177191 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.177206 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.177225 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.177238 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:17Z","lastTransitionTime":"2025-10-11T07:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.279573 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.279628 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.279640 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.279678 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.279693 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:17Z","lastTransitionTime":"2025-10-11T07:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.382544 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.382594 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.382685 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.382707 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.382721 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:17Z","lastTransitionTime":"2025-10-11T07:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.485588 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.485621 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.485629 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.485642 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.485662 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:17Z","lastTransitionTime":"2025-10-11T07:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.588103 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.588142 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.588155 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.588171 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.588182 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:17Z","lastTransitionTime":"2025-10-11T07:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.691569 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.691606 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.691617 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.691635 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.691645 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:17Z","lastTransitionTime":"2025-10-11T07:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.794850 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.794896 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.794908 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.794925 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.794937 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:17Z","lastTransitionTime":"2025-10-11T07:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.897558 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.897588 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.897601 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.897616 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.897628 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:17Z","lastTransitionTime":"2025-10-11T07:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.999573 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.999607 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.999614 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.999628 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:17 crc kubenswrapper[5016]: I1011 07:41:17.999639 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:17Z","lastTransitionTime":"2025-10-11T07:41:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.101566 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.101594 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.101603 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.101616 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.101625 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:18Z","lastTransitionTime":"2025-10-11T07:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.203486 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.203519 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.203528 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.203541 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.203553 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:18Z","lastTransitionTime":"2025-10-11T07:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.306213 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.306244 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.306254 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.306271 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.306283 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:18Z","lastTransitionTime":"2025-10-11T07:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.408828 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.408867 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.408877 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.408893 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.408903 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:18Z","lastTransitionTime":"2025-10-11T07:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.511018 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.511054 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.511063 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.511080 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.511089 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:18Z","lastTransitionTime":"2025-10-11T07:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.613965 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.614000 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.614011 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.614025 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.614034 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:18Z","lastTransitionTime":"2025-10-11T07:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.716302 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.716343 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.716351 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.716364 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.716372 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:18Z","lastTransitionTime":"2025-10-11T07:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.819509 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.819554 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.819565 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.819583 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.819595 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:18Z","lastTransitionTime":"2025-10-11T07:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.922694 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.922734 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.922745 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.922762 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:18 crc kubenswrapper[5016]: I1011 07:41:18.922774 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:18Z","lastTransitionTime":"2025-10-11T07:41:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.025418 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.025471 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.025490 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.025510 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.025519 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:19Z","lastTransitionTime":"2025-10-11T07:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.127206 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.127236 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.127246 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.127262 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.127272 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:19Z","lastTransitionTime":"2025-10-11T07:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.132919 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.132957 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.132997 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:41:19 crc kubenswrapper[5016]: E1011 07:41:19.133028 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.133075 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:41:19 crc kubenswrapper[5016]: E1011 07:41:19.133192 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:41:19 crc kubenswrapper[5016]: E1011 07:41:19.133246 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:41:19 crc kubenswrapper[5016]: E1011 07:41:19.133341 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.230206 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.230258 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.230273 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.230296 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.230315 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:19Z","lastTransitionTime":"2025-10-11T07:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.333096 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.333149 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.333160 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.333179 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.333208 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:19Z","lastTransitionTime":"2025-10-11T07:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.440228 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.440274 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.440285 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.440303 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.440314 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:19Z","lastTransitionTime":"2025-10-11T07:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.543060 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.543120 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.543132 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.543148 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.543205 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:19Z","lastTransitionTime":"2025-10-11T07:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.646091 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.646176 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.646192 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.646211 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.646816 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:19Z","lastTransitionTime":"2025-10-11T07:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.749998 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.750099 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.750127 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.750158 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.750182 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:19Z","lastTransitionTime":"2025-10-11T07:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.852927 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.853042 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.853066 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.853100 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.853124 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:19Z","lastTransitionTime":"2025-10-11T07:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.955570 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.955616 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.955630 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.955650 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:19 crc kubenswrapper[5016]: I1011 07:41:19.955693 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:19Z","lastTransitionTime":"2025-10-11T07:41:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.058457 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.058511 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.058522 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.058539 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.058574 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:20Z","lastTransitionTime":"2025-10-11T07:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.160582 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.160628 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.160641 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.160677 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.160690 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:20Z","lastTransitionTime":"2025-10-11T07:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.262806 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.262861 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.262876 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.262896 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.262910 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:20Z","lastTransitionTime":"2025-10-11T07:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.365245 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.365293 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.365302 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.365320 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.365330 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:20Z","lastTransitionTime":"2025-10-11T07:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.467908 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.467952 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.467963 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.468000 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.468012 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:20Z","lastTransitionTime":"2025-10-11T07:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.570582 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.570636 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.570646 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.570674 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.570685 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:20Z","lastTransitionTime":"2025-10-11T07:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.673213 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.673270 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.673288 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.673310 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.673328 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:20Z","lastTransitionTime":"2025-10-11T07:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.776162 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.776228 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.776249 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.776276 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.776296 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:20Z","lastTransitionTime":"2025-10-11T07:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.878968 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.879039 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.879059 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.879084 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.879101 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:20Z","lastTransitionTime":"2025-10-11T07:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.981608 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.982202 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.982310 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.982435 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:20 crc kubenswrapper[5016]: I1011 07:41:20.982522 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:20Z","lastTransitionTime":"2025-10-11T07:41:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.084543 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.084599 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.084619 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.084640 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.084677 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:21Z","lastTransitionTime":"2025-10-11T07:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.133208 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.133251 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.133263 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:41:21 crc kubenswrapper[5016]: E1011 07:41:21.133331 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.133170 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:41:21 crc kubenswrapper[5016]: E1011 07:41:21.133460 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:41:21 crc kubenswrapper[5016]: E1011 07:41:21.133573 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:41:21 crc kubenswrapper[5016]: E1011 07:41:21.133641 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.186672 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.186721 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.186731 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.186748 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.186759 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:21Z","lastTransitionTime":"2025-10-11T07:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.288712 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.288748 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.288756 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.288769 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.288778 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:21Z","lastTransitionTime":"2025-10-11T07:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.391168 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.391204 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.391213 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.391225 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.391233 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:21Z","lastTransitionTime":"2025-10-11T07:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.494895 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.494931 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.494942 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.494959 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.494970 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:21Z","lastTransitionTime":"2025-10-11T07:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.597609 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.597780 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.597805 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.597834 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.597859 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:21Z","lastTransitionTime":"2025-10-11T07:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.700466 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.700535 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.700553 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.700581 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.700597 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:21Z","lastTransitionTime":"2025-10-11T07:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.803538 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.803579 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.803589 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.803605 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.803616 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:21Z","lastTransitionTime":"2025-10-11T07:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.883633 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ceaf34e-81b3-457f-8f03-d807f795392b-metrics-certs\") pod \"network-metrics-daemon-459lg\" (UID: \"9ceaf34e-81b3-457f-8f03-d807f795392b\") " pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:41:21 crc kubenswrapper[5016]: E1011 07:41:21.883831 5016 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 07:41:21 crc kubenswrapper[5016]: E1011 07:41:21.883954 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ceaf34e-81b3-457f-8f03-d807f795392b-metrics-certs podName:9ceaf34e-81b3-457f-8f03-d807f795392b nodeName:}" failed. No retries permitted until 2025-10-11 07:41:53.883923864 +0000 UTC m=+101.784379850 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9ceaf34e-81b3-457f-8f03-d807f795392b-metrics-certs") pod "network-metrics-daemon-459lg" (UID: "9ceaf34e-81b3-457f-8f03-d807f795392b") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.906483 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.906511 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.906519 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.906531 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:21 crc kubenswrapper[5016]: I1011 07:41:21.906540 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:21Z","lastTransitionTime":"2025-10-11T07:41:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.008739 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.008785 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.008797 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.008813 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.008825 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:22Z","lastTransitionTime":"2025-10-11T07:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.112322 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.112612 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.112726 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.112772 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.112791 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:22Z","lastTransitionTime":"2025-10-11T07:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.215735 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.215817 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.215842 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.215873 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.215896 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:22Z","lastTransitionTime":"2025-10-11T07:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.318889 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.318928 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.318954 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.318972 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.318981 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:22Z","lastTransitionTime":"2025-10-11T07:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.422430 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.422471 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.422483 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.422501 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.422513 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:22Z","lastTransitionTime":"2025-10-11T07:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.524993 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.525063 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.525075 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.525094 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.525105 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:22Z","lastTransitionTime":"2025-10-11T07:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.628150 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.628227 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.628254 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.628283 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.628306 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:22Z","lastTransitionTime":"2025-10-11T07:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.730274 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.730313 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.730356 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.730376 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.730390 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:22Z","lastTransitionTime":"2025-10-11T07:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.833026 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.833070 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.833083 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.833100 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.833113 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:22Z","lastTransitionTime":"2025-10-11T07:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.936638 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.936696 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.936711 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.936730 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:22 crc kubenswrapper[5016]: I1011 07:41:22.936746 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:22Z","lastTransitionTime":"2025-10-11T07:41:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.039134 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.039168 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.039181 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.039197 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.039210 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:23Z","lastTransitionTime":"2025-10-11T07:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.132491 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.132623 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:41:23 crc kubenswrapper[5016]: E1011 07:41:23.132699 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.132739 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.132510 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:41:23 crc kubenswrapper[5016]: E1011 07:41:23.132889 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:41:23 crc kubenswrapper[5016]: E1011 07:41:23.133014 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:41:23 crc kubenswrapper[5016]: E1011 07:41:23.133144 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.141798 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.141946 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.141974 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.141997 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.142013 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:23Z","lastTransitionTime":"2025-10-11T07:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.143204 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-459lg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ceaf34e-81b3-457f-8f03-d807f795392b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-459lg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:23Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.154803 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:23Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.167943 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db26be15889ed64b2e7425d6f0c404c277bd8638a0b18c0b541d44ef0853849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:23Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.178389 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootf
s\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:23Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.196459 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93085c3d68b77ab077ab7ad320e04ab22009c33f
40e4f3676d2661e4d3455546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93085c3d68b77ab077ab7ad320e04ab22009c33f40e4f3676d2661e4d3455546\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:41:04Z\\\",\\\"message\\\":\\\"er\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1011 07:41:03.972761 6716 services_controller.go:452] Built service openshift-machine-config-operator/machine-config-operator per-node LB for network=default: []services.LB{}\\\\nI1011 07:41:03.972772 6716 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1011 07:41:03.972783 6716 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1011 07:41:03.972786 6716 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1011 07:41:03.972764 6716 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializatio\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:41:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-79nv2_openshift-ovn-kubernetes(68e9f942-5043-4fc3-9133-b608e8cd4ac0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:23Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.208596 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:23Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.219252 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:23Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.232594 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:23Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.244880 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.244917 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.244930 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.244948 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.244963 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:23Z","lastTransitionTime":"2025-10-11T07:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.246423 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:23Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.258747 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:23Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.277329 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:23Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.288315 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:23Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.297721 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:23Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.309756 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a11cae0088711f12e7156914900e7e3c3641a0332763bb03f9457e09826635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f97ef603cbc091e8d644c74b1e87734defad234b450ed334ca79bacdcd772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\
\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2r66f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:23Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.324340 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6c3028-a67b-441c-a7db-adb494840054\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052cf969ede798da08d5b097daaf1424d6ccad8eaa6699500e1d0cfe15a5625e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6c454ac0533b0f690a8d17eaec62d8fb26b02233b00f87e7fc5c03ef3790eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5052a1db05bf3dcb763f3a9e0cf9f74a9d7ad74c5a7a0baf52ec94281c67f51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba5ca7a8267df0d4a27aee36399c69320463dd65eb9841d5ce11a25fe8ba7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ba5ca7a8267df0d4a27aee36399c69320463dd65eb9841d5ce11a25fe8ba7e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:23Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.338816 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78849c3909228621690a66bd1e9a25eefa34107846399956602bf5ef04e9f86c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:23Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.347793 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.347828 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.347838 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.347853 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.347864 5016 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:23Z","lastTransitionTime":"2025-10-11T07:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.349408 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:23Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.450578 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.450637 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.450646 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.450687 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.450698 5016 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:23Z","lastTransitionTime":"2025-10-11T07:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.552639 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.552698 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.552711 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.552727 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.552738 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:23Z","lastTransitionTime":"2025-10-11T07:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.654781 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.654850 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.654873 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.654904 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.654921 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:23Z","lastTransitionTime":"2025-10-11T07:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.756966 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.757025 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.757035 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.757047 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.757058 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:23Z","lastTransitionTime":"2025-10-11T07:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.859731 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.859780 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.859791 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.859809 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.859819 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:23Z","lastTransitionTime":"2025-10-11T07:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.962186 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.962247 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.962267 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.962291 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:23 crc kubenswrapper[5016]: I1011 07:41:23.962309 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:23Z","lastTransitionTime":"2025-10-11T07:41:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.064119 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.064164 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.064175 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.064193 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.064205 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:24Z","lastTransitionTime":"2025-10-11T07:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.166326 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.166393 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.166411 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.166437 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.166459 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:24Z","lastTransitionTime":"2025-10-11T07:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.268477 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.268522 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.268533 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.268548 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.268557 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:24Z","lastTransitionTime":"2025-10-11T07:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.371623 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.371693 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.371705 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.371727 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.371739 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:24Z","lastTransitionTime":"2025-10-11T07:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.474093 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.474137 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.474148 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.474164 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.474175 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:24Z","lastTransitionTime":"2025-10-11T07:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.576530 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.576572 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.576591 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.576609 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.576620 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:24Z","lastTransitionTime":"2025-10-11T07:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.678980 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.679016 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.679027 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.679041 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.679052 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:24Z","lastTransitionTime":"2025-10-11T07:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.781466 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.781508 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.781521 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.781536 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.781547 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:24Z","lastTransitionTime":"2025-10-11T07:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.883422 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.883458 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.883468 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.883493 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.883508 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:24Z","lastTransitionTime":"2025-10-11T07:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.986065 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.986110 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.986123 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.986141 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:24 crc kubenswrapper[5016]: I1011 07:41:24.986153 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:24Z","lastTransitionTime":"2025-10-11T07:41:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.089701 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.089772 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.089802 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.089825 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.089837 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:25Z","lastTransitionTime":"2025-10-11T07:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.132829 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.132895 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.132949 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.133114 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:41:25 crc kubenswrapper[5016]: E1011 07:41:25.133106 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:41:25 crc kubenswrapper[5016]: E1011 07:41:25.133245 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:41:25 crc kubenswrapper[5016]: E1011 07:41:25.133339 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:41:25 crc kubenswrapper[5016]: E1011 07:41:25.133400 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.192598 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.192633 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.192641 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.192673 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.192691 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:25Z","lastTransitionTime":"2025-10-11T07:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.294951 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.294991 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.295013 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.295028 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.295038 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:25Z","lastTransitionTime":"2025-10-11T07:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.397265 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.397307 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.397317 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.397334 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.397347 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:25Z","lastTransitionTime":"2025-10-11T07:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.499746 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.499778 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.499787 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.499799 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.499811 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:25Z","lastTransitionTime":"2025-10-11T07:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.603444 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.603483 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.603494 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.603511 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.603522 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:25Z","lastTransitionTime":"2025-10-11T07:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.705781 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.705812 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.705826 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.705840 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.705850 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:25Z","lastTransitionTime":"2025-10-11T07:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.808761 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.808828 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.808848 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.808873 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.808889 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:25Z","lastTransitionTime":"2025-10-11T07:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.911740 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.912181 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.912333 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.912486 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:25 crc kubenswrapper[5016]: I1011 07:41:25.912640 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:25Z","lastTransitionTime":"2025-10-11T07:41:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.015038 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.015075 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.015086 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.015101 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.015138 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:26Z","lastTransitionTime":"2025-10-11T07:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.117695 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.117734 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.117745 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.117762 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.117772 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:26Z","lastTransitionTime":"2025-10-11T07:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.220068 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.220113 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.220124 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.220141 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.220154 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:26Z","lastTransitionTime":"2025-10-11T07:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.322527 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.322563 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.322572 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.322585 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.322593 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:26Z","lastTransitionTime":"2025-10-11T07:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.425617 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.425677 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.425690 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.425709 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.425723 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:26Z","lastTransitionTime":"2025-10-11T07:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.528425 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.528476 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.528487 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.528502 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.528513 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:26Z","lastTransitionTime":"2025-10-11T07:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.569951 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lbbb2_48e55d9a-f690-40ae-ba16-e91c4d9d3a95/kube-multus/0.log" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.570002 5016 generic.go:334] "Generic (PLEG): container finished" podID="48e55d9a-f690-40ae-ba16-e91c4d9d3a95" containerID="39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428" exitCode=1 Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.570033 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lbbb2" event={"ID":"48e55d9a-f690-40ae-ba16-e91c4d9d3a95","Type":"ContainerDied","Data":"39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428"} Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.570422 5016 scope.go:117] "RemoveContainer" containerID="39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.590290 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6c3028-a67b-441c-a7db-adb494840054\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052cf969ede798da08d5b097daaf1424d6ccad8eaa6699500e1d0cfe15a5625e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6c454ac0533b0f690a8d17eaec62d8fb26b02233b00f87e7fc5c03ef3790eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5052a1db05bf3dcb763f3a9e0cf9f74a9d7ad74c5a7a0baf52ec94281c67f51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba5ca7a8267df0d4a27aee36399c69320463dd65eb9841d5ce11a25fe8ba7e0\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ba5ca7a8267df0d4a27aee36399c69320463dd65eb9841d5ce11a25fe8ba7e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:26Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.608629 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78849c3909228621690a66bd1e9a25eefa34107846399956602bf5ef04e9f86c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:26Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.625730 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:26Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.631175 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.631251 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.631278 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.631310 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.631333 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:26Z","lastTransitionTime":"2025-10-11T07:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.643527 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:41:25Z\\\",\\\"message\\\":\\\"2025-10-11T07:40:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e6443a36-1a23-450e-8373-2da2a4518c5a\\\\n2025-10-11T07:40:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e6443a36-1a23-450e-8373-2da2a4518c5a to /host/opt/cni/bin/\\\\n2025-10-11T07:40:40Z [verbose] multus-daemon started\\\\n2025-10-11T07:40:40Z [verbose] Readiness Indicator file check\\\\n2025-10-11T07:41:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:26Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.656738 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:26Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.672164 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a11cae0088711f12e7156914900e7e3c3641a0332763bb03f9457e09826635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f97ef603cbc091e8d644c74b1e87734defad234b450ed334ca79bacdcd772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2r66f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:26Z is after 2025-08-24T17:21:41Z" Oct 11 
07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.684312 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:26Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.696918 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db26be15889ed64b2e7425d6f0c404c277bd8638a0b18c0b541d44ef0853849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:26Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.706912 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:26Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.722349 5016 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93085c3d68b77ab077ab7ad320e04ab22009c33f40e4f3676d2661e4d3455546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93085c3d68b77ab077ab7ad320e04ab22009c33f40e4f3676d2661e4d3455546\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:41:04Z\\\",\\\"message\\\":\\\"er\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1011 07:41:03.972761 6716 services_controller.go:452] Built service openshift-machine-config-operator/machine-config-operator per-node LB for network=default: []services.LB{}\\\\nI1011 07:41:03.972772 6716 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1011 07:41:03.972783 6716 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1011 07:41:03.972786 6716 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1011 07:41:03.972764 6716 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializatio\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:41:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-79nv2_openshift-ovn-kubernetes(68e9f942-5043-4fc3-9133-b608e8cd4ac0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:26Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.732106 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-459lg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ceaf34e-81b3-457f-8f03-d807f795392b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-459lg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:26Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.734245 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.734299 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.734318 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.734345 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.734365 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:26Z","lastTransitionTime":"2025-10-11T07:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.743270 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:26Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.756982 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:26Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.778540 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:26Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.797413 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:26Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.809331 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:26Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.823841 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:26Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.836764 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.836808 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.836816 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.836829 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.836838 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:26Z","lastTransitionTime":"2025-10-11T07:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.939456 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.939499 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.939508 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.939520 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:26 crc kubenswrapper[5016]: I1011 07:41:26.939530 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:26Z","lastTransitionTime":"2025-10-11T07:41:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.041733 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.041798 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.041812 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.041828 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.041838 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:27Z","lastTransitionTime":"2025-10-11T07:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.132892 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.132985 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:41:27 crc kubenswrapper[5016]: E1011 07:41:27.133031 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.133054 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:41:27 crc kubenswrapper[5016]: E1011 07:41:27.133092 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.133199 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:41:27 crc kubenswrapper[5016]: E1011 07:41:27.133359 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:41:27 crc kubenswrapper[5016]: E1011 07:41:27.133720 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.144481 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.144550 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.144568 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.144591 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.144608 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:27Z","lastTransitionTime":"2025-10-11T07:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.246536 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.246902 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.246911 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.246925 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.246936 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:27Z","lastTransitionTime":"2025-10-11T07:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.348647 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.348709 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.348722 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.348743 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.348757 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:27Z","lastTransitionTime":"2025-10-11T07:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.422225 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.422265 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.422273 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.422288 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.422297 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:27Z","lastTransitionTime":"2025-10-11T07:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:27 crc kubenswrapper[5016]: E1011 07:41:27.433712 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:27Z is after 
2025-08-24T17:21:41Z" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.437280 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.437329 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.437345 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.437366 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.437381 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:27Z","lastTransitionTime":"2025-10-11T07:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:27 crc kubenswrapper[5016]: E1011 07:41:27.448249 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:27Z is after 
2025-08-24T17:21:41Z" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.451341 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.451388 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.451400 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.451414 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.451424 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:27Z","lastTransitionTime":"2025-10-11T07:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:27 crc kubenswrapper[5016]: E1011 07:41:27.461917 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:27Z is after 
2025-08-24T17:21:41Z" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.464984 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.465135 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.465148 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.465165 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.465176 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:27Z","lastTransitionTime":"2025-10-11T07:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:27 crc kubenswrapper[5016]: E1011 07:41:27.476574 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:27Z is after 
2025-08-24T17:21:41Z" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.479623 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.479664 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.479675 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.479687 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.479697 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:27Z","lastTransitionTime":"2025-10-11T07:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:27 crc kubenswrapper[5016]: E1011 07:41:27.490773 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:27Z is after 
2025-08-24T17:21:41Z" Oct 11 07:41:27 crc kubenswrapper[5016]: E1011 07:41:27.490921 5016 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.492115 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.492145 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.492156 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.492170 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.492181 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:27Z","lastTransitionTime":"2025-10-11T07:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.578408 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lbbb2_48e55d9a-f690-40ae-ba16-e91c4d9d3a95/kube-multus/0.log" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.578462 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lbbb2" event={"ID":"48e55d9a-f690-40ae-ba16-e91c4d9d3a95","Type":"ContainerStarted","Data":"fdf8f1baa34989ef57e3b44aeb2d3bd578086e2a41bcee8e6cb2d6e6f689fb3e"} Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.591687 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6c3028-a67b-441c-a7db-adb494840054\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052cf969ede798da08d5b097daaf1424d6ccad8eaa6699500e1d0cfe15a5625e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6c454ac0533b0f690a8d17eaec62d8fb26b02233b00f87e7fc5c03ef3790eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5052a1db05bf3dcb763f3a9e0cf9f74a9d7ad74c5a7a0baf52ec94281c67f51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba5ca7a8267df0d4a27aee36399c69320463dd65eb9841d5ce11a25fe8ba7e0\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ba5ca7a8267df0d4a27aee36399c69320463dd65eb9841d5ce11a25fe8ba7e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:27Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.594875 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.594959 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.595005 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.595038 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.595063 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:27Z","lastTransitionTime":"2025-10-11T07:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.613591 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78849c3909228621690a66bd1e9a25eefa34107846399956602bf5ef04e9f86c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:27Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.624207 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:27Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.640355 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf8f1baa34989ef57e3b44aeb2d3bd578086e2a41bcee8e6cb2d6e6f689fb3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:41:25Z\\\",\\\"message\\\":\\\"2025-10-11T07:40:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e6443a36-1a23-450e-8373-2da2a4518c5a\\\\n2025-10-11T07:40:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e6443a36-1a23-450e-8373-2da2a4518c5a to /host/opt/cni/bin/\\\\n2025-10-11T07:40:40Z [verbose] multus-daemon started\\\\n2025-10-11T07:40:40Z [verbose] Readiness Indicator file check\\\\n2025-10-11T07:41:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:27Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.655340 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:27Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.667216 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a11cae0088711f12e7156914900e7e3c3641a0332763bb03f9457e09826635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f97ef603cbc091e8d644c74b1e87734defad234b450ed334ca79bacdcd772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2r66f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:27Z is after 2025-08-24T17:21:41Z" Oct 11 
07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.682491 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:27Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.698322 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.698378 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.698393 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.698413 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.698428 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:27Z","lastTransitionTime":"2025-10-11T07:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.702915 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db26be15889ed64b2e7425d6f0c404c277bd8638a0b18c0b541d44ef0853849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:27Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.714099 5016 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:27Z is after 2025-08-24T17:21:41Z" Oct 11 
07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.732688 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b842897333597c05c386ed419976f4
3ed0e76d8ad8302b2184cc026c0844ee9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93085c3d68b77ab077ab7ad320e04ab22009c33f40e4f3676d2661e4d3455546\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93085c3d68b77ab077ab7ad320e04ab22009c33f40e4f3676d2661e4d3455546\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:41:04Z\\\",\\\"message\\\":\\\"er\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1011 07:41:03.972761 6716 services_controller.go:452] Built service openshift-machine-config-operator/machine-config-operator per-node LB for network=default: []services.LB{}\\\\nI1011 07:41:03.972772 6716 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1011 07:41:03.972783 6716 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1011 07:41:03.972786 6716 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1011 07:41:03.972764 6716 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializatio\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:41:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-79nv2_openshift-ovn-kubernetes(68e9f942-5043-4fc3-9133-b608e8cd4ac0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:27Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.743990 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-459lg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ceaf34e-81b3-457f-8f03-d807f795392b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-459lg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:27Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.755802 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:27Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.771055 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:27Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.786200 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:27Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.801207 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.801245 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.801257 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.801271 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.801283 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:27Z","lastTransitionTime":"2025-10-11T07:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.802057 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:27Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.810835 5016 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:27Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.820930 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:27Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.904176 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.904209 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.904218 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.904231 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:27 crc kubenswrapper[5016]: I1011 07:41:27.904239 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:27Z","lastTransitionTime":"2025-10-11T07:41:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.007069 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.007110 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.007121 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.007156 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.007170 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:28Z","lastTransitionTime":"2025-10-11T07:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.109347 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.109392 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.109403 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.109423 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.109436 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:28Z","lastTransitionTime":"2025-10-11T07:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.134424 5016 scope.go:117] "RemoveContainer" containerID="93085c3d68b77ab077ab7ad320e04ab22009c33f40e4f3676d2661e4d3455546" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.212263 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.212342 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.212358 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.212375 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.212388 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:28Z","lastTransitionTime":"2025-10-11T07:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.315459 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.315525 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.315542 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.315984 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.316034 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:28Z","lastTransitionTime":"2025-10-11T07:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.419163 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.419212 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.419230 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.419252 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.419283 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:28Z","lastTransitionTime":"2025-10-11T07:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.521995 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.522040 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.522052 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.522077 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.522093 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:28Z","lastTransitionTime":"2025-10-11T07:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.583149 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-79nv2_68e9f942-5043-4fc3-9133-b608e8cd4ac0/ovnkube-controller/2.log" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.585244 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" event={"ID":"68e9f942-5043-4fc3-9133-b608e8cd4ac0","Type":"ContainerStarted","Data":"ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2"} Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.585780 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.599761 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:28Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.612759 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:28Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.620871 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:28Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.623829 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 
07:41:28.623853 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.623861 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.623873 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.623882 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:28Z","lastTransitionTime":"2025-10-11T07:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.630994 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:28Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.643345 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:28Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.655076 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:28Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.665279 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:28Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.677590 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf8f1baa34989ef57e3b44aeb2d3bd578086e2a41bcee8e6cb2d6e6f689fb3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:41:25Z\\\",\\\"message\\\":\\\"2025-10-11T07:40:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e6443a36-1a23-450e-8373-2da2a4518c5a\\\\n2025-10-11T07:40:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e6443a36-1a23-450e-8373-2da2a4518c5a to /host/opt/cni/bin/\\\\n2025-10-11T07:40:40Z [verbose] multus-daemon started\\\\n2025-10-11T07:40:40Z [verbose] Readiness Indicator file check\\\\n2025-10-11T07:41:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:28Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.687412 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:28Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.699168 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a11cae0088711f12e7156914900e7e3c3641a0332763bb03f9457e09826635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f97ef603cbc091e8d644c74b1e87734defad234b450ed334ca79bacdcd772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2r66f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:28Z is after 2025-08-24T17:21:41Z" Oct 11 
07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.709368 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6c3028-a67b-441c-a7db-adb494840054\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052cf969ede798da08d5b097daaf1424d6ccad8eaa6699500e1d0cfe15a5625e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6c454ac0533b0f690a8d17eaec62d8fb26b02233b00f87e7fc5c03ef3790eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5052a1db05bf3dcb763f3a9e0cf9f74a9d7ad74c5a7a0baf52ec94281c67f51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba5ca7a8267df0d4a27aee36399c69320463dd65eb9841d5ce11a25fe8ba7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ba5ca7a8267df0d4a27aee36399c69320463dd65eb9841d5ce11a25fe8ba7e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:28Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.725540 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.725589 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.725601 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.725622 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.725635 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:28Z","lastTransitionTime":"2025-10-11T07:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.731546 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78849c3909228621690a66bd1e9a25eefa34107846399956602bf5ef04e9f86c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:28Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.753799 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93085c3d68b77ab077ab7ad320e04ab22009c33f40e4f3676d2661e4d3455546\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:41:04Z\\\",\\\"message\\\":\\\"er\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1011 07:41:03.972761 6716 services_controller.go:452] Built service openshift-machine-config-operator/machine-config-operator per-node LB for network=default: []services.LB{}\\\\nI1011 07:41:03.972772 6716 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1011 07:41:03.972783 6716 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1011 07:41:03.972786 6716 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1011 07:41:03.972764 6716 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initializatio\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:41:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\
":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:28Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.764742 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-459lg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ceaf34e-81b3-457f-8f03-d807f795392b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-459lg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:28Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.777398 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:28Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.794634 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db26be15889ed64b2e7425d6f0c404c277bd8638a0b18c0b541d44ef0853849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:28Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.804686 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootf
s\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:28Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.828433 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.828482 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.828495 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.828513 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.828524 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:28Z","lastTransitionTime":"2025-10-11T07:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.931539 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.931591 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.931602 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.931617 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:28 crc kubenswrapper[5016]: I1011 07:41:28.931632 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:28Z","lastTransitionTime":"2025-10-11T07:41:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.034208 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.034280 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.034292 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.034307 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.034317 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:29Z","lastTransitionTime":"2025-10-11T07:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.133371 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.133425 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.133455 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:41:29 crc kubenswrapper[5016]: E1011 07:41:29.133568 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.133608 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:41:29 crc kubenswrapper[5016]: E1011 07:41:29.133792 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:41:29 crc kubenswrapper[5016]: E1011 07:41:29.133930 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:41:29 crc kubenswrapper[5016]: E1011 07:41:29.134177 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.136857 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.136890 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.136899 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.136914 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.136926 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:29Z","lastTransitionTime":"2025-10-11T07:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.239997 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.240080 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.240104 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.240137 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.240159 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:29Z","lastTransitionTime":"2025-10-11T07:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.343265 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.343335 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.343368 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.343399 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.343423 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:29Z","lastTransitionTime":"2025-10-11T07:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.446259 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.446302 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.446313 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.446328 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.446340 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:29Z","lastTransitionTime":"2025-10-11T07:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.548893 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.548966 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.548984 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.549006 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.549027 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:29Z","lastTransitionTime":"2025-10-11T07:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.591527 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-79nv2_68e9f942-5043-4fc3-9133-b608e8cd4ac0/ovnkube-controller/3.log" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.594189 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-79nv2_68e9f942-5043-4fc3-9133-b608e8cd4ac0/ovnkube-controller/2.log" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.598299 5016 generic.go:334] "Generic (PLEG): container finished" podID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerID="ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2" exitCode=1 Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.598343 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" event={"ID":"68e9f942-5043-4fc3-9133-b608e8cd4ac0","Type":"ContainerDied","Data":"ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2"} Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.598380 5016 scope.go:117] "RemoveContainer" containerID="93085c3d68b77ab077ab7ad320e04ab22009c33f40e4f3676d2661e4d3455546" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.599798 5016 scope.go:117] "RemoveContainer" containerID="ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2" Oct 11 07:41:29 crc kubenswrapper[5016]: E1011 07:41:29.600128 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-79nv2_openshift-ovn-kubernetes(68e9f942-5043-4fc3-9133-b608e8cd4ac0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.618273 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:29Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.636902 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6c3028-a67b-441c-a7db-adb494840054\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052cf969ede798da08d5b097daaf1424d6ccad8eaa6699500e1d0cfe15a5625e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6c454ac0533b0f690a8d17eaec62d8fb26b02233b00f87e7fc5c03ef3790eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40
:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5052a1db05bf3dcb763f3a9e0cf9f74a9d7ad74c5a7a0baf52ec94281c67f51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba5ca7a8267df0d4a27aee36399c69320463dd65eb9841d5ce11a25fe8ba7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ba5ca7a8267df0d4a27aee36399c69320463dd65eb9841d5ce11a25fe8ba7e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:29Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.651474 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.651515 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.651523 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.651537 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.651545 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:29Z","lastTransitionTime":"2025-10-11T07:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.660378 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78849c3909228621690a66bd1e9a25eefa34107846399956602bf5ef04e9f86c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:29Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.674495 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:29Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.688963 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf8f1baa34989ef57e3b44aeb2d3bd578086e2a41bcee8e6cb2d6e6f689fb3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:41:25Z\\\",\\\"message\\\":\\\"2025-10-11T07:40:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e6443a36-1a23-450e-8373-2da2a4518c5a\\\\n2025-10-11T07:40:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e6443a36-1a23-450e-8373-2da2a4518c5a to /host/opt/cni/bin/\\\\n2025-10-11T07:40:40Z [verbose] multus-daemon started\\\\n2025-10-11T07:40:40Z [verbose] Readiness Indicator file check\\\\n2025-10-11T07:41:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:29Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.704931 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:29Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.719938 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a11cae0088711f12e7156914900e7e3c3641a0332763bb03f9457e09826635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f97ef603cbc091e8d644c74b1e87734defad234b450ed334ca79bacdcd772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2r66f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:29Z is after 2025-08-24T17:21:41Z" Oct 11 
07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.733644 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:29Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.752736 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db26be15889ed64b2e7425d6f0c404c277bd8638a0b18c0b541d44ef0853849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:29Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.753631 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.753695 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:29 crc 
kubenswrapper[5016]: I1011 07:41:29.753707 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.753720 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.753730 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:29Z","lastTransitionTime":"2025-10-11T07:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.765920 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:29Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.788353 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad96cff52344ccb3bb8d462fb9572d996e759d33
2731d86b1610d2a2a3c873a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93085c3d68b77ab077ab7ad320e04ab22009c33f40e4f3676d2661e4d3455546\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:41:04Z\\\",\\\"message\\\":\\\"er\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1011 07:41:03.972761 6716 services_controller.go:452] Built service openshift-machine-config-operator/machine-config-operator per-node LB for network=default: []services.LB{}\\\\nI1011 07:41:03.972772 6716 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1011 07:41:03.972783 6716 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1011 07:41:03.972786 6716 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1011 07:41:03.972764 6716 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializatio\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:41:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:41:28Z\\\",\\\"message\\\":\\\"4 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-d7sp7\\\\nI1011 07:41:28.951794 7074 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nF1011 07:41:28.951791 7074 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:28Z is after 2025-08-24T17:21:41Z]\\\\nI1011 07:41:28.951791 7074 
obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-459lg\\\\nI1011 07:41:28.951813 7074 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-d7sp7\\\\nI1011 07:41:28.951818 7074 ovn.go:134]\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:41:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:29Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.800893 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-459lg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ceaf34e-81b3-457f-8f03-d807f795392b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-459lg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:29Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.812586 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:29Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.824926 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:29Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.838811 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:29Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.853517 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:29Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.856095 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.856145 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.856157 5016 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.856176 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.856189 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:29Z","lastTransitionTime":"2025-10-11T07:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.866069 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:29Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.958403 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:29 
crc kubenswrapper[5016]: I1011 07:41:29.958440 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.958451 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.958467 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:29 crc kubenswrapper[5016]: I1011 07:41:29.958478 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:29Z","lastTransitionTime":"2025-10-11T07:41:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.060811 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.060852 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.060863 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.060880 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.060892 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:30Z","lastTransitionTime":"2025-10-11T07:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.163185 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.163232 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.163244 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.163291 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.163306 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:30Z","lastTransitionTime":"2025-10-11T07:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.266072 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.266123 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.266160 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.266178 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.266191 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:30Z","lastTransitionTime":"2025-10-11T07:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.369193 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.369253 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.369263 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.369282 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.369295 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:30Z","lastTransitionTime":"2025-10-11T07:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.472881 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.472935 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.472946 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.472963 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.472976 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:30Z","lastTransitionTime":"2025-10-11T07:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.575093 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.575148 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.575163 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.575181 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.575194 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:30Z","lastTransitionTime":"2025-10-11T07:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.604193 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-79nv2_68e9f942-5043-4fc3-9133-b608e8cd4ac0/ovnkube-controller/3.log" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.609088 5016 scope.go:117] "RemoveContainer" containerID="ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2" Oct 11 07:41:30 crc kubenswrapper[5016]: E1011 07:41:30.609291 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-79nv2_openshift-ovn-kubernetes(68e9f942-5043-4fc3-9133-b608e8cd4ac0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.624025 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6c3028-a67b-441c-a7db-adb494840054\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052cf969ede798da08d5b097daaf1424d6ccad8eaa6699500e1d0cfe15a5625e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6c454ac0533b0f690a8d17eaec62d8fb26b02233b00f87e7fc5c03ef3790eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5052a1db05bf3dcb763f3a9e0cf9f74a9d7ad74c5a7a0baf52ec94281c67f51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba5ca7a8267df0d4a27aee36399c69320463dd65eb9841d5ce11a25fe8ba7e0\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ba5ca7a8267df0d4a27aee36399c69320463dd65eb9841d5ce11a25fe8ba7e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:30Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.646146 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78849c3909228621690a66bd1e9a25eefa34107846399956602bf5ef04e9f86c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:30Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.663380 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:30Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.678543 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.678629 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.678646 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.678722 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.678740 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:30Z","lastTransitionTime":"2025-10-11T07:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.680060 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf8f1baa34989ef57e3b44aeb2d3bd578086e2a41bcee8e6cb2d6e6f689fb3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:41:25Z\\\",\\\"message\\\":\\\"2025-10-11T07:40:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e6443a36-1a23-450e-8373-2da2a4518c5a\\\\n2025-10-11T07:40:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e6443a36-1a23-450e-8373-2da2a4518c5a to /host/opt/cni/bin/\\\\n2025-10-11T07:40:40Z [verbose] multus-daemon started\\\\n2025-10-11T07:40:40Z [verbose] Readiness Indicator file check\\\\n2025-10-11T07:41:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:30Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.696389 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:30Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.711982 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a11cae0088711f12e7156914900e7e3c3641a0332763bb03f9457e09826635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f97ef603cbc091e8d644c74b1e87734defad234b450ed334ca79bacdcd772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2r66f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:30Z is after 2025-08-24T17:21:41Z" Oct 11 
07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.728710 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:30Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.747570 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db26be15889ed64b2e7425d6f0c404c277bd8638a0b18c0b541d44ef0853849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:30Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.764892 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:30Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.782270 5016 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.782318 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.782327 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.782345 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.782355 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:30Z","lastTransitionTime":"2025-10-11T07:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.790188 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad96cff52344ccb3bb8d462fb9572d996e759d33
2731d86b1610d2a2a3c873a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:41:28Z\\\",\\\"message\\\":\\\"4 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-d7sp7\\\\nI1011 07:41:28.951794 7074 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nF1011 07:41:28.951791 7074 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:28Z is after 2025-08-24T17:21:41Z]\\\\nI1011 07:41:28.951791 7074 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-459lg\\\\nI1011 07:41:28.951813 7074 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-d7sp7\\\\nI1011 07:41:28.951818 7074 ovn.go:134]\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:41:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-79nv2_openshift-ovn-kubernetes(68e9f942-5043-4fc3-9133-b608e8cd4ac0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:30Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.804169 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-459lg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ceaf34e-81b3-457f-8f03-d807f795392b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-459lg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:30Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.818676 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:30Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.835777 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:30Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.851776 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:30Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.865723 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:30Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.879784 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:30Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.884799 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.884826 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.884835 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.884851 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.884861 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:30Z","lastTransitionTime":"2025-10-11T07:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.895061 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:30Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.987395 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.987432 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.987441 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.987453 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:30 crc kubenswrapper[5016]: I1011 07:41:30.987462 5016 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:30Z","lastTransitionTime":"2025-10-11T07:41:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.089984 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.090019 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.090029 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.090056 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.090067 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:31Z","lastTransitionTime":"2025-10-11T07:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.133026 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.133085 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.133057 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.133111 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:41:31 crc kubenswrapper[5016]: E1011 07:41:31.133190 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:41:31 crc kubenswrapper[5016]: E1011 07:41:31.133282 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:41:31 crc kubenswrapper[5016]: E1011 07:41:31.133409 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:41:31 crc kubenswrapper[5016]: E1011 07:41:31.133451 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.193469 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.193548 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.193567 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.193596 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.193618 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:31Z","lastTransitionTime":"2025-10-11T07:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.296602 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.296680 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.296695 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.296713 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.296729 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:31Z","lastTransitionTime":"2025-10-11T07:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.399902 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.400341 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.400536 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.400746 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.400930 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:31Z","lastTransitionTime":"2025-10-11T07:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.505589 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.505645 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.505683 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.505704 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.505720 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:31Z","lastTransitionTime":"2025-10-11T07:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.608728 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.608797 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.608815 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.608838 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.608853 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:31Z","lastTransitionTime":"2025-10-11T07:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.712171 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.712225 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.712237 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.712258 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.712273 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:31Z","lastTransitionTime":"2025-10-11T07:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.816401 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.816463 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.816479 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.816503 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.816520 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:31Z","lastTransitionTime":"2025-10-11T07:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.919458 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.919500 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.919510 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.919526 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:31 crc kubenswrapper[5016]: I1011 07:41:31.919539 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:31Z","lastTransitionTime":"2025-10-11T07:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.022081 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.022130 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.022145 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.022165 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.022183 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:32Z","lastTransitionTime":"2025-10-11T07:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.124911 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.124998 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.125024 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.125057 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.125081 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:32Z","lastTransitionTime":"2025-10-11T07:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.228721 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.228778 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.228789 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.228806 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.228818 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:32Z","lastTransitionTime":"2025-10-11T07:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.331414 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.331493 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.331512 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.331538 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.331552 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:32Z","lastTransitionTime":"2025-10-11T07:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.434569 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.434643 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.434711 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.434743 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.434764 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:32Z","lastTransitionTime":"2025-10-11T07:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.537625 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.537708 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.537723 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.537744 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.537759 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:32Z","lastTransitionTime":"2025-10-11T07:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.640050 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.640115 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.640132 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.640156 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.640173 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:32Z","lastTransitionTime":"2025-10-11T07:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.743179 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.743244 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.743260 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.743284 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.743302 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:32Z","lastTransitionTime":"2025-10-11T07:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.846212 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.846244 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.846253 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.846265 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.846273 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:32Z","lastTransitionTime":"2025-10-11T07:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.948625 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.948714 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.948724 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.948741 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:32 crc kubenswrapper[5016]: I1011 07:41:32.948751 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:32Z","lastTransitionTime":"2025-10-11T07:41:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.051871 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.051912 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.051923 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.051938 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.051948 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:33Z","lastTransitionTime":"2025-10-11T07:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.132721 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.132819 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:41:33 crc kubenswrapper[5016]: E1011 07:41:33.132949 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.133018 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.133036 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:41:33 crc kubenswrapper[5016]: E1011 07:41:33.133184 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:41:33 crc kubenswrapper[5016]: E1011 07:41:33.133320 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:41:33 crc kubenswrapper[5016]: E1011 07:41:33.133463 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.152702 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:33Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.154679 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.154727 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.154745 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.154770 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.154785 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:33Z","lastTransitionTime":"2025-10-11T07:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.174418 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:33Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.193527 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:33Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.208141 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:33Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.218624 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:33Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.230928 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:33Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.242062 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a11cae0088711f12e7156914900e7e3c3641a0332763bb03f9457e09826635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f97ef603cbc091e8d644c74b1e87734defad234b450ed334ca79bacdcd772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2r66f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:33Z is after 2025-08-24T17:21:41Z" Oct 11 
07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.255038 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6c3028-a67b-441c-a7db-adb494840054\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052cf969ede798da08d5b097daaf1424d6ccad8eaa6699500e1d0cfe15a5625e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6c454ac0533b0f690a8d17eaec62d8fb26b02233b00f87e7fc5c03ef3790eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5052a1db05bf3dcb763f3a9e0cf9f74a9d7ad74c5a7a0baf52ec94281c67f51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba5ca7a8267df0d4a27aee36399c69320463dd65eb9841d5ce11a25fe8ba7e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ba5ca7a8267df0d4a27aee36399c69320463dd65eb9841d5ce11a25fe8ba7e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:33Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.257299 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.257338 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.257348 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.257369 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.257379 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:33Z","lastTransitionTime":"2025-10-11T07:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.280060 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78849c3909228621690a66bd1e9a25eefa34107846399956602bf5ef04e9f86c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:33Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.294732 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:33Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.310398 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf8f1baa34989ef57e3b44aeb2d3bd578086e2a41bcee8e6cb2d6e6f689fb3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:41:25Z\\\",\\\"message\\\":\\\"2025-10-11T07:40:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e6443a36-1a23-450e-8373-2da2a4518c5a\\\\n2025-10-11T07:40:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e6443a36-1a23-450e-8373-2da2a4518c5a to /host/opt/cni/bin/\\\\n2025-10-11T07:40:40Z [verbose] multus-daemon started\\\\n2025-10-11T07:40:40Z [verbose] Readiness Indicator file check\\\\n2025-10-11T07:41:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:33Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.325091 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:33Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.340101 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:33Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.357225 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db26be15889ed64b2e7425d6f0c404c277bd8638a0b18c0b541d44ef0853849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:33Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.359425 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.359466 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.359478 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.359496 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.359508 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:33Z","lastTransitionTime":"2025-10-11T07:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.369303 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:33Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.389305 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:41:28Z\\\",\\\"message\\\":\\\"4 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-d7sp7\\\\nI1011 07:41:28.951794 7074 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nF1011 07:41:28.951791 7074 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:28Z is after 2025-08-24T17:21:41Z]\\\\nI1011 07:41:28.951791 7074 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-459lg\\\\nI1011 07:41:28.951813 7074 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-d7sp7\\\\nI1011 07:41:28.951818 7074 
ovn.go:134]\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:41:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-79nv2_openshift-ovn-kubernetes(68e9f942-5043-4fc3-9133-b608e8cd4ac0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:33Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.399268 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-459lg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ceaf34e-81b3-457f-8f03-d807f795392b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-459lg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:33Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.462416 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.462451 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.462461 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.462473 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.462483 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:33Z","lastTransitionTime":"2025-10-11T07:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.565235 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.565280 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.565288 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.565304 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.565315 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:33Z","lastTransitionTime":"2025-10-11T07:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.668690 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.668757 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.668782 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.668811 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.668833 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:33Z","lastTransitionTime":"2025-10-11T07:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.772255 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.772309 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.772327 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.772352 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.772372 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:33Z","lastTransitionTime":"2025-10-11T07:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.875082 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.875143 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.875165 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.875191 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.875212 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:33Z","lastTransitionTime":"2025-10-11T07:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.978014 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.978070 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.978087 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.978112 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:33 crc kubenswrapper[5016]: I1011 07:41:33.978127 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:33Z","lastTransitionTime":"2025-10-11T07:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.081435 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.081520 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.081547 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.081581 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.081615 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:34Z","lastTransitionTime":"2025-10-11T07:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.183950 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.184016 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.184040 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.184064 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.184080 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:34Z","lastTransitionTime":"2025-10-11T07:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.286138 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.286678 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.286745 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.286822 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.286887 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:34Z","lastTransitionTime":"2025-10-11T07:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.389402 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.389702 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.389797 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.389891 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.389972 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:34Z","lastTransitionTime":"2025-10-11T07:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.493297 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.493396 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.493419 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.493445 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.493465 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:34Z","lastTransitionTime":"2025-10-11T07:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.596734 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.596768 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.596776 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.596795 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.596804 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:34Z","lastTransitionTime":"2025-10-11T07:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.700607 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.701080 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.701113 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.701144 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.701166 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:34Z","lastTransitionTime":"2025-10-11T07:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.804285 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.804333 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.804353 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.804376 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.804388 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:34Z","lastTransitionTime":"2025-10-11T07:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.908018 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.908096 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.908120 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.908151 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:34 crc kubenswrapper[5016]: I1011 07:41:34.908175 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:34Z","lastTransitionTime":"2025-10-11T07:41:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.010741 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.010995 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.011069 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.011172 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.011245 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:35Z","lastTransitionTime":"2025-10-11T07:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.114643 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.114733 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.114745 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.114768 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.114786 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:35Z","lastTransitionTime":"2025-10-11T07:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.133046 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.133137 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.133217 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:41:35 crc kubenswrapper[5016]: E1011 07:41:35.133234 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:41:35 crc kubenswrapper[5016]: E1011 07:41:35.133339 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.133486 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:41:35 crc kubenswrapper[5016]: E1011 07:41:35.133487 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:41:35 crc kubenswrapper[5016]: E1011 07:41:35.133636 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.218270 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.218331 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.218342 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.218359 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.218370 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:35Z","lastTransitionTime":"2025-10-11T07:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.322348 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.322415 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.322428 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.322452 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.322468 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:35Z","lastTransitionTime":"2025-10-11T07:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.425770 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.425847 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.425867 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.425891 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.425908 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:35Z","lastTransitionTime":"2025-10-11T07:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.528755 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.528792 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.528801 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.528814 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.528822 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:35Z","lastTransitionTime":"2025-10-11T07:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.630729 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.630769 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.630776 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.630789 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.630798 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:35Z","lastTransitionTime":"2025-10-11T07:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.734625 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.734724 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.734743 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.734769 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.734789 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:35Z","lastTransitionTime":"2025-10-11T07:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.837996 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.838043 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.838059 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.838076 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.838086 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:35Z","lastTransitionTime":"2025-10-11T07:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.940547 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.940623 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.940647 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.940730 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:35 crc kubenswrapper[5016]: I1011 07:41:35.940757 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:35Z","lastTransitionTime":"2025-10-11T07:41:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.043596 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.043686 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.043706 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.043727 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.043741 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:36Z","lastTransitionTime":"2025-10-11T07:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.145536 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.145583 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.145597 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.145614 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.145626 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:36Z","lastTransitionTime":"2025-10-11T07:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.248354 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.248388 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.248397 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.248412 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.248423 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:36Z","lastTransitionTime":"2025-10-11T07:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.351249 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.351322 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.351343 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.351373 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.351397 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:36Z","lastTransitionTime":"2025-10-11T07:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.454878 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.454957 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.454976 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.455000 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.455019 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:36Z","lastTransitionTime":"2025-10-11T07:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.557266 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.557308 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.557321 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.557337 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.557376 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:36Z","lastTransitionTime":"2025-10-11T07:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.660226 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.660273 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.660282 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.660298 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.660311 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:36Z","lastTransitionTime":"2025-10-11T07:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.762939 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.762985 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.762995 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.763011 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.763021 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:36Z","lastTransitionTime":"2025-10-11T07:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.865181 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.865211 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.865218 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.865230 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.865240 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:36Z","lastTransitionTime":"2025-10-11T07:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.967824 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.967874 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.967891 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.967917 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:36 crc kubenswrapper[5016]: I1011 07:41:36.967934 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:36Z","lastTransitionTime":"2025-10-11T07:41:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.070292 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.070355 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.070373 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.070397 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.070418 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:37Z","lastTransitionTime":"2025-10-11T07:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.132627 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.132741 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.132743 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.132920 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:41:37 crc kubenswrapper[5016]: E1011 07:41:37.132905 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:41:37 crc kubenswrapper[5016]: E1011 07:41:37.133027 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:41:37 crc kubenswrapper[5016]: E1011 07:41:37.133151 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:41:37 crc kubenswrapper[5016]: E1011 07:41:37.133327 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.172480 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.172519 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.172530 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.172545 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.172557 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:37Z","lastTransitionTime":"2025-10-11T07:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.275285 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.275402 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.275424 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.275446 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.275464 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:37Z","lastTransitionTime":"2025-10-11T07:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.378594 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.378642 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.378687 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.378706 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.378720 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:37Z","lastTransitionTime":"2025-10-11T07:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.482244 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.482291 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.482302 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.482319 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.482340 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:37Z","lastTransitionTime":"2025-10-11T07:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.567401 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.567456 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.567474 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.567499 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.567518 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:37Z","lastTransitionTime":"2025-10-11T07:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:37 crc kubenswrapper[5016]: E1011 07:41:37.588141 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:37Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.593496 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.593553 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.593578 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.593607 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.593632 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:37Z","lastTransitionTime":"2025-10-11T07:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:37 crc kubenswrapper[5016]: E1011 07:41:37.613620 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:37Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.618067 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.618139 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.618159 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.618183 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.618201 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:37Z","lastTransitionTime":"2025-10-11T07:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:37 crc kubenswrapper[5016]: E1011 07:41:37.636059 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:37Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.640526 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.640592 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.640611 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.640637 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.640676 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:37Z","lastTransitionTime":"2025-10-11T07:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:37 crc kubenswrapper[5016]: E1011 07:41:37.654503 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:37Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.658206 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.658258 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.658275 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.658296 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.658311 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:37Z","lastTransitionTime":"2025-10-11T07:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:37 crc kubenswrapper[5016]: E1011 07:41:37.672003 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:37Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:37 crc kubenswrapper[5016]: E1011 07:41:37.672187 5016 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.673699 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.673756 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.673769 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.673790 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.673803 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:37Z","lastTransitionTime":"2025-10-11T07:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.777439 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.777486 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.777499 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.777515 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.777527 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:37Z","lastTransitionTime":"2025-10-11T07:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.880230 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.880293 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.880316 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.880347 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.880368 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:37Z","lastTransitionTime":"2025-10-11T07:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.982374 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.982433 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.982448 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.982469 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:37 crc kubenswrapper[5016]: I1011 07:41:37.982484 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:37Z","lastTransitionTime":"2025-10-11T07:41:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.084580 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.084611 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.084637 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.084666 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.084677 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:38Z","lastTransitionTime":"2025-10-11T07:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.187355 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.187392 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.187407 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.187423 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.187433 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:38Z","lastTransitionTime":"2025-10-11T07:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.290255 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.290341 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.290362 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.290387 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.290404 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:38Z","lastTransitionTime":"2025-10-11T07:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.393686 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.393753 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.393771 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.393795 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.393817 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:38Z","lastTransitionTime":"2025-10-11T07:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.497293 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.497397 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.497428 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.497469 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.497490 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:38Z","lastTransitionTime":"2025-10-11T07:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.600024 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.600066 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.600075 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.600090 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.600099 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:38Z","lastTransitionTime":"2025-10-11T07:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.702707 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.702758 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.702771 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.702795 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.702836 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:38Z","lastTransitionTime":"2025-10-11T07:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.805111 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.805156 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.805168 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.805187 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.805199 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:38Z","lastTransitionTime":"2025-10-11T07:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.907909 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.907959 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.907971 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.907986 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.907997 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:38Z","lastTransitionTime":"2025-10-11T07:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.961794 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.961922 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.961958 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:41:38 crc kubenswrapper[5016]: E1011 07:41:38.961990 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:42.961967343 +0000 UTC m=+150.862423299 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.962023 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:41:38 crc kubenswrapper[5016]: I1011 07:41:38.962054 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:41:38 crc kubenswrapper[5016]: E1011 07:41:38.962065 5016 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 11 07:41:38 crc kubenswrapper[5016]: E1011 07:41:38.962138 5016 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 11 07:41:38 crc kubenswrapper[5016]: E1011 07:41:38.962156 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-11 07:42:42.962139548 +0000 UTC m=+150.862595504 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 11 07:41:38 crc kubenswrapper[5016]: E1011 07:41:38.962174 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-11 07:42:42.962164808 +0000 UTC m=+150.862620774 (durationBeforeRetry 1m4s). 
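Note: the TearDown above fails because the kubevirt.io.hostpath-provisioner CSI driver has not (re)registered with the kubelet's plugin manager since the restart, and "No retries permitted until ... (durationBeforeRetry 1m4s)" is the volume manager's exponential backoff: each consecutive failure of the same mount/unmount operation doubles the wait (the m=+150.86 suffix is the kubelet process's monotonic clock). A sketch of the doubling that yields a 1m4s wait by the eighth consecutive failure; the base and cap here are illustrative assumptions, not the exact kubelet constants.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed base of 500ms doubling per failure, with an assumed cap.
	wait := 500 * time.Millisecond
	maxWait := 2*time.Minute + 2*time.Second
	for attempt := 1; attempt <= 9; attempt++ {
		fmt.Printf("failure %d: next retry in %s\n", attempt, wait)
		wait *= 2
		if wait > maxWait {
			wait = maxWait
		}
	}
	// failure 8 prints 1m4s, matching the durationBeforeRetry in the log.
}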
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 11 07:41:38 crc kubenswrapper[5016]: E1011 07:41:38.962234 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 07:41:38 crc kubenswrapper[5016]: E1011 07:41:38.962300 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 07:41:38 crc kubenswrapper[5016]: E1011 07:41:38.962325 5016 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:41:38 crc kubenswrapper[5016]: E1011 07:41:38.962351 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 11 07:41:38 crc kubenswrapper[5016]: E1011 07:41:38.962421 5016 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 11 07:41:38 crc kubenswrapper[5016]: E1011 07:41:38.962436 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-11 07:42:42.962401545 +0000 UTC m=+150.862857531 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:41:38 crc kubenswrapper[5016]: E1011 07:41:38.962446 5016 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:41:38 crc kubenswrapper[5016]: E1011 07:41:38.962568 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-11 07:42:42.962533328 +0000 UTC m=+150.862989324 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.010806 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.010854 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.010866 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.010901 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.010913 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:39Z","lastTransitionTime":"2025-10-11T07:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.115341 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.115397 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.115409 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.115430 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.115447 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:39Z","lastTransitionTime":"2025-10-11T07:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.132544 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.132564 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.132580 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.132713 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:41:39 crc kubenswrapper[5016]: E1011 07:41:39.132867 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:41:39 crc kubenswrapper[5016]: E1011 07:41:39.133013 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:41:39 crc kubenswrapper[5016]: E1011 07:41:39.133057 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:41:39 crc kubenswrapper[5016]: E1011 07:41:39.133114 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.146028 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.217619 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.217714 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.217732 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.217753 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.217766 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:39Z","lastTransitionTime":"2025-10-11T07:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.320096 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.320142 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.320157 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.320176 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.320190 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:39Z","lastTransitionTime":"2025-10-11T07:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.422733 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.422798 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.422816 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.422840 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.422858 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:39Z","lastTransitionTime":"2025-10-11T07:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.526038 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.526157 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.526175 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.526203 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.526222 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:39Z","lastTransitionTime":"2025-10-11T07:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.629978 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.630074 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.630095 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.630119 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.630145 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:39Z","lastTransitionTime":"2025-10-11T07:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.732864 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.732929 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.732951 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.732979 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.732997 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:39Z","lastTransitionTime":"2025-10-11T07:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.835041 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.835075 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.835083 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.835096 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.835105 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:39Z","lastTransitionTime":"2025-10-11T07:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.937447 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.937602 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.937623 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.937646 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:39 crc kubenswrapper[5016]: I1011 07:41:39.937690 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:39Z","lastTransitionTime":"2025-10-11T07:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.040917 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.040979 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.040995 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.041018 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.041036 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:40Z","lastTransitionTime":"2025-10-11T07:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.143945 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.143984 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.143992 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.144011 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.144030 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:40Z","lastTransitionTime":"2025-10-11T07:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.247245 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.247275 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.247284 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.247299 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.247309 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:40Z","lastTransitionTime":"2025-10-11T07:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.349908 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.349954 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.349966 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.349982 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.349996 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:40Z","lastTransitionTime":"2025-10-11T07:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.453168 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.453250 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.453275 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.453313 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.453337 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:40Z","lastTransitionTime":"2025-10-11T07:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.556239 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.556298 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.556314 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.556335 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.556350 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:40Z","lastTransitionTime":"2025-10-11T07:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.659089 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.659127 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.659137 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.659153 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.659163 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:40Z","lastTransitionTime":"2025-10-11T07:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.761727 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.761812 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.761834 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.761863 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.761886 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:40Z","lastTransitionTime":"2025-10-11T07:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.864341 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.864416 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.864436 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.864468 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.864552 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:40Z","lastTransitionTime":"2025-10-11T07:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.967744 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.967786 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.967797 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.967816 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:40 crc kubenswrapper[5016]: I1011 07:41:40.967827 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:40Z","lastTransitionTime":"2025-10-11T07:41:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.070607 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.070682 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.070733 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.070753 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.070767 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:41Z","lastTransitionTime":"2025-10-11T07:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.133866 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.133915 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.133923 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:41:41 crc kubenswrapper[5016]: E1011 07:41:41.134015 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.134093 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:41:41 crc kubenswrapper[5016]: E1011 07:41:41.134228 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:41:41 crc kubenswrapper[5016]: E1011 07:41:41.134310 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:41:41 crc kubenswrapper[5016]: E1011 07:41:41.134420 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.173572 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.173624 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.173637 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.173719 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.173739 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:41Z","lastTransitionTime":"2025-10-11T07:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.276839 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.276893 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.276906 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.276924 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.276937 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:41Z","lastTransitionTime":"2025-10-11T07:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.380157 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.380340 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.380394 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.380416 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.380432 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:41Z","lastTransitionTime":"2025-10-11T07:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.482983 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.483012 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.483045 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.483058 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.483066 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:41Z","lastTransitionTime":"2025-10-11T07:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.585919 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.585976 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.585994 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.586018 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.586036 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:41Z","lastTransitionTime":"2025-10-11T07:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.688813 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.689080 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.689092 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.689107 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.689119 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:41Z","lastTransitionTime":"2025-10-11T07:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.792419 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.792477 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.792492 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.792516 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.792532 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:41Z","lastTransitionTime":"2025-10-11T07:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.895089 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.895148 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.895158 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.895173 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.895183 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:41Z","lastTransitionTime":"2025-10-11T07:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.997603 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.997634 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.997642 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.997671 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:41 crc kubenswrapper[5016]: I1011 07:41:41.997680 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:41Z","lastTransitionTime":"2025-10-11T07:41:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.099895 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.099955 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.099972 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.099996 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.100013 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:42Z","lastTransitionTime":"2025-10-11T07:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.202962 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.202995 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.203004 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.203017 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.203025 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:42Z","lastTransitionTime":"2025-10-11T07:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.305269 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.305306 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.305315 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.305328 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.305336 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:42Z","lastTransitionTime":"2025-10-11T07:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.408580 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.408675 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.408690 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.408717 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.408732 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:42Z","lastTransitionTime":"2025-10-11T07:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.511445 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.511477 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.511486 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.511499 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.511508 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:42Z","lastTransitionTime":"2025-10-11T07:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.613953 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.613987 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.613996 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.614015 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.614023 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:42Z","lastTransitionTime":"2025-10-11T07:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.716865 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.716920 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.716935 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.716956 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.716973 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:42Z","lastTransitionTime":"2025-10-11T07:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.819110 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.819189 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.819207 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.819230 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.819247 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:42Z","lastTransitionTime":"2025-10-11T07:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.921629 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.921704 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.921717 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.921735 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:42 crc kubenswrapper[5016]: I1011 07:41:42.921746 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:42Z","lastTransitionTime":"2025-10-11T07:41:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.024710 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.024743 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.024752 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.024764 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.024774 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:43Z","lastTransitionTime":"2025-10-11T07:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.126271 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.126324 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.126339 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.126360 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.126376 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:43Z","lastTransitionTime":"2025-10-11T07:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.132855 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 11 07:41:43 crc kubenswrapper[5016]: E1011 07:41:43.133027 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.133058 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg"
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.133093 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.133707 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 11 07:41:43 crc kubenswrapper[5016]: E1011 07:41:43.133756 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b"
Oct 11 07:41:43 crc kubenswrapper[5016]: E1011 07:41:43.133866 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 11 07:41:43 crc kubenswrapper[5016]: E1011 07:41:43.133917 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.134458 5016 scope.go:117] "RemoveContainer" containerID="ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2"
Oct 11 07:41:43 crc kubenswrapper[5016]: E1011 07:41:43.135026 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-79nv2_openshift-ovn-kubernetes(68e9f942-5043-4fc3-9133-b608e8cd4ac0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0"
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.147878 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:43Z is after 2025-08-24T17:21:41Z"
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.167315 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"917a6581-31ec-4abc-9543-652c8295144f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db26be15889ed64b2e7425d6f0c404c277bd8638a0b18c0b541d44ef0853849a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb72f32a44458c84a0a471394bb56a21482c1fcb52c0651d0ded37e8a74ae82d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029bbb43d89b3f785c23d3d3e39947868aa89248da4644f3ee510a6125022df\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dff3f0d1bb780948cbf0968f324e758a24b563742582c0fbbf6882ce54fd4950\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6fcb33f7818b53d77ad8af35386745ded79a1261957944ab1852fb5a9e245cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://507e8aefd4dbb67ed7eaff20a10f31c0f858a208240c74f1e60595b37d4319fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3070c7236b69749302951f1733c2e6e809b0eb97bae20567d44cd04937cd3de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2khg4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xcmjb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:43Z is after 2025-08-24T17:21:41Z"
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.180307 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0633ed26-7b6a-4a20-92ba-569891d9faff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74f398b069aee204298393c1f78cb2b983535802a2ba674ffc52922035ac333\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm9zd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-49bvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:43Z is after 2025-08-24T17:21:41Z"
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.197613 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68e9f942-5043-4fc3-9133-b608e8cd4ac0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:41:28Z\\\",\\\"message\\\":\\\"4 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-d7sp7\\\\nI1011 07:41:28.951794 7074 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nF1011 07:41:28.951791 7074 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:28Z is after 2025-08-24T17:21:41Z]\\\\nI1011 07:41:28.951791 7074 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-459lg\\\\nI1011 07:41:28.951813 7074 obj_retry.go:365] Adding new object: *v1.Pod openshift-dns/node-resolver-d7sp7\\\\nI1011 07:41:28.951818 7074 ovn.go:134]\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:41:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-79nv2_openshift-ovn-kubernetes(68e9f942-5043-4fc3-9133-b608e8cd4ac0)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sg9zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-79nv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:43Z is after 2025-08-24T17:21:41Z"
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.209540 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-459lg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ceaf34e-81b3-457f-8f03-d807f795392b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tc8pn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-459lg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:43Z is after 2025-08-24T17:21:41Z"
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.220751 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5b11a68-48a2-4a5e-975c-1021267eb7b3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7677cc3fcf371609213b8490b20351b66e7feffafbdcf5eec9d9692b47258ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://311b1b98fad7e3301d80bb2bfd9b69e74a77010ce700f3c2e3871d4a68d08b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dced16483a9f55747625898c4515422fc7697cc1a47c32f50e2389d1b430ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04bb6b6ccf92ac9865f5a3189d30a6583e3d49f8ab7823b1b89f79e297a13792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:43Z is after 2025-08-24T17:21:41Z"
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.228989 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.229062 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.229090 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.229112 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.229128 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:43Z","lastTransitionTime":"2025-10-11T07:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.231353 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dc43cab-c2c9-4d7f-b908-4f86d1d5e3b0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://516f350b59aaf3bf09a0f57f5e320b94e2d31b696055b3f1095b16fb6ca62bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de2507ca9c487e1b435773006b5f21fbebe10d357449235d97aee3a26e44b545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de2507ca9c487e1b435773006b5f21fbebe10d357449235d97aee3a26e44b545\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:43Z is after 2025-08-24T17:21:41Z"
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.243693 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f1ef3ae5771425106855c78a7f40b004e4f35ec42b264cc804312ae024ffb88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:43Z is after 2025-08-24T17:21:41Z"
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.254691 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:43Z is after 2025-08-24T17:21:41Z"
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.270819 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c76b1550a01f53df7f947971ab892d0f6b509d66f770a25e576c60666afb311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8c361333fb07e54e6c6d852376c4d3de84d85f28f2a162fbb5e0e9293de2091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:43Z is after 2025-08-24T17:21:41Z"
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.286276 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-d7sp7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bf81fbe-9695-4fc9-a46d-d7700f56e894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6074b44f58adf98804e945e0c424cf60ad33ff2fdfce1ffd253dcf593fb672a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x962l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-d7sp7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:43Z is after 2025-08-24T17:21:41Z"
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.297151 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:43Z is after 2025-08-24T17:21:41Z"
Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.317080 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c6c3028-a67b-441c-a7db-adb494840054\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052cf969ede798da08d5b097daaf1424d6ccad8eaa6699500e1d0cfe15a5625e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d6c454ac0533b0f690a8d17eaec62d8fb26b02233b00f87e7fc5c03ef3790eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5052a1db05bf3dcb763f3a9e0cf9f74a9d7ad74c5a7a0baf52ec94281c67f51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba5ca7a8267df0d4a27aee36399c69320463dd65eb9841d5ce11a25fe8ba7e0\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ba5ca7a8267df0d4a27aee36399c69320463dd65eb9841d5ce11a25fe8ba7e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.331303 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.331350 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.331361 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.331380 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.331393 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:43Z","lastTransitionTime":"2025-10-11T07:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.340202 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5922f8f5-3faa-439f-b76e-32829c0b3f22\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9637ffdecd7df8222d72647266c14d4e978c9f8977ac00745ab27ce08d910dad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0fff1eb9dabcbc1fd51c3da8c78e35123019c8a31c619dbc968e7630f36b95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc828a56d5953f66a31c46937d30cf668e5dea64df15361bc741c68a8e0a876a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78849c3909228621690a66bd1e9a25eefa34107846399956602bf5ef04e9f86c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cd7e74d0129d79785f57f949025b118ab1d8529107954da8835301d1587c00\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-11T07:40:35Z\\\",\\\"message\\\":\\\"le observer\\\\nW1011 07:40:34.896757 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1011 07:40:34.896878 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1011 07:40:34.897475 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3290972782/tls.crt::/tmp/serving-cert-3290972782/tls.key\\\\\\\"\\\\nI1011 07:40:35.355966 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1011 07:40:35.365105 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1011 07:40:35.365137 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1011 07:40:35.365168 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1011 07:40:35.365174 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1011 07:40:35.376203 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1011 07:40:35.376446 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376452 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1011 07:40:35.376456 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1011 07:40:35.376459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1011 07:40:35.376461 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1011 07:40:35.376464 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1011 07:40:35.376489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1011 07:40:35.377967 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94ef5852500821549584b72f449348a8f55384834dc0f103fed54fa1dcf5962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:16Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b450c0b7eadcf63d6318b95db1b5b1ace46e4f46a99de503f454778dcacb799e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-11T07:40:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.354520 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36a9f6d9f237a24091dd7f5ad1ce8708ba914d521a34c298f6c0154925e1b60c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.368983 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lbbb2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48e55d9a-f690-40ae-ba16-e91c4d9d3a95\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf8f1baa34989ef57e3b44aeb2d3bd578086e2a41bcee8e6cb2d6e6f689fb3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-11T07:41:25Z\\\",\\\"message\\\":\\\"2025-10-11T07:40:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_e6443a36-1a23-450e-8373-2da2a4518c5a\\\\n2025-10-11T07:40:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_e6443a36-1a23-450e-8373-2da2a4518c5a to /host/opt/cni/bin/\\\\n2025-10-11T07:40:40Z [verbose] multus-daemon started\\\\n2025-10-11T07:40:40Z [verbose] Readiness Indicator file check\\\\n2025-10-11T07:41:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-11T07:40:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:41:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m72h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lbbb2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.379579 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jk9cl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a66833c-ffa6-4af6-9e15-90e24db9a290\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0cfcd0f6daf8c8d6e0ed85fde423e1e0596b01a867311246d43e288ce371985\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-km9bc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jk9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:43Z is after 2025-08-24T17:21:41Z" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.389924 5016 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac4c83c5-2d5a-49fb-b2f3-43ab267dcd99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-11T07:40:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73a11cae0088711f12e7156914900e7e3c3641a0332763bb03f9457e09826635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42f97ef603cbc091e8d644c74b1e87734defad234b450ed334ca79bacdcd772e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-11T07:40:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gnql6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-11T07:40:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2r66f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:43Z is after 2025-08-24T17:21:41Z" Oct 11 
07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.433761 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.433809 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.433821 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.433842 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.433854 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:43Z","lastTransitionTime":"2025-10-11T07:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.536449 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.536493 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.536505 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.536520 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.536533 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:43Z","lastTransitionTime":"2025-10-11T07:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.638569 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.638608 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.638618 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.638636 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.638740 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:43Z","lastTransitionTime":"2025-10-11T07:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.740464 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.740507 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.740516 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.740532 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.740541 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:43Z","lastTransitionTime":"2025-10-11T07:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.843001 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.843045 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.843056 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.843071 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.843082 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:43Z","lastTransitionTime":"2025-10-11T07:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.945116 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.945166 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.945180 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.945198 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:43 crc kubenswrapper[5016]: I1011 07:41:43.945213 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:43Z","lastTransitionTime":"2025-10-11T07:41:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.048520 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.048576 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.048594 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.048614 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.048630 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:44Z","lastTransitionTime":"2025-10-11T07:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.150876 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.150943 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.150961 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.150986 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.151004 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:44Z","lastTransitionTime":"2025-10-11T07:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.253943 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.254005 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.254022 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.254044 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.254062 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:44Z","lastTransitionTime":"2025-10-11T07:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.357221 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.357306 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.357322 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.357343 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.357361 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:44Z","lastTransitionTime":"2025-10-11T07:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.459991 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.460036 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.460046 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.460060 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.460070 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:44Z","lastTransitionTime":"2025-10-11T07:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.562244 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.562294 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.562310 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.562332 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.562356 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:44Z","lastTransitionTime":"2025-10-11T07:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.664086 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.664118 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.664126 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.664138 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.664149 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:44Z","lastTransitionTime":"2025-10-11T07:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.766524 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.766570 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.766581 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.766598 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.766611 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:44Z","lastTransitionTime":"2025-10-11T07:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.869207 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.869286 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.869305 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.869330 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.869348 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:44Z","lastTransitionTime":"2025-10-11T07:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.972242 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.972295 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.972306 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.972330 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:44 crc kubenswrapper[5016]: I1011 07:41:44.972343 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:44Z","lastTransitionTime":"2025-10-11T07:41:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.075044 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.075080 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.075088 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.075101 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.075109 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:45Z","lastTransitionTime":"2025-10-11T07:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.132917 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.132988 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.133021 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.132936 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:41:45 crc kubenswrapper[5016]: E1011 07:41:45.133064 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:41:45 crc kubenswrapper[5016]: E1011 07:41:45.133392 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:41:45 crc kubenswrapper[5016]: E1011 07:41:45.133469 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:41:45 crc kubenswrapper[5016]: E1011 07:41:45.133569 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.177336 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.177373 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.177383 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.177396 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.177404 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:45Z","lastTransitionTime":"2025-10-11T07:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.279455 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.279536 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.279564 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.279596 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.279618 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:45Z","lastTransitionTime":"2025-10-11T07:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.382457 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.382531 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.382540 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.382555 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.382567 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:45Z","lastTransitionTime":"2025-10-11T07:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.485262 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.485313 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.485325 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.485343 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.485356 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:45Z","lastTransitionTime":"2025-10-11T07:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.588302 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.588354 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.588370 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.588390 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.588404 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:45Z","lastTransitionTime":"2025-10-11T07:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.691598 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.691677 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.691689 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.691705 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.691720 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:45Z","lastTransitionTime":"2025-10-11T07:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.794884 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.794942 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.794960 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.794984 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.795005 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:45Z","lastTransitionTime":"2025-10-11T07:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.898142 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.898187 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.898199 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.898221 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:45 crc kubenswrapper[5016]: I1011 07:41:45.898233 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:45Z","lastTransitionTime":"2025-10-11T07:41:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.001168 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.001410 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.001466 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.001496 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.001514 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:46Z","lastTransitionTime":"2025-10-11T07:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.104098 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.104148 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.104162 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.104181 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.104195 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:46Z","lastTransitionTime":"2025-10-11T07:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.145991 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.207156 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.207194 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.207203 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.207218 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.207230 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:46Z","lastTransitionTime":"2025-10-11T07:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.309972 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.310006 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.310014 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.310029 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.310037 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:46Z","lastTransitionTime":"2025-10-11T07:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.412800 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.412870 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.412884 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.412908 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.412923 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:46Z","lastTransitionTime":"2025-10-11T07:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.515183 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.515226 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.515234 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.515249 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.515260 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:46Z","lastTransitionTime":"2025-10-11T07:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.617694 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.617755 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.617763 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.617779 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.617806 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:46Z","lastTransitionTime":"2025-10-11T07:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.721001 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.721117 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.721170 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.721209 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.721288 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:46Z","lastTransitionTime":"2025-10-11T07:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.824785 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.824837 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.824848 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.824865 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.824875 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:46Z","lastTransitionTime":"2025-10-11T07:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.927547 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.927599 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.927610 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.927639 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:46 crc kubenswrapper[5016]: I1011 07:41:46.927671 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:46Z","lastTransitionTime":"2025-10-11T07:41:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.030829 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.030871 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.030879 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.030896 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.030911 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:47Z","lastTransitionTime":"2025-10-11T07:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.132375 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.132415 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.132391 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:41:47 crc kubenswrapper[5016]: E1011 07:41:47.132518 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:41:47 crc kubenswrapper[5016]: E1011 07:41:47.132629 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:41:47 crc kubenswrapper[5016]: E1011 07:41:47.132779 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.132829 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:41:47 crc kubenswrapper[5016]: E1011 07:41:47.133370 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.138002 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.138042 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.138053 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.138546 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.138573 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:47Z","lastTransitionTime":"2025-10-11T07:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.240903 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.240938 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.240947 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.240959 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.240968 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:47Z","lastTransitionTime":"2025-10-11T07:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.343867 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.343900 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.343908 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.343923 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.343931 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:47Z","lastTransitionTime":"2025-10-11T07:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.446566 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.446691 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.446709 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.446731 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.446747 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:47Z","lastTransitionTime":"2025-10-11T07:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.550101 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.550157 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.550174 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.550196 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.550213 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:47Z","lastTransitionTime":"2025-10-11T07:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.652543 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.652582 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.652592 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.652608 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.652620 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:47Z","lastTransitionTime":"2025-10-11T07:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.755035 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.755076 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.755087 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.755103 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.755115 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:47Z","lastTransitionTime":"2025-10-11T07:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.857045 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.857089 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.857103 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.857123 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.857138 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:47Z","lastTransitionTime":"2025-10-11T07:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.932222 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.932257 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.932275 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.932291 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.932303 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:47Z","lastTransitionTime":"2025-10-11T07:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:47 crc kubenswrapper[5016]: E1011 07:41:47.944458 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:47Z is after 
2025-08-24T17:21:41Z" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.947802 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.947833 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.947844 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.947859 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.947870 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:47Z","lastTransitionTime":"2025-10-11T07:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:47 crc kubenswrapper[5016]: E1011 07:41:47.961190 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:47Z is after 
2025-08-24T17:21:41Z" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.964692 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.964735 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.964753 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.964774 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.964787 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:47Z","lastTransitionTime":"2025-10-11T07:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:47 crc kubenswrapper[5016]: E1011 07:41:47.978813 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:47Z is after 
2025-08-24T17:21:41Z" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.982254 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.982307 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.982321 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.982335 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.982345 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:47Z","lastTransitionTime":"2025-10-11T07:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:47 crc kubenswrapper[5016]: E1011 07:41:47.994202 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:47Z is after 
2025-08-24T17:21:41Z" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.997580 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.997621 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.997632 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.997648 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:47 crc kubenswrapper[5016]: I1011 07:41:47.997684 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:47Z","lastTransitionTime":"2025-10-11T07:41:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:48 crc kubenswrapper[5016]: E1011 07:41:48.008635 5016 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-11T07:41:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f4122745-3248-41a5-a5a4-4bfabc330a61\\\",\\\"systemUUID\\\":\\\"08126ab1-62b0-4804-a043-8168875482af\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-11T07:41:48Z is after 
2025-08-24T17:21:41Z" Oct 11 07:41:48 crc kubenswrapper[5016]: E1011 07:41:48.008906 5016 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.010428 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.010461 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.010470 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.010483 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.010495 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:48Z","lastTransitionTime":"2025-10-11T07:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.112695 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.112762 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.112771 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.112783 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.112795 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:48Z","lastTransitionTime":"2025-10-11T07:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.215322 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.215383 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.215396 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.215413 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.215421 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:48Z","lastTransitionTime":"2025-10-11T07:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.317024 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.317056 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.317081 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.317094 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.317103 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:48Z","lastTransitionTime":"2025-10-11T07:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.419484 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.419518 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.419527 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.419541 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.419558 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:48Z","lastTransitionTime":"2025-10-11T07:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.522220 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.522261 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.522269 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.522286 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.522295 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:48Z","lastTransitionTime":"2025-10-11T07:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.625162 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.625198 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.625207 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.625221 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.625230 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:48Z","lastTransitionTime":"2025-10-11T07:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.731566 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.731814 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.731835 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.731854 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.731866 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:48Z","lastTransitionTime":"2025-10-11T07:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.834488 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.834518 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.834529 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.834550 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.834564 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:48Z","lastTransitionTime":"2025-10-11T07:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.936684 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.936763 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.936780 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.937175 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:48 crc kubenswrapper[5016]: I1011 07:41:48.937229 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:48Z","lastTransitionTime":"2025-10-11T07:41:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.040265 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.040304 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.040315 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.040331 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.040342 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:49Z","lastTransitionTime":"2025-10-11T07:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.132258 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:41:49 crc kubenswrapper[5016]: E1011 07:41:49.132402 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.132611 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:41:49 crc kubenswrapper[5016]: E1011 07:41:49.132715 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.132858 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:41:49 crc kubenswrapper[5016]: E1011 07:41:49.132932 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.133165 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:41:49 crc kubenswrapper[5016]: E1011 07:41:49.133374 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.143276 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.143338 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.143355 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.143377 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.143393 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:49Z","lastTransitionTime":"2025-10-11T07:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.246552 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.246643 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.246697 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.246733 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.246756 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:49Z","lastTransitionTime":"2025-10-11T07:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.350616 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.350730 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.350749 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.350811 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.350829 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:49Z","lastTransitionTime":"2025-10-11T07:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.454629 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.454709 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.454726 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.454754 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.454777 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:49Z","lastTransitionTime":"2025-10-11T07:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.557733 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.557783 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.557794 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.557812 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.557825 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:49Z","lastTransitionTime":"2025-10-11T07:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.660363 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.660458 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.660494 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.660526 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.660550 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:49Z","lastTransitionTime":"2025-10-11T07:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.763515 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.763581 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.763604 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.763631 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.763680 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:49Z","lastTransitionTime":"2025-10-11T07:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.866223 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.866275 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.866291 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.866312 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.866330 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:49Z","lastTransitionTime":"2025-10-11T07:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.969091 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.969129 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.969141 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.969158 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:49 crc kubenswrapper[5016]: I1011 07:41:49.969171 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:49Z","lastTransitionTime":"2025-10-11T07:41:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.071824 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.071863 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.071873 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.071889 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.071900 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:50Z","lastTransitionTime":"2025-10-11T07:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.173725 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.173769 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.173779 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.173795 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.173806 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:50Z","lastTransitionTime":"2025-10-11T07:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.276359 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.276413 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.276428 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.276448 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.276467 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:50Z","lastTransitionTime":"2025-10-11T07:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.379609 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.379668 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.379677 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.379692 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.379700 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:50Z","lastTransitionTime":"2025-10-11T07:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.482855 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.482920 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.482930 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.482944 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.482953 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:50Z","lastTransitionTime":"2025-10-11T07:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.585373 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.585460 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.585478 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.585501 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.585518 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:50Z","lastTransitionTime":"2025-10-11T07:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.687384 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.687474 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.687491 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.687516 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.687534 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:50Z","lastTransitionTime":"2025-10-11T07:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.790861 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.790924 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.790936 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.790954 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.790966 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:50Z","lastTransitionTime":"2025-10-11T07:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.894213 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.894292 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.894310 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.894332 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.894349 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:50Z","lastTransitionTime":"2025-10-11T07:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.998332 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.998403 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.998422 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.998451 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:50 crc kubenswrapper[5016]: I1011 07:41:50.998470 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:50Z","lastTransitionTime":"2025-10-11T07:41:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.101205 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.101266 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.101283 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.101306 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.101322 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:51Z","lastTransitionTime":"2025-10-11T07:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
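Each setters.go:603 entry above embeds the node's Ready condition as a JSON object after condition=. A minimal sketch for pulling that object out of a captured line (the regex and helper name are illustrative, not kubelet code; the sample line is reassembled from the 07:41:51.204022 entry below):

```python
#!/usr/bin/env python3
# Sketch: extract and decode the Ready condition from a "Node became not
# ready" journal line of the form shown in this log.
import json
import re

CONDITION_RE = re.compile(r'condition=(\{.*\})')

def parse_condition(line: str) -> dict:
    """Return the decoded condition object, or {} if the line carries none."""
    m = CONDITION_RE.search(line)
    return json.loads(m.group(1)) if m else {}

sample = ('Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.204022 5016 '
          'setters.go:603] "Node became not ready" node="crc" condition='
          '{"type":"Ready","status":"False",'
          '"lastHeartbeatTime":"2025-10-11T07:41:51Z",'
          '"lastTransitionTime":"2025-10-11T07:41:51Z","reason":"KubeletNotReady",'
          '"message":"container runtime network not ready: NetworkReady=false '
          'reason:NetworkPluginNotReady message:Network plugin returns error: '
          'no CNI configuration file in /etc/kubernetes/cni/net.d/. '
          'Has your network provider started?"}')

cond = parse_condition(sample)
print(cond["type"], cond["status"], cond["reason"])  # Ready False KubeletNotReady
```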
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.133415 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.133588 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.133605 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.133633 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg"
Oct 11 07:41:51 crc kubenswrapper[5016]: E1011 07:41:51.133840 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 11 07:41:51 crc kubenswrapper[5016]: E1011 07:41:51.133948 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 11 07:41:51 crc kubenswrapper[5016]: E1011 07:41:51.134062 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 11 07:41:51 crc kubenswrapper[5016]: E1011 07:41:51.134117 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.203899 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.203951 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.203987 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.204007 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.204022 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:51Z","lastTransitionTime":"2025-10-11T07:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.307182 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.307250 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.307273 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.307304 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.307329 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:51Z","lastTransitionTime":"2025-10-11T07:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.410168 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.410243 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.410256 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.410272 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.410284 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:51Z","lastTransitionTime":"2025-10-11T07:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.513489 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.513768 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.513853 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.513920 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.513991 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:51Z","lastTransitionTime":"2025-10-11T07:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.616564 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.616672 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.616688 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.616707 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.616718 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:51Z","lastTransitionTime":"2025-10-11T07:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.719938 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.719978 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.719989 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.720006 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.720018 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:51Z","lastTransitionTime":"2025-10-11T07:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.822147 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.822192 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.822202 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.822217 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.822226 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:51Z","lastTransitionTime":"2025-10-11T07:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.924449 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.924744 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.924843 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.924916 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:51 crc kubenswrapper[5016]: I1011 07:41:51.924980 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:51Z","lastTransitionTime":"2025-10-11T07:41:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.026829 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.026868 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.026877 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.026890 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.026900 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:52Z","lastTransitionTime":"2025-10-11T07:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.129001 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.129233 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.129306 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.129371 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.129443 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:52Z","lastTransitionTime":"2025-10-11T07:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.232465 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.232747 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.232847 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.232967 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.232988 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:52Z","lastTransitionTime":"2025-10-11T07:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.335107 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.335143 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.335156 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.335173 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.335185 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:52Z","lastTransitionTime":"2025-10-11T07:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.437081 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.437134 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.437145 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.437166 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.437179 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:52Z","lastTransitionTime":"2025-10-11T07:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.539711 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.539796 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.539816 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.539842 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.539859 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:52Z","lastTransitionTime":"2025-10-11T07:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.643039 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.643090 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.643108 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.643130 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.643146 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:52Z","lastTransitionTime":"2025-10-11T07:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.746264 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.746310 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.746326 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.746347 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.746363 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:52Z","lastTransitionTime":"2025-10-11T07:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.849579 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.849647 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.849681 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.849706 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.849723 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:52Z","lastTransitionTime":"2025-10-11T07:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.956811 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.956906 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.956924 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.956979 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:52 crc kubenswrapper[5016]: I1011 07:41:52.956998 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:52Z","lastTransitionTime":"2025-10-11T07:41:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.060014 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.060058 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.060073 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.060091 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.060103 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:53Z","lastTransitionTime":"2025-10-11T07:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.133129 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.133206 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 11 07:41:53 crc kubenswrapper[5016]: E1011 07:41:53.133820 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.133374 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.133252 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 11 07:41:53 crc kubenswrapper[5016]: E1011 07:41:53.134040 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 11 07:41:53 crc kubenswrapper[5016]: E1011 07:41:53.134762 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 11 07:41:53 crc kubenswrapper[5016]: E1011 07:41:53.134912 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.162458 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.162726 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.162809 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.162871 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.162931 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:53Z","lastTransitionTime":"2025-10-11T07:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.174014 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-jk9cl" podStartSLOduration=79.173985771 podStartE2EDuration="1m19.173985771s" podCreationTimestamp="2025-10-11 07:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:41:53.159951975 +0000 UTC m=+101.060407931" watchObservedRunningTime="2025-10-11 07:41:53.173985771 +0000 UTC m=+101.074441737"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.174328 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2r66f" podStartSLOduration=78.17431953 podStartE2EDuration="1m18.17431953s" podCreationTimestamp="2025-10-11 07:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:41:53.173764125 +0000 UTC m=+101.074220081" watchObservedRunningTime="2025-10-11 07:41:53.17431953 +0000 UTC m=+101.074775486"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.191086 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=51.191062862 podStartE2EDuration="51.191062862s" podCreationTimestamp="2025-10-11 07:41:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:41:53.190770184 +0000 UTC m=+101.091226150" watchObservedRunningTime="2025-10-11 07:41:53.191062862 +0000 UTC m=+101.091518828"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.218101 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=7.218084005 podStartE2EDuration="7.218084005s" podCreationTimestamp="2025-10-11 07:41:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:41:53.217396056 +0000 UTC m=+101.117852002" watchObservedRunningTime="2025-10-11 07:41:53.218084005 +0000 UTC m=+101.118539951"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.250687 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=78.250642954 podStartE2EDuration="1m18.250642954s" podCreationTimestamp="2025-10-11 07:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:41:53.233922672 +0000 UTC m=+101.134378618" watchObservedRunningTime="2025-10-11 07:41:53.250642954 +0000 UTC m=+101.151098910"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.266673 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.266717 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.266729 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.266745 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.266757 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:53Z","lastTransitionTime":"2025-10-11T07:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.295812 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-lbbb2" podStartSLOduration=79.295790448 podStartE2EDuration="1m19.295790448s" podCreationTimestamp="2025-10-11 07:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:41:53.269182027 +0000 UTC m=+101.169638003" watchObservedRunningTime="2025-10-11 07:41:53.295790448 +0000 UTC m=+101.196246404"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.315166 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-xcmjb" podStartSLOduration=79.315144705 podStartE2EDuration="1m19.315144705s" podCreationTimestamp="2025-10-11 07:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:41:53.315057292 +0000 UTC m=+101.215513258" watchObservedRunningTime="2025-10-11 07:41:53.315144705 +0000 UTC m=+101.215600651"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.354312 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podStartSLOduration=79.354286899 podStartE2EDuration="1m19.354286899s" podCreationTimestamp="2025-10-11 07:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:41:53.327923665 +0000 UTC m=+101.228379621" watchObservedRunningTime="2025-10-11 07:41:53.354286899 +0000 UTC m=+101.254742855"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.368839 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.368898 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.368914 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.368935 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.369003 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:53Z","lastTransitionTime":"2025-10-11T07:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.390061 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=75.390024777 podStartE2EDuration="1m15.390024777s" podCreationTimestamp="2025-10-11 07:40:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:41:53.389778881 +0000 UTC m=+101.290234837" watchObservedRunningTime="2025-10-11 07:41:53.390024777 +0000 UTC m=+101.290480733"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.390855 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-d7sp7" podStartSLOduration=79.390849131 podStartE2EDuration="1m19.390849131s" podCreationTimestamp="2025-10-11 07:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:41:53.377807612 +0000 UTC m=+101.278263578" watchObservedRunningTime="2025-10-11 07:41:53.390849131 +0000 UTC m=+101.291305087"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.402033 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=14.402022426 podStartE2EDuration="14.402022426s" podCreationTimestamp="2025-10-11 07:41:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:41:53.40145058 +0000 UTC m=+101.301906546" watchObservedRunningTime="2025-10-11 07:41:53.402022426 +0000 UTC m=+101.302478382"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.471127 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.471168 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.471177 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.471192 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.471201 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:53Z","lastTransitionTime":"2025-10-11T07:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.573124 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.573173 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.573183 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.573200 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.573211 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:53Z","lastTransitionTime":"2025-10-11T07:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.676418 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.676503 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.676525 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.676555 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.676580 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:53Z","lastTransitionTime":"2025-10-11T07:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.779707 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.779762 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.779779 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.779802 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.779819 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:53Z","lastTransitionTime":"2025-10-11T07:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.882540 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.882624 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.882644 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.882734 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.882769 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:53Z","lastTransitionTime":"2025-10-11T07:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.931931 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ceaf34e-81b3-457f-8f03-d807f795392b-metrics-certs\") pod \"network-metrics-daemon-459lg\" (UID: \"9ceaf34e-81b3-457f-8f03-d807f795392b\") " pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:41:53 crc kubenswrapper[5016]: E1011 07:41:53.932113 5016 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 07:41:53 crc kubenswrapper[5016]: E1011 07:41:53.932179 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ceaf34e-81b3-457f-8f03-d807f795392b-metrics-certs podName:9ceaf34e-81b3-457f-8f03-d807f795392b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:57.932159436 +0000 UTC m=+165.832615402 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9ceaf34e-81b3-457f-8f03-d807f795392b-metrics-certs") pod "network-metrics-daemon-459lg" (UID: "9ceaf34e-81b3-457f-8f03-d807f795392b") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.986154 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.986199 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.986213 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.986230 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:53 crc kubenswrapper[5016]: I1011 07:41:53.986244 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:53Z","lastTransitionTime":"2025-10-11T07:41:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.089389 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.089465 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.089485 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.089530 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.089555 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:54Z","lastTransitionTime":"2025-10-11T07:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.192897 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.192955 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.192966 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.192985 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.192997 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:54Z","lastTransitionTime":"2025-10-11T07:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.295922 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.295990 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.296008 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.296025 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.296038 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:54Z","lastTransitionTime":"2025-10-11T07:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.398988 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.399031 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.399043 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.399065 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.399077 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:54Z","lastTransitionTime":"2025-10-11T07:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.502463 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.502545 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.502562 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.502589 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.502608 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:54Z","lastTransitionTime":"2025-10-11T07:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.606175 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.606290 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.606316 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.606348 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.606370 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:54Z","lastTransitionTime":"2025-10-11T07:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.710139 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.710209 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.710227 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.710262 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.710287 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:54Z","lastTransitionTime":"2025-10-11T07:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.813690 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.813761 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.813785 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.813815 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.813835 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:54Z","lastTransitionTime":"2025-10-11T07:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.916198 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.916226 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.916233 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.916246 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:54 crc kubenswrapper[5016]: I1011 07:41:54.916254 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:54Z","lastTransitionTime":"2025-10-11T07:41:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.019060 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.019130 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.019143 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.019162 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.019175 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:55Z","lastTransitionTime":"2025-10-11T07:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.122195 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.122262 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.122286 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.122320 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.122364 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:55Z","lastTransitionTime":"2025-10-11T07:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.132956 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.133068 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:41:55 crc kubenswrapper[5016]: E1011 07:41:55.133120 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:41:55 crc kubenswrapper[5016]: E1011 07:41:55.133235 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.133370 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:41:55 crc kubenswrapper[5016]: E1011 07:41:55.133494 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.133527 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:41:55 crc kubenswrapper[5016]: E1011 07:41:55.133637 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.225770 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.225818 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.225829 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.225848 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.225861 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:55Z","lastTransitionTime":"2025-10-11T07:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.328933 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.328986 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.329003 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.329025 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.329043 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:55Z","lastTransitionTime":"2025-10-11T07:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.431688 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.431720 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.431731 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.431746 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.431757 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:55Z","lastTransitionTime":"2025-10-11T07:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.534355 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.534401 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.534409 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.534423 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.534432 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:55Z","lastTransitionTime":"2025-10-11T07:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.637723 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.638256 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.638278 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.638308 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.638330 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:55Z","lastTransitionTime":"2025-10-11T07:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.740330 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.740395 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.740412 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.740436 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.740455 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:55Z","lastTransitionTime":"2025-10-11T07:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.843706 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.843800 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.843822 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.843855 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.843894 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:55Z","lastTransitionTime":"2025-10-11T07:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.947002 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.947075 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.947101 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.947134 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:55 crc kubenswrapper[5016]: I1011 07:41:55.947160 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:55Z","lastTransitionTime":"2025-10-11T07:41:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.050732 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.050803 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.050822 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.050848 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.050865 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:56Z","lastTransitionTime":"2025-10-11T07:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.133276 5016 scope.go:117] "RemoveContainer" containerID="ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2" Oct 11 07:41:56 crc kubenswrapper[5016]: E1011 07:41:56.133480 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-79nv2_openshift-ovn-kubernetes(68e9f942-5043-4fc3-9133-b608e8cd4ac0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.152775 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.152816 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.152861 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.152878 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.152889 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:56Z","lastTransitionTime":"2025-10-11T07:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.255988 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.256052 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.256072 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.256098 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.256116 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:56Z","lastTransitionTime":"2025-10-11T07:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.359491 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.359543 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.359553 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.359572 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.359585 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:56Z","lastTransitionTime":"2025-10-11T07:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.461808 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.462154 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.462462 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.462604 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.462832 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:56Z","lastTransitionTime":"2025-10-11T07:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.565891 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.565963 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.565980 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.566008 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.566025 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:56Z","lastTransitionTime":"2025-10-11T07:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.669209 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.669273 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.669292 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.669316 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.669334 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:56Z","lastTransitionTime":"2025-10-11T07:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.772041 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.772116 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.772134 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.772159 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.772177 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:56Z","lastTransitionTime":"2025-10-11T07:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.875476 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.875538 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.875557 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.875581 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.875597 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:56Z","lastTransitionTime":"2025-10-11T07:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.978757 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.978814 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.978833 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.978857 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:56 crc kubenswrapper[5016]: I1011 07:41:56.978878 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:56Z","lastTransitionTime":"2025-10-11T07:41:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.082200 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.082261 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.082279 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.082308 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.082334 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:57Z","lastTransitionTime":"2025-10-11T07:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.132323 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.132455 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:41:57 crc kubenswrapper[5016]: E1011 07:41:57.132701 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.132736 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:41:57 crc kubenswrapper[5016]: E1011 07:41:57.132941 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.132842 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:41:57 crc kubenswrapper[5016]: E1011 07:41:57.133063 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:41:57 crc kubenswrapper[5016]: E1011 07:41:57.133293 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.184837 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.184881 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.184895 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.184913 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.184927 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:57Z","lastTransitionTime":"2025-10-11T07:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.287507 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.287572 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.287590 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.287615 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.287642 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:57Z","lastTransitionTime":"2025-10-11T07:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.390386 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.390434 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.390448 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.390468 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.390492 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:57Z","lastTransitionTime":"2025-10-11T07:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.494318 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.494445 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.494467 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.494604 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.494642 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:57Z","lastTransitionTime":"2025-10-11T07:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.598033 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.598119 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.598144 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.598174 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.598197 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:57Z","lastTransitionTime":"2025-10-11T07:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.701892 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.701939 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.701966 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.701990 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.702005 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:57Z","lastTransitionTime":"2025-10-11T07:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.804788 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.804844 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.804861 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.804879 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.804891 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:57Z","lastTransitionTime":"2025-10-11T07:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.906988 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.907021 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.907029 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.907042 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:57 crc kubenswrapper[5016]: I1011 07:41:57.907052 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:57Z","lastTransitionTime":"2025-10-11T07:41:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.009492 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.009535 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.009543 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.009559 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.009569 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:58Z","lastTransitionTime":"2025-10-11T07:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.024554 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.024605 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.024619 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.024640 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.024679 5016 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-11T07:41:58Z","lastTransitionTime":"2025-10-11T07:41:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.070454 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgtfc"] Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.070934 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgtfc" Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.072397 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.072958 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.073232 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.073518 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.176266 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/393f69cd-ad10-42a7-81cf-ae42fcd3a8e9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rgtfc\" (UID: \"393f69cd-ad10-42a7-81cf-ae42fcd3a8e9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgtfc" Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.176562 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/393f69cd-ad10-42a7-81cf-ae42fcd3a8e9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rgtfc\" (UID: \"393f69cd-ad10-42a7-81cf-ae42fcd3a8e9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgtfc" Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.176585 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/393f69cd-ad10-42a7-81cf-ae42fcd3a8e9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rgtfc\" (UID: \"393f69cd-ad10-42a7-81cf-ae42fcd3a8e9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgtfc" Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.176605 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/393f69cd-ad10-42a7-81cf-ae42fcd3a8e9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rgtfc\" (UID: \"393f69cd-ad10-42a7-81cf-ae42fcd3a8e9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgtfc" Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.176848 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/393f69cd-ad10-42a7-81cf-ae42fcd3a8e9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rgtfc\" (UID: \"393f69cd-ad10-42a7-81cf-ae42fcd3a8e9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgtfc" Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.277894 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/393f69cd-ad10-42a7-81cf-ae42fcd3a8e9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rgtfc\" (UID: \"393f69cd-ad10-42a7-81cf-ae42fcd3a8e9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgtfc" Oct 11 07:41:58 crc 
Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.277948 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/393f69cd-ad10-42a7-81cf-ae42fcd3a8e9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rgtfc\" (UID: \"393f69cd-ad10-42a7-81cf-ae42fcd3a8e9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgtfc"
Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.277972 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/393f69cd-ad10-42a7-81cf-ae42fcd3a8e9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rgtfc\" (UID: \"393f69cd-ad10-42a7-81cf-ae42fcd3a8e9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgtfc"
Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.277996 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/393f69cd-ad10-42a7-81cf-ae42fcd3a8e9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rgtfc\" (UID: \"393f69cd-ad10-42a7-81cf-ae42fcd3a8e9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgtfc"
Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.278019 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/393f69cd-ad10-42a7-81cf-ae42fcd3a8e9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rgtfc\" (UID: \"393f69cd-ad10-42a7-81cf-ae42fcd3a8e9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgtfc"
Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.278127 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/393f69cd-ad10-42a7-81cf-ae42fcd3a8e9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rgtfc\" (UID: \"393f69cd-ad10-42a7-81cf-ae42fcd3a8e9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgtfc"
Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.278216 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/393f69cd-ad10-42a7-81cf-ae42fcd3a8e9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rgtfc\" (UID: \"393f69cd-ad10-42a7-81cf-ae42fcd3a8e9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgtfc"
Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.279765 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/393f69cd-ad10-42a7-81cf-ae42fcd3a8e9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rgtfc\" (UID: \"393f69cd-ad10-42a7-81cf-ae42fcd3a8e9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgtfc"
Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.283941 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/393f69cd-ad10-42a7-81cf-ae42fcd3a8e9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rgtfc\" (UID: \"393f69cd-ad10-42a7-81cf-ae42fcd3a8e9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgtfc"
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/393f69cd-ad10-42a7-81cf-ae42fcd3a8e9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rgtfc\" (UID: \"393f69cd-ad10-42a7-81cf-ae42fcd3a8e9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgtfc" Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.385904 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgtfc" Oct 11 07:41:58 crc kubenswrapper[5016]: W1011 07:41:58.400891 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod393f69cd_ad10_42a7_81cf_ae42fcd3a8e9.slice/crio-44f5c907eb5e8ef75e0db579337a36de558e3999ab4412e9166ff46235dad3bc WatchSource:0}: Error finding container 44f5c907eb5e8ef75e0db579337a36de558e3999ab4412e9166ff46235dad3bc: Status 404 returned error can't find the container with id 44f5c907eb5e8ef75e0db579337a36de558e3999ab4412e9166ff46235dad3bc Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.704471 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgtfc" event={"ID":"393f69cd-ad10-42a7-81cf-ae42fcd3a8e9","Type":"ContainerStarted","Data":"b98d5a8ccfabf65d8c5ddca2acf1afb8524e2b681c9108ab86413a1d3c7cbe56"} Oct 11 07:41:58 crc kubenswrapper[5016]: I1011 07:41:58.704541 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgtfc" event={"ID":"393f69cd-ad10-42a7-81cf-ae42fcd3a8e9","Type":"ContainerStarted","Data":"44f5c907eb5e8ef75e0db579337a36de558e3999ab4412e9166ff46235dad3bc"} Oct 11 07:41:59 crc kubenswrapper[5016]: I1011 07:41:59.132569 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:41:59 crc kubenswrapper[5016]: I1011 07:41:59.132701 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:41:59 crc kubenswrapper[5016]: I1011 07:41:59.132810 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:41:59 crc kubenswrapper[5016]: E1011 07:41:59.132936 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:41:59 crc kubenswrapper[5016]: I1011 07:41:59.132999 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:41:59 crc kubenswrapper[5016]: E1011 07:41:59.133066 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:41:59 crc kubenswrapper[5016]: E1011 07:41:59.133272 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:41:59 crc kubenswrapper[5016]: E1011 07:41:59.133347 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:42:01 crc kubenswrapper[5016]: I1011 07:42:01.133081 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:42:01 crc kubenswrapper[5016]: I1011 07:42:01.133096 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:42:01 crc kubenswrapper[5016]: I1011 07:42:01.133184 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:42:01 crc kubenswrapper[5016]: E1011 07:42:01.133347 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:42:01 crc kubenswrapper[5016]: I1011 07:42:01.133739 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:42:01 crc kubenswrapper[5016]: E1011 07:42:01.133747 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:42:01 crc kubenswrapper[5016]: E1011 07:42:01.134042 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:42:01 crc kubenswrapper[5016]: E1011 07:42:01.134076 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:42:03 crc kubenswrapper[5016]: I1011 07:42:03.132422 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:42:03 crc kubenswrapper[5016]: I1011 07:42:03.132494 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:42:03 crc kubenswrapper[5016]: I1011 07:42:03.132529 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:42:03 crc kubenswrapper[5016]: I1011 07:42:03.132546 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:42:03 crc kubenswrapper[5016]: E1011 07:42:03.134271 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:42:03 crc kubenswrapper[5016]: E1011 07:42:03.134308 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:42:03 crc kubenswrapper[5016]: E1011 07:42:03.134367 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:42:03 crc kubenswrapper[5016]: E1011 07:42:03.134411 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:42:05 crc kubenswrapper[5016]: I1011 07:42:05.132964 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:42:05 crc kubenswrapper[5016]: I1011 07:42:05.133029 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:42:05 crc kubenswrapper[5016]: I1011 07:42:05.132977 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:42:05 crc kubenswrapper[5016]: E1011 07:42:05.133132 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:42:05 crc kubenswrapper[5016]: I1011 07:42:05.133183 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:42:05 crc kubenswrapper[5016]: E1011 07:42:05.133483 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:42:05 crc kubenswrapper[5016]: E1011 07:42:05.133548 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:42:05 crc kubenswrapper[5016]: E1011 07:42:05.133647 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:42:07 crc kubenswrapper[5016]: I1011 07:42:07.133350 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:42:07 crc kubenswrapper[5016]: I1011 07:42:07.133423 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:42:07 crc kubenswrapper[5016]: I1011 07:42:07.133558 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:42:07 crc kubenswrapper[5016]: I1011 07:42:07.134567 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:42:07 crc kubenswrapper[5016]: E1011 07:42:07.134742 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:42:07 crc kubenswrapper[5016]: E1011 07:42:07.135001 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:42:07 crc kubenswrapper[5016]: E1011 07:42:07.135019 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:42:07 crc kubenswrapper[5016]: E1011 07:42:07.135077 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:42:07 crc kubenswrapper[5016]: I1011 07:42:07.135311 5016 scope.go:117] "RemoveContainer" containerID="ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2" Oct 11 07:42:07 crc kubenswrapper[5016]: E1011 07:42:07.135711 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-79nv2_openshift-ovn-kubernetes(68e9f942-5043-4fc3-9133-b608e8cd4ac0)\"" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" Oct 11 07:42:09 crc kubenswrapper[5016]: I1011 07:42:09.132949 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:42:09 crc kubenswrapper[5016]: I1011 07:42:09.133022 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:42:09 crc kubenswrapper[5016]: E1011 07:42:09.133477 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:42:09 crc kubenswrapper[5016]: I1011 07:42:09.133131 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:42:09 crc kubenswrapper[5016]: E1011 07:42:09.133623 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:42:09 crc kubenswrapper[5016]: I1011 07:42:09.133105 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:42:09 crc kubenswrapper[5016]: E1011 07:42:09.133839 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:42:09 crc kubenswrapper[5016]: E1011 07:42:09.133947 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:42:11 crc kubenswrapper[5016]: I1011 07:42:11.133325 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:42:11 crc kubenswrapper[5016]: I1011 07:42:11.133453 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:42:11 crc kubenswrapper[5016]: I1011 07:42:11.133481 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:42:11 crc kubenswrapper[5016]: E1011 07:42:11.133542 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:42:11 crc kubenswrapper[5016]: I1011 07:42:11.133325 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:42:11 crc kubenswrapper[5016]: E1011 07:42:11.133675 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:42:11 crc kubenswrapper[5016]: E1011 07:42:11.133823 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:42:11 crc kubenswrapper[5016]: E1011 07:42:11.133955 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:42:12 crc kubenswrapper[5016]: I1011 07:42:12.755644 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lbbb2_48e55d9a-f690-40ae-ba16-e91c4d9d3a95/kube-multus/1.log" Oct 11 07:42:12 crc kubenswrapper[5016]: I1011 07:42:12.757565 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lbbb2_48e55d9a-f690-40ae-ba16-e91c4d9d3a95/kube-multus/0.log" Oct 11 07:42:12 crc kubenswrapper[5016]: I1011 07:42:12.757681 5016 generic.go:334] "Generic (PLEG): container finished" podID="48e55d9a-f690-40ae-ba16-e91c4d9d3a95" containerID="fdf8f1baa34989ef57e3b44aeb2d3bd578086e2a41bcee8e6cb2d6e6f689fb3e" exitCode=1 Oct 11 07:42:12 crc kubenswrapper[5016]: I1011 07:42:12.757749 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lbbb2" event={"ID":"48e55d9a-f690-40ae-ba16-e91c4d9d3a95","Type":"ContainerDied","Data":"fdf8f1baa34989ef57e3b44aeb2d3bd578086e2a41bcee8e6cb2d6e6f689fb3e"} Oct 11 07:42:12 crc kubenswrapper[5016]: I1011 07:42:12.757866 5016 scope.go:117] "RemoveContainer" containerID="39031035f83a73aae82a4aa182352670744e452ecd781abe055031070cfda428" Oct 11 07:42:12 crc kubenswrapper[5016]: I1011 07:42:12.758623 5016 scope.go:117] "RemoveContainer" containerID="fdf8f1baa34989ef57e3b44aeb2d3bd578086e2a41bcee8e6cb2d6e6f689fb3e" Oct 11 07:42:12 crc kubenswrapper[5016]: E1011 07:42:12.758889 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-lbbb2_openshift-multus(48e55d9a-f690-40ae-ba16-e91c4d9d3a95)\"" pod="openshift-multus/multus-lbbb2" podUID="48e55d9a-f690-40ae-ba16-e91c4d9d3a95" Oct 11 07:42:12 crc kubenswrapper[5016]: I1011 07:42:12.782059 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rgtfc" podStartSLOduration=98.782036617 
Oct 11 07:42:13 crc kubenswrapper[5016]: E1011 07:42:13.136075 5016 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Oct 11 07:42:13 crc kubenswrapper[5016]: I1011 07:42:13.136321 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 11 07:42:13 crc kubenswrapper[5016]: I1011 07:42:13.136635 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 11 07:42:13 crc kubenswrapper[5016]: I1011 07:42:13.136396 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 11 07:42:13 crc kubenswrapper[5016]: E1011 07:42:13.141002 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 11 07:42:13 crc kubenswrapper[5016]: I1011 07:42:13.141036 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg"
Oct 11 07:42:13 crc kubenswrapper[5016]: E1011 07:42:13.141146 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 11 07:42:13 crc kubenswrapper[5016]: E1011 07:42:13.141275 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b"
Oct 11 07:42:13 crc kubenswrapper[5016]: E1011 07:42:13.141346 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 11 07:42:13 crc kubenswrapper[5016]: E1011 07:42:13.271268 5016 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
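Every NetworkReady=false error in this section has the same root cause: no network provider has written a CNI configuration into /etc/kubernetes/cni/net.d/ yet, because the provider (ovn-kubernetes, via the crash-looping ovnkube-node pod) has not come up; the node stays NotReady and pods needing pod networking cannot get sandboxes until a *.conf or *.conflist file appears there. For illustration only, this sketch writes a conflist of the minimal shape the runtime looks for; the file name and the loopback plugin are placeholders, not the config ovn-kubernetes would actually install, and dropping such a file by hand on a live cluster is not a fix:

    package main

    import "os"

    // Illustrative only: the shape of a CNI .conflist that would satisfy the
    // "no CNI configuration file" check. On this node the real config is
    // written by the network provider once ovnkube-node starts successfully.
    const conflist = `{
      "cniVersion": "0.4.0",
      "name": "example-net",
      "plugins": [
        { "type": "loopback" }
      ]
    }`

    func main() {
        // Path taken from the error message in the log above.
        err := os.WriteFile("/etc/kubernetes/cni/net.d/99-example.conflist", []byte(conflist), 0o644)
        if err != nil {
            panic(err)
        }
    }

Once a valid config appears, the runtime reports NetworkReady=true, the Ready condition flips, and the queued sandbox creations for the network-check and metrics pods can proceed.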
Oct 11 07:42:13 crc kubenswrapper[5016]: I1011 07:42:13.763896 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lbbb2_48e55d9a-f690-40ae-ba16-e91c4d9d3a95/kube-multus/1.log" Oct 11 07:42:15 crc kubenswrapper[5016]: I1011 07:42:15.132508 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:42:15 crc kubenswrapper[5016]: I1011 07:42:15.132594 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:42:15 crc kubenswrapper[5016]: E1011 07:42:15.132623 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:42:15 crc kubenswrapper[5016]: I1011 07:42:15.132705 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:42:15 crc kubenswrapper[5016]: E1011 07:42:15.132836 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:42:15 crc kubenswrapper[5016]: I1011 07:42:15.132869 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:42:15 crc kubenswrapper[5016]: E1011 07:42:15.132992 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:42:15 crc kubenswrapper[5016]: E1011 07:42:15.133138 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:42:17 crc kubenswrapper[5016]: I1011 07:42:17.133415 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:42:17 crc kubenswrapper[5016]: I1011 07:42:17.133466 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:42:17 crc kubenswrapper[5016]: I1011 07:42:17.133475 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:42:17 crc kubenswrapper[5016]: I1011 07:42:17.134740 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:42:17 crc kubenswrapper[5016]: E1011 07:42:17.134948 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:42:17 crc kubenswrapper[5016]: E1011 07:42:17.135160 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:42:17 crc kubenswrapper[5016]: E1011 07:42:17.135996 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:42:17 crc kubenswrapper[5016]: E1011 07:42:17.136396 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:42:18 crc kubenswrapper[5016]: E1011 07:42:18.272841 5016 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 11 07:42:19 crc kubenswrapper[5016]: I1011 07:42:19.133037 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:42:19 crc kubenswrapper[5016]: E1011 07:42:19.133421 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:42:19 crc kubenswrapper[5016]: I1011 07:42:19.133709 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:42:19 crc kubenswrapper[5016]: I1011 07:42:19.133882 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:42:19 crc kubenswrapper[5016]: I1011 07:42:19.133721 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:42:19 crc kubenswrapper[5016]: E1011 07:42:19.134008 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:42:19 crc kubenswrapper[5016]: E1011 07:42:19.134616 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:42:19 crc kubenswrapper[5016]: E1011 07:42:19.134840 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:42:20 crc kubenswrapper[5016]: I1011 07:42:20.133351 5016 scope.go:117] "RemoveContainer" containerID="ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2" Oct 11 07:42:20 crc kubenswrapper[5016]: I1011 07:42:20.792748 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-79nv2_68e9f942-5043-4fc3-9133-b608e8cd4ac0/ovnkube-controller/3.log" Oct 11 07:42:20 crc kubenswrapper[5016]: I1011 07:42:20.796097 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" event={"ID":"68e9f942-5043-4fc3-9133-b608e8cd4ac0","Type":"ContainerStarted","Data":"c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a"} Oct 11 07:42:20 crc kubenswrapper[5016]: I1011 07:42:20.796742 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:42:20 crc kubenswrapper[5016]: I1011 07:42:20.826937 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" podStartSLOduration=106.82692218 podStartE2EDuration="1m46.82692218s" podCreationTimestamp="2025-10-11 07:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:20.826478088 +0000 UTC m=+128.726934034" watchObservedRunningTime="2025-10-11 07:42:20.82692218 +0000 UTC m=+128.727378116" Oct 11 07:42:21 crc kubenswrapper[5016]: I1011 07:42:21.007927 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-459lg"] Oct 11 07:42:21 crc kubenswrapper[5016]: I1011 07:42:21.008049 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:42:21 crc kubenswrapper[5016]: E1011 07:42:21.008160 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:42:21 crc kubenswrapper[5016]: I1011 07:42:21.132578 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:42:21 crc kubenswrapper[5016]: I1011 07:42:21.133221 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:42:21 crc kubenswrapper[5016]: E1011 07:42:21.133252 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:42:21 crc kubenswrapper[5016]: E1011 07:42:21.133406 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:42:21 crc kubenswrapper[5016]: I1011 07:42:21.136526 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:42:21 crc kubenswrapper[5016]: E1011 07:42:21.136682 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:42:23 crc kubenswrapper[5016]: I1011 07:42:23.132860 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:42:23 crc kubenswrapper[5016]: I1011 07:42:23.132949 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:42:23 crc kubenswrapper[5016]: E1011 07:42:23.133080 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:42:23 crc kubenswrapper[5016]: I1011 07:42:23.133138 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:42:23 crc kubenswrapper[5016]: I1011 07:42:23.133162 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:42:23 crc kubenswrapper[5016]: E1011 07:42:23.135180 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:42:23 crc kubenswrapper[5016]: E1011 07:42:23.135279 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:42:23 crc kubenswrapper[5016]: E1011 07:42:23.135396 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:42:23 crc kubenswrapper[5016]: E1011 07:42:23.274275 5016 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 11 07:42:25 crc kubenswrapper[5016]: I1011 07:42:25.133001 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:42:25 crc kubenswrapper[5016]: I1011 07:42:25.133056 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:42:25 crc kubenswrapper[5016]: I1011 07:42:25.133004 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:42:25 crc kubenswrapper[5016]: I1011 07:42:25.132995 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:42:25 crc kubenswrapper[5016]: E1011 07:42:25.133146 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:42:25 crc kubenswrapper[5016]: E1011 07:42:25.133312 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:42:25 crc kubenswrapper[5016]: E1011 07:42:25.133445 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:42:25 crc kubenswrapper[5016]: E1011 07:42:25.133515 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:42:27 crc kubenswrapper[5016]: I1011 07:42:27.132383 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:42:27 crc kubenswrapper[5016]: E1011 07:42:27.133512 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:42:27 crc kubenswrapper[5016]: I1011 07:42:27.132449 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:42:27 crc kubenswrapper[5016]: E1011 07:42:27.133925 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:42:27 crc kubenswrapper[5016]: I1011 07:42:27.132414 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:42:27 crc kubenswrapper[5016]: I1011 07:42:27.132480 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:42:27 crc kubenswrapper[5016]: E1011 07:42:27.134383 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:42:27 crc kubenswrapper[5016]: E1011 07:42:27.134527 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:42:28 crc kubenswrapper[5016]: I1011 07:42:28.133146 5016 scope.go:117] "RemoveContainer" containerID="fdf8f1baa34989ef57e3b44aeb2d3bd578086e2a41bcee8e6cb2d6e6f689fb3e" Oct 11 07:42:28 crc kubenswrapper[5016]: E1011 07:42:28.275860 5016 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 11 07:42:28 crc kubenswrapper[5016]: I1011 07:42:28.822523 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lbbb2_48e55d9a-f690-40ae-ba16-e91c4d9d3a95/kube-multus/1.log" Oct 11 07:42:28 crc kubenswrapper[5016]: I1011 07:42:28.822605 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lbbb2" event={"ID":"48e55d9a-f690-40ae-ba16-e91c4d9d3a95","Type":"ContainerStarted","Data":"f3a05795696442d45f03a1ea37c6e6ba23599cdc17efa338b7d62426d4f98771"} Oct 11 07:42:29 crc kubenswrapper[5016]: I1011 07:42:29.133061 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:42:29 crc kubenswrapper[5016]: I1011 07:42:29.133112 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:42:29 crc kubenswrapper[5016]: I1011 07:42:29.133186 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:42:29 crc kubenswrapper[5016]: E1011 07:42:29.133193 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:42:29 crc kubenswrapper[5016]: I1011 07:42:29.133202 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:42:29 crc kubenswrapper[5016]: E1011 07:42:29.133277 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:42:29 crc kubenswrapper[5016]: E1011 07:42:29.133330 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:42:29 crc kubenswrapper[5016]: E1011 07:42:29.133402 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:42:31 crc kubenswrapper[5016]: I1011 07:42:31.132784 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:42:31 crc kubenswrapper[5016]: I1011 07:42:31.132800 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:42:31 crc kubenswrapper[5016]: I1011 07:42:31.132920 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:42:31 crc kubenswrapper[5016]: I1011 07:42:31.133852 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:42:31 crc kubenswrapper[5016]: E1011 07:42:31.134027 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:42:31 crc kubenswrapper[5016]: E1011 07:42:31.134194 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:42:31 crc kubenswrapper[5016]: E1011 07:42:31.134341 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:42:31 crc kubenswrapper[5016]: E1011 07:42:31.134450 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:42:33 crc kubenswrapper[5016]: I1011 07:42:33.132594 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:42:33 crc kubenswrapper[5016]: I1011 07:42:33.132597 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:42:33 crc kubenswrapper[5016]: I1011 07:42:33.132744 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:42:33 crc kubenswrapper[5016]: E1011 07:42:33.134523 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-459lg" podUID="9ceaf34e-81b3-457f-8f03-d807f795392b" Oct 11 07:42:33 crc kubenswrapper[5016]: I1011 07:42:33.134558 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:42:33 crc kubenswrapper[5016]: E1011 07:42:33.134719 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 11 07:42:33 crc kubenswrapper[5016]: E1011 07:42:33.134919 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 11 07:42:33 crc kubenswrapper[5016]: E1011 07:42:33.135052 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 11 07:42:35 crc kubenswrapper[5016]: I1011 07:42:35.132858 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:42:35 crc kubenswrapper[5016]: I1011 07:42:35.132890 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:42:35 crc kubenswrapper[5016]: I1011 07:42:35.132897 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:42:35 crc kubenswrapper[5016]: I1011 07:42:35.133718 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:42:35 crc kubenswrapper[5016]: I1011 07:42:35.135523 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Oct 11 07:42:35 crc kubenswrapper[5016]: I1011 07:42:35.135996 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Oct 11 07:42:35 crc kubenswrapper[5016]: I1011 07:42:35.136185 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Oct 11 07:42:35 crc kubenswrapper[5016]: I1011 07:42:35.136267 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Oct 11 07:42:35 crc kubenswrapper[5016]: I1011 07:42:35.137319 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Oct 11 07:42:35 crc kubenswrapper[5016]: I1011 07:42:35.138951 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Oct 11 07:42:37 crc kubenswrapper[5016]: I1011 07:42:37.122005 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 07:42:37 crc kubenswrapper[5016]: I1011 07:42:37.122356 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.781330 5016 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.865029 5016 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-apiserver/apiserver-76f77b778f-jp4qx"] Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.866345 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.869177 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-4vcjt"] Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.870124 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4vcjt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.870825 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-m5nhn"] Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.871417 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-m5nhn" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.872911 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.873333 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.873451 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.873472 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.876018 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gwp6t"] Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.876951 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gwp6t" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.877941 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.878188 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.878614 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.878634 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.879002 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.879222 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.879323 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.879451 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.879739 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.879887 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.881009 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb"] Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.881979 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.883452 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcvc"] Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.884168 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcvc" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.885807 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-znwnv"] Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.886493 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-znwnv" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.887030 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.888545 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jxdrh"] Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.889632 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jxdrh" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.890519 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr"] Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.891313 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.892986 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fsx9v"] Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.893625 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-fsx9v" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.894147 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.894155 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.894331 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.894440 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.894581 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.894614 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.894790 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.894848 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.894905 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.894927 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.895022 5016 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.895090 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.895125 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.895191 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.895239 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.895303 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.895398 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.895498 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.895524 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.895571 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.895788 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.895976 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.896060 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.896068 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.896260 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.905758 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-cz2f7"] Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.906906 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-cz2f7" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.907152 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.907429 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.907917 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.908117 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.910396 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vz5gw"] Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.912397 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vz5gw" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.913007 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.913254 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.913448 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.913560 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.914812 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.928807 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.942535 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.943544 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.943736 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.944041 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.944403 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.944773 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.944909 5016 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-console-operator/console-operator-58897d9998-mcbqj"] Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.945216 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.945245 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.946600 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mcbqj" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.947703 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.950930 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sq9kp"] Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.951627 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.952893 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.953131 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.955740 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dqhld"] Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.956345 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xsjnn"] Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.956796 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xsjnn" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.956828 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.956868 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.957241 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.957355 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dqhld" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.957454 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.957612 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.958020 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.958072 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.958163 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.958274 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.958297 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.958479 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.958529 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.958587 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.958752 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.958965 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.959002 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.959088 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.959162 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.959249 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.965996 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.968046 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-vmvvh"] Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.968744 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-vmvvh" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.969902 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t5tt6"] Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.970032 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.970549 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.979336 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.979391 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.979585 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.979676 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.979745 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.979794 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-t5tt6" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.980022 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.980167 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.980223 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.980262 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.980354 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.980432 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.980639 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5bvg"] Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.981517 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qj85m"] Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.981980 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-52nk2"] Oct 11 
07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.982054 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5bvg" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.982161 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qj85m" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.988544 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.989142 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.989315 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.989481 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.989597 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.989751 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.991358 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Oct 11 07:42:38 crc kubenswrapper[5016]: I1011 07:42:38.992369 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:38.998727 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6mhg9"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:38.999397 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52nk2" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.000214 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jp4qx"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.000249 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-msb9j"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.000307 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.011916 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.011920 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-msb9j" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.014808 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dh44f"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.015137 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.015616 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hvh49"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.015969 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dh44f" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.016179 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sv2tw"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.016644 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6n96l"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.016232 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hvh49" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.016809 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sv2tw" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.017377 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tq4ks"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.017647 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6n96l" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.017912 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-tq4ks" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.018103 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-m5nhn"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.018955 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zftpc"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.019505 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zftpc" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.021262 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b941f947-2402-495b-8808-0f91ab9433e0-trusted-ca\") pod \"console-operator-58897d9998-mcbqj\" (UID: \"b941f947-2402-495b-8808-0f91ab9433e0\") " pod="openshift-console-operator/console-operator-58897d9998-mcbqj" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.021300 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/394f8f2b-fe85-414f-ab93-670b5291ac1b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4jcvc\" (UID: \"394f8f2b-fe85-414f-ab93-670b5291ac1b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcvc" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.021328 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-etcd-serving-ca\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.021349 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a8034ba-7018-481a-862e-8f21457cc04f-serving-cert\") pod \"openshift-config-operator-7777fb866f-vz5gw\" (UID: \"2a8034ba-7018-481a-862e-8f21457cc04f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vz5gw" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.021368 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c1d5e1-166e-49ee-8134-5ba60fceaf56-serving-cert\") pod \"authentication-operator-69f744f599-fsx9v\" (UID: \"b2c1d5e1-166e-49ee-8134-5ba60fceaf56\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fsx9v" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.021389 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znfrd\" (UniqueName: \"kubernetes.io/projected/394f8f2b-fe85-414f-ab93-670b5291ac1b-kube-api-access-znfrd\") pod \"openshift-apiserver-operator-796bbdcf4f-4jcvc\" (UID: \"394f8f2b-fe85-414f-ab93-670b5291ac1b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcvc" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.021410 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b5191ad1-f211-45d1-a108-8d45b9d427f6-etcd-ca\") pod \"etcd-operator-b45778765-cz2f7\" (UID: \"b5191ad1-f211-45d1-a108-8d45b9d427f6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cz2f7" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.021441 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b3f53591-5cb2-488a-b327-c41c05c5845f-machine-approver-tls\") pod \"machine-approver-56656f9798-4vcjt\" (UID: 
\"b3f53591-5cb2-488a-b327-c41c05c5845f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4vcjt" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.021462 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4b4fc34c-84fa-4a44-a585-61d852838755-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bhwkr\" (UID: \"4b4fc34c-84fa-4a44-a585-61d852838755\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.021481 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-image-import-ca\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.021501 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-node-pullsecrets\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.021522 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/63c09395-8cfa-4337-8323-0a90e333579a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-m5nhn\" (UID: \"63c09395-8cfa-4337-8323-0a90e333579a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5nhn" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.021543 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/699f1f0c-fc1d-4599-97a8-a135238977b4-config\") pod \"route-controller-manager-6576b87f9c-sqrgb\" (UID: \"699f1f0c-fc1d-4599-97a8-a135238977b4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.021562 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4b4fc34c-84fa-4a44-a585-61d852838755-etcd-client\") pod \"apiserver-7bbb656c7d-bhwkr\" (UID: \"4b4fc34c-84fa-4a44-a585-61d852838755\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.021584 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57qfq\" (UniqueName: \"kubernetes.io/projected/4b4fc34c-84fa-4a44-a585-61d852838755-kube-api-access-57qfq\") pod \"apiserver-7bbb656c7d-bhwkr\" (UID: \"4b4fc34c-84fa-4a44-a585-61d852838755\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.021603 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-serving-cert\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc 
kubenswrapper[5016]: I1011 07:42:39.021624 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4b4fc34c-84fa-4a44-a585-61d852838755-audit-dir\") pod \"apiserver-7bbb656c7d-bhwkr\" (UID: \"4b4fc34c-84fa-4a44-a585-61d852838755\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.021646 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4b4fc34c-84fa-4a44-a585-61d852838755-encryption-config\") pod \"apiserver-7bbb656c7d-bhwkr\" (UID: \"4b4fc34c-84fa-4a44-a585-61d852838755\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.021716 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/394f8f2b-fe85-414f-ab93-670b5291ac1b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4jcvc\" (UID: \"394f8f2b-fe85-414f-ab93-670b5291ac1b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcvc" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.021741 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3f53591-5cb2-488a-b327-c41c05c5845f-config\") pod \"machine-approver-56656f9798-4vcjt\" (UID: \"b3f53591-5cb2-488a-b327-c41c05c5845f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4vcjt" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.021763 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-config\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.021787 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8424bee6-8168-4c9f-b70e-5523e1990bcd-client-ca\") pod \"controller-manager-879f6c89f-gwp6t\" (UID: \"8424bee6-8168-4c9f-b70e-5523e1990bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwp6t" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.021808 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b941f947-2402-495b-8808-0f91ab9433e0-serving-cert\") pod \"console-operator-58897d9998-mcbqj\" (UID: \"b941f947-2402-495b-8808-0f91ab9433e0\") " pod="openshift-console-operator/console-operator-58897d9998-mcbqj" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.021828 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f1531c1-e77c-4e97-a216-2809a7566070-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jxdrh\" (UID: \"0f1531c1-e77c-4e97-a216-2809a7566070\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jxdrh" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.021867 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-etcd-client\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.021886 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8424bee6-8168-4c9f-b70e-5523e1990bcd-serving-cert\") pod \"controller-manager-879f6c89f-gwp6t\" (UID: \"8424bee6-8168-4c9f-b70e-5523e1990bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwp6t" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.021932 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2a8034ba-7018-481a-862e-8f21457cc04f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vz5gw\" (UID: \"2a8034ba-7018-481a-862e-8f21457cc04f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vz5gw" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.021954 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4b4fc34c-84fa-4a44-a585-61d852838755-audit-policies\") pod \"apiserver-7bbb656c7d-bhwkr\" (UID: \"4b4fc34c-84fa-4a44-a585-61d852838755\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.021975 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c1d5e1-166e-49ee-8134-5ba60fceaf56-config\") pod \"authentication-operator-69f744f599-fsx9v\" (UID: \"b2c1d5e1-166e-49ee-8134-5ba60fceaf56\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fsx9v" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.021996 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2c1d5e1-166e-49ee-8134-5ba60fceaf56-service-ca-bundle\") pod \"authentication-operator-69f744f599-fsx9v\" (UID: \"b2c1d5e1-166e-49ee-8134-5ba60fceaf56\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fsx9v" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022014 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgwxf\" (UniqueName: \"kubernetes.io/projected/0f1531c1-e77c-4e97-a216-2809a7566070-kube-api-access-rgwxf\") pod \"cluster-samples-operator-665b6dd947-jxdrh\" (UID: \"0f1531c1-e77c-4e97-a216-2809a7566070\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jxdrh" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022035 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/699f1f0c-fc1d-4599-97a8-a135238977b4-serving-cert\") pod \"route-controller-manager-6576b87f9c-sqrgb\" (UID: \"699f1f0c-fc1d-4599-97a8-a135238977b4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022054 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8424bee6-8168-4c9f-b70e-5523e1990bcd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-gwp6t\" (UID: \"8424bee6-8168-4c9f-b70e-5523e1990bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwp6t" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022072 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzbdj\" (UniqueName: \"kubernetes.io/projected/8424bee6-8168-4c9f-b70e-5523e1990bcd-kube-api-access-lzbdj\") pod \"controller-manager-879f6c89f-gwp6t\" (UID: \"8424bee6-8168-4c9f-b70e-5523e1990bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwp6t" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022091 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8424bee6-8168-4c9f-b70e-5523e1990bcd-config\") pod \"controller-manager-879f6c89f-gwp6t\" (UID: \"8424bee6-8168-4c9f-b70e-5523e1990bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwp6t" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022111 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b5191ad1-f211-45d1-a108-8d45b9d427f6-etcd-service-ca\") pod \"etcd-operator-b45778765-cz2f7\" (UID: \"b5191ad1-f211-45d1-a108-8d45b9d427f6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cz2f7" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022129 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptwc8\" (UniqueName: \"kubernetes.io/projected/63c09395-8cfa-4337-8323-0a90e333579a-kube-api-access-ptwc8\") pod \"machine-api-operator-5694c8668f-m5nhn\" (UID: \"63c09395-8cfa-4337-8323-0a90e333579a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5nhn" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022147 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5191ad1-f211-45d1-a108-8d45b9d427f6-serving-cert\") pod \"etcd-operator-b45778765-cz2f7\" (UID: \"b5191ad1-f211-45d1-a108-8d45b9d427f6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cz2f7" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022168 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b4fc34c-84fa-4a44-a585-61d852838755-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bhwkr\" (UID: \"4b4fc34c-84fa-4a44-a585-61d852838755\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022199 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2c1d5e1-166e-49ee-8134-5ba60fceaf56-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fsx9v\" (UID: \"b2c1d5e1-166e-49ee-8134-5ba60fceaf56\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fsx9v" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022219 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b941f947-2402-495b-8808-0f91ab9433e0-config\") pod \"console-operator-58897d9998-mcbqj\" (UID: \"b941f947-2402-495b-8808-0f91ab9433e0\") " pod="openshift-console-operator/console-operator-58897d9998-mcbqj" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022240 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5l9k\" (UniqueName: \"kubernetes.io/projected/699f1f0c-fc1d-4599-97a8-a135238977b4-kube-api-access-l5l9k\") pod \"route-controller-manager-6576b87f9c-sqrgb\" (UID: \"699f1f0c-fc1d-4599-97a8-a135238977b4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022259 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b5191ad1-f211-45d1-a108-8d45b9d427f6-etcd-client\") pod \"etcd-operator-b45778765-cz2f7\" (UID: \"b5191ad1-f211-45d1-a108-8d45b9d427f6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cz2f7" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022282 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c09395-8cfa-4337-8323-0a90e333579a-config\") pod \"machine-api-operator-5694c8668f-m5nhn\" (UID: \"63c09395-8cfa-4337-8323-0a90e333579a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5nhn" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022304 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc4l8\" (UniqueName: \"kubernetes.io/projected/b5191ad1-f211-45d1-a108-8d45b9d427f6-kube-api-access-gc4l8\") pod \"etcd-operator-b45778765-cz2f7\" (UID: \"b5191ad1-f211-45d1-a108-8d45b9d427f6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cz2f7" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022341 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/63c09395-8cfa-4337-8323-0a90e333579a-images\") pod \"machine-api-operator-5694c8668f-m5nhn\" (UID: \"63c09395-8cfa-4337-8323-0a90e333579a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5nhn" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022362 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b3f53591-5cb2-488a-b327-c41c05c5845f-auth-proxy-config\") pod \"machine-approver-56656f9798-4vcjt\" (UID: \"b3f53591-5cb2-488a-b327-c41c05c5845f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4vcjt" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022381 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022403 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v4s4\" (UniqueName: 
\"kubernetes.io/projected/b3f53591-5cb2-488a-b327-c41c05c5845f-kube-api-access-6v4s4\") pod \"machine-approver-56656f9798-4vcjt\" (UID: \"b3f53591-5cb2-488a-b327-c41c05c5845f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4vcjt" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022423 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hbgv\" (UniqueName: \"kubernetes.io/projected/fee3401d-bf88-49cd-b228-a4e89c6dd40e-kube-api-access-9hbgv\") pod \"downloads-7954f5f757-znwnv\" (UID: \"fee3401d-bf88-49cd-b228-a4e89c6dd40e\") " pod="openshift-console/downloads-7954f5f757-znwnv" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022444 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/699f1f0c-fc1d-4599-97a8-a135238977b4-client-ca\") pod \"route-controller-manager-6576b87f9c-sqrgb\" (UID: \"699f1f0c-fc1d-4599-97a8-a135238977b4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022464 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b4fc34c-84fa-4a44-a585-61d852838755-serving-cert\") pod \"apiserver-7bbb656c7d-bhwkr\" (UID: \"4b4fc34c-84fa-4a44-a585-61d852838755\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022483 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-encryption-config\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022508 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lc5q\" (UniqueName: \"kubernetes.io/projected/2a8034ba-7018-481a-862e-8f21457cc04f-kube-api-access-4lc5q\") pod \"openshift-config-operator-7777fb866f-vz5gw\" (UID: \"2a8034ba-7018-481a-862e-8f21457cc04f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vz5gw" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022529 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98hf7\" (UniqueName: \"kubernetes.io/projected/b2c1d5e1-166e-49ee-8134-5ba60fceaf56-kube-api-access-98hf7\") pod \"authentication-operator-69f744f599-fsx9v\" (UID: \"b2c1d5e1-166e-49ee-8134-5ba60fceaf56\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fsx9v" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022550 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-audit\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022577 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2bd9\" (UniqueName: 
\"kubernetes.io/projected/b941f947-2402-495b-8808-0f91ab9433e0-kube-api-access-b2bd9\") pod \"console-operator-58897d9998-mcbqj\" (UID: \"b941f947-2402-495b-8808-0f91ab9433e0\") " pod="openshift-console-operator/console-operator-58897d9998-mcbqj" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022597 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5191ad1-f211-45d1-a108-8d45b9d427f6-config\") pod \"etcd-operator-b45778765-cz2f7\" (UID: \"b5191ad1-f211-45d1-a108-8d45b9d427f6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cz2f7" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022618 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-audit-dir\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.022640 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-659tq\" (UniqueName: \"kubernetes.io/projected/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-kube-api-access-659tq\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.023884 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8p9j"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.024578 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psqvq"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.024986 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5c8fg"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.025373 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z44tx"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.025451 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psqvq" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.025902 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8p9j" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.026016 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z44tx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.026023 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5c8fg" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.026223 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5bblf"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.027013 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5bblf" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.027338 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-twpkv"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.027844 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-twpkv" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.030133 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.030204 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k5jkz"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.030712 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fsx9v"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.030736 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcvc"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.030805 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k5jkz" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.032269 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336130-mtlhx"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.033043 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336130-mtlhx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.033747 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-mn4hd"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.034263 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-mn4hd" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.036534 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-45lst"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.037212 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-45lst" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.037217 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-b65rs"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.039500 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-b65rs" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.039777 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jxdrh"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.044122 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.044169 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gwp6t"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.044181 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mcbqj"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.046018 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-msb9j"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.048893 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-cz2f7"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.048944 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-52nk2"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.048955 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vz5gw"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.052041 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.052783 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.061369 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xsjnn"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.068922 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hvh49"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.069876 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.073556 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t5tt6"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.076318 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sq9kp"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.079333 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5c8fg"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.090267 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.091452 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dqhld"] Oct 11 07:42:39 
crc kubenswrapper[5016]: I1011 07:42:39.097807 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8p9j"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.100117 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-znwnv"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.102482 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zftpc"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.103875 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sv2tw"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.104995 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dh44f"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.106108 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qj85m"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.107234 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-vmvvh"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.108378 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6mhg9"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.108647 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.109889 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6n96l"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.111129 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5bvg"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.112134 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-twpkv"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.113272 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k5jkz"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.114433 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-fj8bb"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.115116 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fj8bb" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.115949 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tq4ks"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.117072 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5bblf"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.118830 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-g74hb"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.120255 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psqvq"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.120386 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-g74hb" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.120578 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z44tx"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.121772 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-g74hb"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.123384 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336130-mtlhx"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.124667 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-45lst"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.124912 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a9154515-364c-477a-8471-cf3d40b138b2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dqhld\" (UID: \"a9154515-364c-477a-8471-cf3d40b138b2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dqhld" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.124954 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-serving-cert\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.125024 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cebf51e6-36da-416a-9f26-d312d6118895-srv-cert\") pod \"catalog-operator-68c6474976-g8p9j\" (UID: \"cebf51e6-36da-416a-9f26-d312d6118895\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8p9j" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.125055 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cebf51e6-36da-416a-9f26-d312d6118895-profile-collector-cert\") pod \"catalog-operator-68c6474976-g8p9j\" (UID: \"cebf51e6-36da-416a-9f26-d312d6118895\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8p9j" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.125080 5016 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c4a84d0a-86cb-46a4-9ef9-d2d9b2e712d5-srv-cert\") pod \"olm-operator-6b444d44fb-sv2tw\" (UID: \"c4a84d0a-86cb-46a4-9ef9-d2d9b2e712d5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sv2tw" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.125105 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4b4fc34c-84fa-4a44-a585-61d852838755-audit-dir\") pod \"apiserver-7bbb656c7d-bhwkr\" (UID: \"4b4fc34c-84fa-4a44-a585-61d852838755\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.125128 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfsxn\" (UniqueName: \"kubernetes.io/projected/a2fca8b5-8ccb-4100-8570-82b07bdae3ee-kube-api-access-lfsxn\") pod \"collect-profiles-29336130-mtlhx\" (UID: \"a2fca8b5-8ccb-4100-8570-82b07bdae3ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336130-mtlhx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.125153 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4b4fc34c-84fa-4a44-a585-61d852838755-encryption-config\") pod \"apiserver-7bbb656c7d-bhwkr\" (UID: \"4b4fc34c-84fa-4a44-a585-61d852838755\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.125176 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/394f8f2b-fe85-414f-ab93-670b5291ac1b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4jcvc\" (UID: \"394f8f2b-fe85-414f-ab93-670b5291ac1b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcvc" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.125213 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-config\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.125238 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3f53591-5cb2-488a-b327-c41c05c5845f-config\") pod \"machine-approver-56656f9798-4vcjt\" (UID: \"b3f53591-5cb2-488a-b327-c41c05c5845f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4vcjt" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.125262 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8424bee6-8168-4c9f-b70e-5523e1990bcd-client-ca\") pod \"controller-manager-879f6c89f-gwp6t\" (UID: \"8424bee6-8168-4c9f-b70e-5523e1990bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwp6t" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.125248 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4b4fc34c-84fa-4a44-a585-61d852838755-audit-dir\") pod \"apiserver-7bbb656c7d-bhwkr\" (UID: \"4b4fc34c-84fa-4a44-a585-61d852838755\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.125284 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/76f2e3c8-c16d-4a3e-85d9-25cc30605ea0-stats-auth\") pod \"router-default-5444994796-mn4hd\" (UID: \"76f2e3c8-c16d-4a3e-85d9-25cc30605ea0\") " pod="openshift-ingress/router-default-5444994796-mn4hd" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.125365 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsl2b\" (UniqueName: \"kubernetes.io/projected/76f2e3c8-c16d-4a3e-85d9-25cc30605ea0-kube-api-access-xsl2b\") pod \"router-default-5444994796-mn4hd\" (UID: \"76f2e3c8-c16d-4a3e-85d9-25cc30605ea0\") " pod="openshift-ingress/router-default-5444994796-mn4hd" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.125437 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b941f947-2402-495b-8808-0f91ab9433e0-serving-cert\") pod \"console-operator-58897d9998-mcbqj\" (UID: \"b941f947-2402-495b-8808-0f91ab9433e0\") " pod="openshift-console-operator/console-operator-58897d9998-mcbqj" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.125511 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-etcd-client\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.125546 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f1531c1-e77c-4e97-a216-2809a7566070-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jxdrh\" (UID: \"0f1531c1-e77c-4e97-a216-2809a7566070\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jxdrh" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.125593 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2fca8b5-8ccb-4100-8570-82b07bdae3ee-config-volume\") pod \"collect-profiles-29336130-mtlhx\" (UID: \"a2fca8b5-8ccb-4100-8570-82b07bdae3ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336130-mtlhx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.125620 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/87501447-4d19-45a7-bb18-636d9cec793e-proxy-tls\") pod \"machine-config-controller-84d6567774-52nk2\" (UID: \"87501447-4d19-45a7-bb18-636d9cec793e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52nk2" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.125682 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8424bee6-8168-4c9f-b70e-5523e1990bcd-serving-cert\") pod \"controller-manager-879f6c89f-gwp6t\" (UID: \"8424bee6-8168-4c9f-b70e-5523e1990bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwp6t" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.125715 5016 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/76f2e3c8-c16d-4a3e-85d9-25cc30605ea0-metrics-certs\") pod \"router-default-5444994796-mn4hd\" (UID: \"76f2e3c8-c16d-4a3e-85d9-25cc30605ea0\") " pod="openshift-ingress/router-default-5444994796-mn4hd" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.125765 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5pqb\" (UniqueName: \"kubernetes.io/projected/c4a84d0a-86cb-46a4-9ef9-d2d9b2e712d5-kube-api-access-d5pqb\") pod \"olm-operator-6b444d44fb-sv2tw\" (UID: \"c4a84d0a-86cb-46a4-9ef9-d2d9b2e712d5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sv2tw" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.125796 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/76eba8ae-c2d4-49ac-9cb1-4e7256c144ff-apiservice-cert\") pod \"packageserver-d55dfcdfc-psqvq\" (UID: \"76eba8ae-c2d4-49ac-9cb1-4e7256c144ff\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psqvq" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.125858 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4b4fc34c-84fa-4a44-a585-61d852838755-audit-policies\") pod \"apiserver-7bbb656c7d-bhwkr\" (UID: \"4b4fc34c-84fa-4a44-a585-61d852838755\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.125920 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c1d5e1-166e-49ee-8134-5ba60fceaf56-config\") pod \"authentication-operator-69f744f599-fsx9v\" (UID: \"b2c1d5e1-166e-49ee-8134-5ba60fceaf56\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fsx9v" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.125956 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/87501447-4d19-45a7-bb18-636d9cec793e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-52nk2\" (UID: \"87501447-4d19-45a7-bb18-636d9cec793e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52nk2" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.125993 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/394f8f2b-fe85-414f-ab93-670b5291ac1b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4jcvc\" (UID: \"394f8f2b-fe85-414f-ab93-670b5291ac1b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcvc" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.126020 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2a8034ba-7018-481a-862e-8f21457cc04f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vz5gw\" (UID: \"2a8034ba-7018-481a-862e-8f21457cc04f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vz5gw" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.126051 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-krs66\" (UniqueName: \"kubernetes.io/projected/87501447-4d19-45a7-bb18-636d9cec793e-kube-api-access-krs66\") pod \"machine-config-controller-84d6567774-52nk2\" (UID: \"87501447-4d19-45a7-bb18-636d9cec793e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52nk2" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.126102 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5v8s\" (UniqueName: \"kubernetes.io/projected/4c0a1c1b-7182-49e6-b027-d766ec67481d-kube-api-access-x5v8s\") pod \"dns-default-45lst\" (UID: \"4c0a1c1b-7182-49e6-b027-d766ec67481d\") " pod="openshift-dns/dns-default-45lst" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.126132 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/699f1f0c-fc1d-4599-97a8-a135238977b4-serving-cert\") pod \"route-controller-manager-6576b87f9c-sqrgb\" (UID: \"699f1f0c-fc1d-4599-97a8-a135238977b4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.126188 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2c1d5e1-166e-49ee-8134-5ba60fceaf56-service-ca-bundle\") pod \"authentication-operator-69f744f599-fsx9v\" (UID: \"b2c1d5e1-166e-49ee-8134-5ba60fceaf56\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fsx9v" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.126214 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgwxf\" (UniqueName: \"kubernetes.io/projected/0f1531c1-e77c-4e97-a216-2809a7566070-kube-api-access-rgwxf\") pod \"cluster-samples-operator-665b6dd947-jxdrh\" (UID: \"0f1531c1-e77c-4e97-a216-2809a7566070\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jxdrh" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.126231 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-config\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.126261 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/76eba8ae-c2d4-49ac-9cb1-4e7256c144ff-tmpfs\") pod \"packageserver-d55dfcdfc-psqvq\" (UID: \"76eba8ae-c2d4-49ac-9cb1-4e7256c144ff\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psqvq" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.126347 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8424bee6-8168-4c9f-b70e-5523e1990bcd-config\") pod \"controller-manager-879f6c89f-gwp6t\" (UID: \"8424bee6-8168-4c9f-b70e-5523e1990bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwp6t" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.126378 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8424bee6-8168-4c9f-b70e-5523e1990bcd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-gwp6t\" (UID: 
\"8424bee6-8168-4c9f-b70e-5523e1990bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwp6t" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.126429 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzbdj\" (UniqueName: \"kubernetes.io/projected/8424bee6-8168-4c9f-b70e-5523e1990bcd-kube-api-access-lzbdj\") pod \"controller-manager-879f6c89f-gwp6t\" (UID: \"8424bee6-8168-4c9f-b70e-5523e1990bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwp6t" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.126457 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptrrx\" (UniqueName: \"kubernetes.io/projected/319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2-kube-api-access-ptrrx\") pod \"csi-hostpathplugin-b65rs\" (UID: \"319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2\") " pod="hostpath-provisioner/csi-hostpathplugin-b65rs" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.126487 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b5191ad1-f211-45d1-a108-8d45b9d427f6-etcd-service-ca\") pod \"etcd-operator-b45778765-cz2f7\" (UID: \"b5191ad1-f211-45d1-a108-8d45b9d427f6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cz2f7" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.126510 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5191ad1-f211-45d1-a108-8d45b9d427f6-serving-cert\") pod \"etcd-operator-b45778765-cz2f7\" (UID: \"b5191ad1-f211-45d1-a108-8d45b9d427f6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cz2f7" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.126537 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c4a84d0a-86cb-46a4-9ef9-d2d9b2e712d5-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sv2tw\" (UID: \"c4a84d0a-86cb-46a4-9ef9-d2d9b2e712d5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sv2tw" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.126592 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptwc8\" (UniqueName: \"kubernetes.io/projected/63c09395-8cfa-4337-8323-0a90e333579a-kube-api-access-ptwc8\") pod \"machine-api-operator-5694c8668f-m5nhn\" (UID: \"63c09395-8cfa-4337-8323-0a90e333579a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5nhn" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.127175 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3f53591-5cb2-488a-b327-c41c05c5845f-config\") pod \"machine-approver-56656f9798-4vcjt\" (UID: \"b3f53591-5cb2-488a-b327-c41c05c5845f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4vcjt" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.127380 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8424bee6-8168-4c9f-b70e-5523e1990bcd-client-ca\") pod \"controller-manager-879f6c89f-gwp6t\" (UID: \"8424bee6-8168-4c9f-b70e-5523e1990bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwp6t" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.127970 5016 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8424bee6-8168-4c9f-b70e-5523e1990bcd-config\") pod \"controller-manager-879f6c89f-gwp6t\" (UID: \"8424bee6-8168-4c9f-b70e-5523e1990bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwp6t" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.127982 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2c1d5e1-166e-49ee-8134-5ba60fceaf56-config\") pod \"authentication-operator-69f744f599-fsx9v\" (UID: \"b2c1d5e1-166e-49ee-8134-5ba60fceaf56\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fsx9v" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.128175 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4b4fc34c-84fa-4a44-a585-61d852838755-audit-policies\") pod \"apiserver-7bbb656c7d-bhwkr\" (UID: \"4b4fc34c-84fa-4a44-a585-61d852838755\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.128247 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b4fc34c-84fa-4a44-a585-61d852838755-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bhwkr\" (UID: \"4b4fc34c-84fa-4a44-a585-61d852838755\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.128367 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2c1d5e1-166e-49ee-8134-5ba60fceaf56-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fsx9v\" (UID: \"b2c1d5e1-166e-49ee-8134-5ba60fceaf56\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fsx9v" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.128441 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b941f947-2402-495b-8808-0f91ab9433e0-config\") pod \"console-operator-58897d9998-mcbqj\" (UID: \"b941f947-2402-495b-8808-0f91ab9433e0\") " pod="openshift-console-operator/console-operator-58897d9998-mcbqj" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.128492 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5l9k\" (UniqueName: \"kubernetes.io/projected/699f1f0c-fc1d-4599-97a8-a135238977b4-kube-api-access-l5l9k\") pod \"route-controller-manager-6576b87f9c-sqrgb\" (UID: \"699f1f0c-fc1d-4599-97a8-a135238977b4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.128520 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b5191ad1-f211-45d1-a108-8d45b9d427f6-etcd-client\") pod \"etcd-operator-b45778765-cz2f7\" (UID: \"b5191ad1-f211-45d1-a108-8d45b9d427f6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cz2f7" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.128553 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9154515-364c-477a-8471-cf3d40b138b2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dqhld\" 
(UID: \"a9154515-364c-477a-8471-cf3d40b138b2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dqhld" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.128577 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2fca8b5-8ccb-4100-8570-82b07bdae3ee-secret-volume\") pod \"collect-profiles-29336130-mtlhx\" (UID: \"a2fca8b5-8ccb-4100-8570-82b07bdae3ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336130-mtlhx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.128601 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2-socket-dir\") pod \"csi-hostpathplugin-b65rs\" (UID: \"319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2\") " pod="hostpath-provisioner/csi-hostpathplugin-b65rs" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.128629 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c09395-8cfa-4337-8323-0a90e333579a-config\") pod \"machine-api-operator-5694c8668f-m5nhn\" (UID: \"63c09395-8cfa-4337-8323-0a90e333579a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5nhn" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.128677 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc4l8\" (UniqueName: \"kubernetes.io/projected/b5191ad1-f211-45d1-a108-8d45b9d427f6-kube-api-access-gc4l8\") pod \"etcd-operator-b45778765-cz2f7\" (UID: \"b5191ad1-f211-45d1-a108-8d45b9d427f6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cz2f7" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.128705 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lb4m\" (UniqueName: \"kubernetes.io/projected/76eba8ae-c2d4-49ac-9cb1-4e7256c144ff-kube-api-access-9lb4m\") pod \"packageserver-d55dfcdfc-psqvq\" (UID: \"76eba8ae-c2d4-49ac-9cb1-4e7256c144ff\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psqvq" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.128751 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2-csi-data-dir\") pod \"csi-hostpathplugin-b65rs\" (UID: \"319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2\") " pod="hostpath-provisioner/csi-hostpathplugin-b65rs" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.128779 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b3f53591-5cb2-488a-b327-c41c05c5845f-auth-proxy-config\") pod \"machine-approver-56656f9798-4vcjt\" (UID: \"b3f53591-5cb2-488a-b327-c41c05c5845f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4vcjt" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.128801 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4c0a1c1b-7182-49e6-b027-d766ec67481d-metrics-tls\") pod \"dns-default-45lst\" (UID: \"4c0a1c1b-7182-49e6-b027-d766ec67481d\") " pod="openshift-dns/dns-default-45lst" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.128801 5016 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b5191ad1-f211-45d1-a108-8d45b9d427f6-etcd-service-ca\") pod \"etcd-operator-b45778765-cz2f7\" (UID: \"b5191ad1-f211-45d1-a108-8d45b9d427f6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cz2f7" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.128824 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9154515-364c-477a-8471-cf3d40b138b2-config\") pod \"kube-apiserver-operator-766d6c64bb-dqhld\" (UID: \"a9154515-364c-477a-8471-cf3d40b138b2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dqhld" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.128852 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/63c09395-8cfa-4337-8323-0a90e333579a-images\") pod \"machine-api-operator-5694c8668f-m5nhn\" (UID: \"63c09395-8cfa-4337-8323-0a90e333579a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5nhn" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.128877 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v4s4\" (UniqueName: \"kubernetes.io/projected/b3f53591-5cb2-488a-b327-c41c05c5845f-kube-api-access-6v4s4\") pod \"machine-approver-56656f9798-4vcjt\" (UID: \"b3f53591-5cb2-488a-b327-c41c05c5845f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4vcjt" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.128903 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hbgv\" (UniqueName: \"kubernetes.io/projected/fee3401d-bf88-49cd-b228-a4e89c6dd40e-kube-api-access-9hbgv\") pod \"downloads-7954f5f757-znwnv\" (UID: \"fee3401d-bf88-49cd-b228-a4e89c6dd40e\") " pod="openshift-console/downloads-7954f5f757-znwnv" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.128927 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.128953 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2-mountpoint-dir\") pod \"csi-hostpathplugin-b65rs\" (UID: \"319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2\") " pod="hostpath-provisioner/csi-hostpathplugin-b65rs" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.129013 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-encryption-config\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.129038 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/699f1f0c-fc1d-4599-97a8-a135238977b4-client-ca\") pod \"route-controller-manager-6576b87f9c-sqrgb\" (UID: 
\"699f1f0c-fc1d-4599-97a8-a135238977b4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.129060 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b4fc34c-84fa-4a44-a585-61d852838755-serving-cert\") pod \"apiserver-7bbb656c7d-bhwkr\" (UID: \"4b4fc34c-84fa-4a44-a585-61d852838755\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.129087 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/76f2e3c8-c16d-4a3e-85d9-25cc30605ea0-default-certificate\") pod \"router-default-5444994796-mn4hd\" (UID: \"76f2e3c8-c16d-4a3e-85d9-25cc30605ea0\") " pod="openshift-ingress/router-default-5444994796-mn4hd" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.129109 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/76eba8ae-c2d4-49ac-9cb1-4e7256c144ff-webhook-cert\") pod \"packageserver-d55dfcdfc-psqvq\" (UID: \"76eba8ae-c2d4-49ac-9cb1-4e7256c144ff\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psqvq" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.129136 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98hf7\" (UniqueName: \"kubernetes.io/projected/b2c1d5e1-166e-49ee-8134-5ba60fceaf56-kube-api-access-98hf7\") pod \"authentication-operator-69f744f599-fsx9v\" (UID: \"b2c1d5e1-166e-49ee-8134-5ba60fceaf56\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fsx9v" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.129162 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lc5q\" (UniqueName: \"kubernetes.io/projected/2a8034ba-7018-481a-862e-8f21457cc04f-kube-api-access-4lc5q\") pod \"openshift-config-operator-7777fb866f-vz5gw\" (UID: \"2a8034ba-7018-481a-862e-8f21457cc04f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vz5gw" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.129184 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c0a1c1b-7182-49e6-b027-d766ec67481d-config-volume\") pod \"dns-default-45lst\" (UID: \"4c0a1c1b-7182-49e6-b027-d766ec67481d\") " pod="openshift-dns/dns-default-45lst" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.129208 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49wqf\" (UniqueName: \"kubernetes.io/projected/cebf51e6-36da-416a-9f26-d312d6118895-kube-api-access-49wqf\") pod \"catalog-operator-68c6474976-g8p9j\" (UID: \"cebf51e6-36da-416a-9f26-d312d6118895\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8p9j" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.129258 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-audit\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 
07:42:39.129282 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2-registration-dir\") pod \"csi-hostpathplugin-b65rs\" (UID: \"319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2\") " pod="hostpath-provisioner/csi-hostpathplugin-b65rs" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.129304 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2-plugins-dir\") pod \"csi-hostpathplugin-b65rs\" (UID: \"319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2\") " pod="hostpath-provisioner/csi-hostpathplugin-b65rs" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.129341 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2bd9\" (UniqueName: \"kubernetes.io/projected/b941f947-2402-495b-8808-0f91ab9433e0-kube-api-access-b2bd9\") pod \"console-operator-58897d9998-mcbqj\" (UID: \"b941f947-2402-495b-8808-0f91ab9433e0\") " pod="openshift-console-operator/console-operator-58897d9998-mcbqj" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.129364 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5191ad1-f211-45d1-a108-8d45b9d427f6-config\") pod \"etcd-operator-b45778765-cz2f7\" (UID: \"b5191ad1-f211-45d1-a108-8d45b9d427f6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cz2f7" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.129390 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-audit-dir\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.129414 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-659tq\" (UniqueName: \"kubernetes.io/projected/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-kube-api-access-659tq\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.129446 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/394f8f2b-fe85-414f-ab93-670b5291ac1b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4jcvc\" (UID: \"394f8f2b-fe85-414f-ab93-670b5291ac1b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcvc" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.129478 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-etcd-serving-ca\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.129501 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b941f947-2402-495b-8808-0f91ab9433e0-trusted-ca\") pod \"console-operator-58897d9998-mcbqj\" (UID: 
\"b941f947-2402-495b-8808-0f91ab9433e0\") " pod="openshift-console-operator/console-operator-58897d9998-mcbqj" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.129526 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a8034ba-7018-481a-862e-8f21457cc04f-serving-cert\") pod \"openshift-config-operator-7777fb866f-vz5gw\" (UID: \"2a8034ba-7018-481a-862e-8f21457cc04f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vz5gw" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.129548 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b3f53591-5cb2-488a-b327-c41c05c5845f-machine-approver-tls\") pod \"machine-approver-56656f9798-4vcjt\" (UID: \"b3f53591-5cb2-488a-b327-c41c05c5845f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4vcjt" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.129570 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4b4fc34c-84fa-4a44-a585-61d852838755-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bhwkr\" (UID: \"4b4fc34c-84fa-4a44-a585-61d852838755\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.129594 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c1d5e1-166e-49ee-8134-5ba60fceaf56-serving-cert\") pod \"authentication-operator-69f744f599-fsx9v\" (UID: \"b2c1d5e1-166e-49ee-8134-5ba60fceaf56\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fsx9v" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.129619 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znfrd\" (UniqueName: \"kubernetes.io/projected/394f8f2b-fe85-414f-ab93-670b5291ac1b-kube-api-access-znfrd\") pod \"openshift-apiserver-operator-796bbdcf4f-4jcvc\" (UID: \"394f8f2b-fe85-414f-ab93-670b5291ac1b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcvc" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.129645 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b5191ad1-f211-45d1-a108-8d45b9d427f6-etcd-ca\") pod \"etcd-operator-b45778765-cz2f7\" (UID: \"b5191ad1-f211-45d1-a108-8d45b9d427f6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cz2f7" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.129701 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-image-import-ca\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.129721 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4b4fc34c-84fa-4a44-a585-61d852838755-etcd-client\") pod \"apiserver-7bbb656c7d-bhwkr\" (UID: \"4b4fc34c-84fa-4a44-a585-61d852838755\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.129733 5016 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.129745 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57qfq\" (UniqueName: \"kubernetes.io/projected/4b4fc34c-84fa-4a44-a585-61d852838755-kube-api-access-57qfq\") pod \"apiserver-7bbb656c7d-bhwkr\" (UID: \"4b4fc34c-84fa-4a44-a585-61d852838755\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.130032 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-node-pullsecrets\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.130071 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76f2e3c8-c16d-4a3e-85d9-25cc30605ea0-service-ca-bundle\") pod \"router-default-5444994796-mn4hd\" (UID: \"76f2e3c8-c16d-4a3e-85d9-25cc30605ea0\") " pod="openshift-ingress/router-default-5444994796-mn4hd" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.130103 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/63c09395-8cfa-4337-8323-0a90e333579a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-m5nhn\" (UID: \"63c09395-8cfa-4337-8323-0a90e333579a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5nhn" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.130127 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/699f1f0c-fc1d-4599-97a8-a135238977b4-config\") pod \"route-controller-manager-6576b87f9c-sqrgb\" (UID: \"699f1f0c-fc1d-4599-97a8-a135238977b4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.130549 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b3f53591-5cb2-488a-b327-c41c05c5845f-auth-proxy-config\") pod \"machine-approver-56656f9798-4vcjt\" (UID: \"b3f53591-5cb2-488a-b327-c41c05c5845f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4vcjt" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.130772 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b4fc34c-84fa-4a44-a585-61d852838755-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-bhwkr\" (UID: \"4b4fc34c-84fa-4a44-a585-61d852838755\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.130875 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b941f947-2402-495b-8808-0f91ab9433e0-config\") pod \"console-operator-58897d9998-mcbqj\" (UID: \"b941f947-2402-495b-8808-0f91ab9433e0\") " pod="openshift-console-operator/console-operator-58897d9998-mcbqj" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.130889 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["hostpath-provisioner/csi-hostpathplugin-b65rs"] Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.131063 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2c1d5e1-166e-49ee-8134-5ba60fceaf56-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-fsx9v\" (UID: \"b2c1d5e1-166e-49ee-8134-5ba60fceaf56\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fsx9v" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.131706 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/63c09395-8cfa-4337-8323-0a90e333579a-images\") pod \"machine-api-operator-5694c8668f-m5nhn\" (UID: \"63c09395-8cfa-4337-8323-0a90e333579a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5nhn" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.131770 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/699f1f0c-fc1d-4599-97a8-a135238977b4-config\") pod \"route-controller-manager-6576b87f9c-sqrgb\" (UID: \"699f1f0c-fc1d-4599-97a8-a135238977b4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.131847 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-node-pullsecrets\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.133057 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2a8034ba-7018-481a-862e-8f21457cc04f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-vz5gw\" (UID: \"2a8034ba-7018-481a-862e-8f21457cc04f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vz5gw" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.133116 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8424bee6-8168-4c9f-b70e-5523e1990bcd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-gwp6t\" (UID: \"8424bee6-8168-4c9f-b70e-5523e1990bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwp6t" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.133167 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b941f947-2402-495b-8808-0f91ab9433e0-serving-cert\") pod \"console-operator-58897d9998-mcbqj\" (UID: \"b941f947-2402-495b-8808-0f91ab9433e0\") " pod="openshift-console-operator/console-operator-58897d9998-mcbqj" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.133240 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/699f1f0c-fc1d-4599-97a8-a135238977b4-serving-cert\") pod \"route-controller-manager-6576b87f9c-sqrgb\" (UID: \"699f1f0c-fc1d-4599-97a8-a135238977b4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.133363 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-audit-dir\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.133843 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.134047 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c09395-8cfa-4337-8323-0a90e333579a-config\") pod \"machine-api-operator-5694c8668f-m5nhn\" (UID: \"63c09395-8cfa-4337-8323-0a90e333579a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5nhn" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.134047 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b5191ad1-f211-45d1-a108-8d45b9d427f6-etcd-client\") pod \"etcd-operator-b45778765-cz2f7\" (UID: \"b5191ad1-f211-45d1-a108-8d45b9d427f6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cz2f7" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.134121 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5191ad1-f211-45d1-a108-8d45b9d427f6-config\") pod \"etcd-operator-b45778765-cz2f7\" (UID: \"b5191ad1-f211-45d1-a108-8d45b9d427f6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cz2f7" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.134387 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-serving-cert\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.134437 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/699f1f0c-fc1d-4599-97a8-a135238977b4-client-ca\") pod \"route-controller-manager-6576b87f9c-sqrgb\" (UID: \"699f1f0c-fc1d-4599-97a8-a135238977b4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.134551 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-etcd-serving-ca\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.134753 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4b4fc34c-84fa-4a44-a585-61d852838755-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-bhwkr\" (UID: \"4b4fc34c-84fa-4a44-a585-61d852838755\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.135571 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8424bee6-8168-4c9f-b70e-5523e1990bcd-serving-cert\") pod \"controller-manager-879f6c89f-gwp6t\" (UID: \"8424bee6-8168-4c9f-b70e-5523e1990bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwp6t" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.135624 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b941f947-2402-495b-8808-0f91ab9433e0-trusted-ca\") pod \"console-operator-58897d9998-mcbqj\" (UID: \"b941f947-2402-495b-8808-0f91ab9433e0\") " pod="openshift-console-operator/console-operator-58897d9998-mcbqj" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.135863 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4b4fc34c-84fa-4a44-a585-61d852838755-encryption-config\") pod \"apiserver-7bbb656c7d-bhwkr\" (UID: \"4b4fc34c-84fa-4a44-a585-61d852838755\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.136020 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-audit\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.136292 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b5191ad1-f211-45d1-a108-8d45b9d427f6-etcd-ca\") pod \"etcd-operator-b45778765-cz2f7\" (UID: \"b5191ad1-f211-45d1-a108-8d45b9d427f6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cz2f7" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.136416 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-image-import-ca\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.136543 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-encryption-config\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.136866 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a8034ba-7018-481a-862e-8f21457cc04f-serving-cert\") pod \"openshift-config-operator-7777fb866f-vz5gw\" (UID: \"2a8034ba-7018-481a-862e-8f21457cc04f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vz5gw" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.136895 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5191ad1-f211-45d1-a108-8d45b9d427f6-serving-cert\") pod \"etcd-operator-b45778765-cz2f7\" (UID: \"b5191ad1-f211-45d1-a108-8d45b9d427f6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cz2f7" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.137209 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/63c09395-8cfa-4337-8323-0a90e333579a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-m5nhn\" (UID: \"63c09395-8cfa-4337-8323-0a90e333579a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5nhn" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.137699 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b3f53591-5cb2-488a-b327-c41c05c5845f-machine-approver-tls\") pod \"machine-approver-56656f9798-4vcjt\" (UID: \"b3f53591-5cb2-488a-b327-c41c05c5845f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4vcjt" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.137719 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b4fc34c-84fa-4a44-a585-61d852838755-serving-cert\") pod \"apiserver-7bbb656c7d-bhwkr\" (UID: \"4b4fc34c-84fa-4a44-a585-61d852838755\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.138879 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2c1d5e1-166e-49ee-8134-5ba60fceaf56-service-ca-bundle\") pod \"authentication-operator-69f744f599-fsx9v\" (UID: \"b2c1d5e1-166e-49ee-8134-5ba60fceaf56\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fsx9v" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.139330 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-etcd-client\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.139695 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/394f8f2b-fe85-414f-ab93-670b5291ac1b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4jcvc\" (UID: \"394f8f2b-fe85-414f-ab93-670b5291ac1b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcvc" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.139700 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4b4fc34c-84fa-4a44-a585-61d852838755-etcd-client\") pod \"apiserver-7bbb656c7d-bhwkr\" (UID: \"4b4fc34c-84fa-4a44-a585-61d852838755\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.139872 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2c1d5e1-166e-49ee-8134-5ba60fceaf56-serving-cert\") pod \"authentication-operator-69f744f599-fsx9v\" (UID: \"b2c1d5e1-166e-49ee-8134-5ba60fceaf56\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fsx9v" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.140374 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f1531c1-e77c-4e97-a216-2809a7566070-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jxdrh\" (UID: \"0f1531c1-e77c-4e97-a216-2809a7566070\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jxdrh" Oct 11 
07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.150748 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.176193 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.188905 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.229010 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.231734 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/76f2e3c8-c16d-4a3e-85d9-25cc30605ea0-default-certificate\") pod \"router-default-5444994796-mn4hd\" (UID: \"76f2e3c8-c16d-4a3e-85d9-25cc30605ea0\") " pod="openshift-ingress/router-default-5444994796-mn4hd" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.233202 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/76eba8ae-c2d4-49ac-9cb1-4e7256c144ff-webhook-cert\") pod \"packageserver-d55dfcdfc-psqvq\" (UID: \"76eba8ae-c2d4-49ac-9cb1-4e7256c144ff\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psqvq" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.233302 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c0a1c1b-7182-49e6-b027-d766ec67481d-config-volume\") pod \"dns-default-45lst\" (UID: \"4c0a1c1b-7182-49e6-b027-d766ec67481d\") " pod="openshift-dns/dns-default-45lst" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.233327 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49wqf\" (UniqueName: \"kubernetes.io/projected/cebf51e6-36da-416a-9f26-d312d6118895-kube-api-access-49wqf\") pod \"catalog-operator-68c6474976-g8p9j\" (UID: \"cebf51e6-36da-416a-9f26-d312d6118895\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8p9j" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.233345 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2-plugins-dir\") pod \"csi-hostpathplugin-b65rs\" (UID: \"319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2\") " pod="hostpath-provisioner/csi-hostpathplugin-b65rs" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.233364 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2-registration-dir\") pod \"csi-hostpathplugin-b65rs\" (UID: \"319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2\") " pod="hostpath-provisioner/csi-hostpathplugin-b65rs" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.233615 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2-registration-dir\") pod \"csi-hostpathplugin-b65rs\" (UID: \"319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2\") " pod="hostpath-provisioner/csi-hostpathplugin-b65rs" Oct 11 07:42:39 crc 
kubenswrapper[5016]: I1011 07:42:39.233760 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2-plugins-dir\") pod \"csi-hostpathplugin-b65rs\" (UID: \"319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2\") " pod="hostpath-provisioner/csi-hostpathplugin-b65rs" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.233805 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76f2e3c8-c16d-4a3e-85d9-25cc30605ea0-service-ca-bundle\") pod \"router-default-5444994796-mn4hd\" (UID: \"76f2e3c8-c16d-4a3e-85d9-25cc30605ea0\") " pod="openshift-ingress/router-default-5444994796-mn4hd" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.233830 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a9154515-364c-477a-8471-cf3d40b138b2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dqhld\" (UID: \"a9154515-364c-477a-8471-cf3d40b138b2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dqhld" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.233851 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c4a84d0a-86cb-46a4-9ef9-d2d9b2e712d5-srv-cert\") pod \"olm-operator-6b444d44fb-sv2tw\" (UID: \"c4a84d0a-86cb-46a4-9ef9-d2d9b2e712d5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sv2tw" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.233869 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfsxn\" (UniqueName: \"kubernetes.io/projected/a2fca8b5-8ccb-4100-8570-82b07bdae3ee-kube-api-access-lfsxn\") pod \"collect-profiles-29336130-mtlhx\" (UID: \"a2fca8b5-8ccb-4100-8570-82b07bdae3ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336130-mtlhx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.234062 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cebf51e6-36da-416a-9f26-d312d6118895-srv-cert\") pod \"catalog-operator-68c6474976-g8p9j\" (UID: \"cebf51e6-36da-416a-9f26-d312d6118895\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8p9j" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.234086 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cebf51e6-36da-416a-9f26-d312d6118895-profile-collector-cert\") pod \"catalog-operator-68c6474976-g8p9j\" (UID: \"cebf51e6-36da-416a-9f26-d312d6118895\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8p9j" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.234117 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/76f2e3c8-c16d-4a3e-85d9-25cc30605ea0-stats-auth\") pod \"router-default-5444994796-mn4hd\" (UID: \"76f2e3c8-c16d-4a3e-85d9-25cc30605ea0\") " pod="openshift-ingress/router-default-5444994796-mn4hd" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.234140 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsl2b\" (UniqueName: 
\"kubernetes.io/projected/76f2e3c8-c16d-4a3e-85d9-25cc30605ea0-kube-api-access-xsl2b\") pod \"router-default-5444994796-mn4hd\" (UID: \"76f2e3c8-c16d-4a3e-85d9-25cc30605ea0\") " pod="openshift-ingress/router-default-5444994796-mn4hd" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.234163 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2fca8b5-8ccb-4100-8570-82b07bdae3ee-config-volume\") pod \"collect-profiles-29336130-mtlhx\" (UID: \"a2fca8b5-8ccb-4100-8570-82b07bdae3ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336130-mtlhx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.234182 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/87501447-4d19-45a7-bb18-636d9cec793e-proxy-tls\") pod \"machine-config-controller-84d6567774-52nk2\" (UID: \"87501447-4d19-45a7-bb18-636d9cec793e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52nk2" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.234207 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5pqb\" (UniqueName: \"kubernetes.io/projected/c4a84d0a-86cb-46a4-9ef9-d2d9b2e712d5-kube-api-access-d5pqb\") pod \"olm-operator-6b444d44fb-sv2tw\" (UID: \"c4a84d0a-86cb-46a4-9ef9-d2d9b2e712d5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sv2tw" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.234423 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/76eba8ae-c2d4-49ac-9cb1-4e7256c144ff-apiservice-cert\") pod \"packageserver-d55dfcdfc-psqvq\" (UID: \"76eba8ae-c2d4-49ac-9cb1-4e7256c144ff\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psqvq" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.234442 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/76f2e3c8-c16d-4a3e-85d9-25cc30605ea0-metrics-certs\") pod \"router-default-5444994796-mn4hd\" (UID: \"76f2e3c8-c16d-4a3e-85d9-25cc30605ea0\") " pod="openshift-ingress/router-default-5444994796-mn4hd" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.234458 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/87501447-4d19-45a7-bb18-636d9cec793e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-52nk2\" (UID: \"87501447-4d19-45a7-bb18-636d9cec793e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52nk2" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.234480 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krs66\" (UniqueName: \"kubernetes.io/projected/87501447-4d19-45a7-bb18-636d9cec793e-kube-api-access-krs66\") pod \"machine-config-controller-84d6567774-52nk2\" (UID: \"87501447-4d19-45a7-bb18-636d9cec793e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52nk2" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.234496 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5v8s\" (UniqueName: \"kubernetes.io/projected/4c0a1c1b-7182-49e6-b027-d766ec67481d-kube-api-access-x5v8s\") pod \"dns-default-45lst\" (UID: 
\"4c0a1c1b-7182-49e6-b027-d766ec67481d\") " pod="openshift-dns/dns-default-45lst" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.234518 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/76eba8ae-c2d4-49ac-9cb1-4e7256c144ff-tmpfs\") pod \"packageserver-d55dfcdfc-psqvq\" (UID: \"76eba8ae-c2d4-49ac-9cb1-4e7256c144ff\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psqvq" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.235444 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/87501447-4d19-45a7-bb18-636d9cec793e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-52nk2\" (UID: \"87501447-4d19-45a7-bb18-636d9cec793e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52nk2" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.235576 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptrrx\" (UniqueName: \"kubernetes.io/projected/319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2-kube-api-access-ptrrx\") pod \"csi-hostpathplugin-b65rs\" (UID: \"319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2\") " pod="hostpath-provisioner/csi-hostpathplugin-b65rs" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.235808 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c4a84d0a-86cb-46a4-9ef9-d2d9b2e712d5-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sv2tw\" (UID: \"c4a84d0a-86cb-46a4-9ef9-d2d9b2e712d5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sv2tw" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.236028 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9154515-364c-477a-8471-cf3d40b138b2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dqhld\" (UID: \"a9154515-364c-477a-8471-cf3d40b138b2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dqhld" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.235917 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/76eba8ae-c2d4-49ac-9cb1-4e7256c144ff-tmpfs\") pod \"packageserver-d55dfcdfc-psqvq\" (UID: \"76eba8ae-c2d4-49ac-9cb1-4e7256c144ff\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psqvq" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.236060 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2fca8b5-8ccb-4100-8570-82b07bdae3ee-secret-volume\") pod \"collect-profiles-29336130-mtlhx\" (UID: \"a2fca8b5-8ccb-4100-8570-82b07bdae3ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336130-mtlhx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.236185 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2-socket-dir\") pod \"csi-hostpathplugin-b65rs\" (UID: \"319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2\") " pod="hostpath-provisioner/csi-hostpathplugin-b65rs" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.236244 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9lb4m\" (UniqueName: \"kubernetes.io/projected/76eba8ae-c2d4-49ac-9cb1-4e7256c144ff-kube-api-access-9lb4m\") pod \"packageserver-d55dfcdfc-psqvq\" (UID: \"76eba8ae-c2d4-49ac-9cb1-4e7256c144ff\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psqvq" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.236296 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2-csi-data-dir\") pod \"csi-hostpathplugin-b65rs\" (UID: \"319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2\") " pod="hostpath-provisioner/csi-hostpathplugin-b65rs" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.236327 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9154515-364c-477a-8471-cf3d40b138b2-config\") pod \"kube-apiserver-operator-766d6c64bb-dqhld\" (UID: \"a9154515-364c-477a-8471-cf3d40b138b2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dqhld" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.236335 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2-socket-dir\") pod \"csi-hostpathplugin-b65rs\" (UID: \"319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2\") " pod="hostpath-provisioner/csi-hostpathplugin-b65rs" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.236355 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4c0a1c1b-7182-49e6-b027-d766ec67481d-metrics-tls\") pod \"dns-default-45lst\" (UID: \"4c0a1c1b-7182-49e6-b027-d766ec67481d\") " pod="openshift-dns/dns-default-45lst" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.236382 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2-mountpoint-dir\") pod \"csi-hostpathplugin-b65rs\" (UID: \"319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2\") " pod="hostpath-provisioner/csi-hostpathplugin-b65rs" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.236501 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2-mountpoint-dir\") pod \"csi-hostpathplugin-b65rs\" (UID: \"319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2\") " pod="hostpath-provisioner/csi-hostpathplugin-b65rs" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.236971 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9154515-364c-477a-8471-cf3d40b138b2-config\") pod \"kube-apiserver-operator-766d6c64bb-dqhld\" (UID: \"a9154515-364c-477a-8471-cf3d40b138b2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dqhld" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.237112 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2-csi-data-dir\") pod \"csi-hostpathplugin-b65rs\" (UID: \"319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2\") " pod="hostpath-provisioner/csi-hostpathplugin-b65rs" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.238838 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9154515-364c-477a-8471-cf3d40b138b2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-dqhld\" (UID: \"a9154515-364c-477a-8471-cf3d40b138b2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dqhld" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.248128 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.258231 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/87501447-4d19-45a7-bb18-636d9cec793e-proxy-tls\") pod \"machine-config-controller-84d6567774-52nk2\" (UID: \"87501447-4d19-45a7-bb18-636d9cec793e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52nk2" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.270095 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.288608 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.308409 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.328323 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.349068 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.368485 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.389084 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.409226 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.428402 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.449357 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.478591 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.495185 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.509634 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.543520 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" 
Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.548814 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.568194 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.588387 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.609407 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.628834 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.649044 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.669342 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.690169 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.708611 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.728741 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.749332 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.768755 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.789413 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.800575 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c4a84d0a-86cb-46a4-9ef9-d2d9b2e712d5-srv-cert\") pod \"olm-operator-6b444d44fb-sv2tw\" (UID: \"c4a84d0a-86cb-46a4-9ef9-d2d9b2e712d5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sv2tw" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.809673 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.829851 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.848963 5016 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.859860 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cebf51e6-36da-416a-9f26-d312d6118895-profile-collector-cert\") pod \"catalog-operator-68c6474976-g8p9j\" (UID: \"cebf51e6-36da-416a-9f26-d312d6118895\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8p9j" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.860145 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2fca8b5-8ccb-4100-8570-82b07bdae3ee-secret-volume\") pod \"collect-profiles-29336130-mtlhx\" (UID: \"a2fca8b5-8ccb-4100-8570-82b07bdae3ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336130-mtlhx" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.861646 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c4a84d0a-86cb-46a4-9ef9-d2d9b2e712d5-profile-collector-cert\") pod \"olm-operator-6b444d44fb-sv2tw\" (UID: \"c4a84d0a-86cb-46a4-9ef9-d2d9b2e712d5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sv2tw" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.869601 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.889618 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.910401 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.929985 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.948972 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.970747 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Oct 11 07:42:39 crc kubenswrapper[5016]: I1011 07:42:39.990378 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.008881 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.027547 5016 request.go:700] Waited for 1.001827722s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackageserver-service-cert&limit=500&resourceVersion=0 Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.029593 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Oct 11 07:42:40 crc kubenswrapper[5016]: 
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.037208 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/76eba8ae-c2d4-49ac-9cb1-4e7256c144ff-webhook-cert\") pod \"packageserver-d55dfcdfc-psqvq\" (UID: \"76eba8ae-c2d4-49ac-9cb1-4e7256c144ff\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psqvq"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.040716 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/76eba8ae-c2d4-49ac-9cb1-4e7256c144ff-apiservice-cert\") pod \"packageserver-d55dfcdfc-psqvq\" (UID: \"76eba8ae-c2d4-49ac-9cb1-4e7256c144ff\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psqvq"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.049113 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.057634 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cebf51e6-36da-416a-9f26-d312d6118895-srv-cert\") pod \"catalog-operator-68c6474976-g8p9j\" (UID: \"cebf51e6-36da-416a-9f26-d312d6118895\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8p9j"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.069532 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.088849 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.109121 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.129889 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.148870 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.169050 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.189102 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.208581 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.228774 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Oct 11 07:42:40 crc kubenswrapper[5016]: E1011 07:42:40.232128 5016 secret.go:188] Couldn't get secret openshift-ingress/router-certs-default: failed to sync secret cache: timed out waiting for the condition
Oct 11 07:42:40 crc kubenswrapper[5016]: E1011 07:42:40.232216 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76f2e3c8-c16d-4a3e-85d9-25cc30605ea0-default-certificate podName:76f2e3c8-c16d-4a3e-85d9-25cc30605ea0 nodeName:}" failed. No retries permitted until 2025-10-11 07:42:40.732194898 +0000 UTC m=+148.632650854 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-certificate" (UniqueName: "kubernetes.io/secret/76f2e3c8-c16d-4a3e-85d9-25cc30605ea0-default-certificate") pod "router-default-5444994796-mn4hd" (UID: "76f2e3c8-c16d-4a3e-85d9-25cc30605ea0") : failed to sync secret cache: timed out waiting for the condition
Oct 11 07:42:40 crc kubenswrapper[5016]: E1011 07:42:40.234452 5016 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition
Oct 11 07:42:40 crc kubenswrapper[5016]: E1011 07:42:40.234550 5016 secret.go:188] Couldn't get secret openshift-ingress/router-stats-default: failed to sync secret cache: timed out waiting for the condition
Oct 11 07:42:40 crc kubenswrapper[5016]: E1011 07:42:40.234626 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a2fca8b5-8ccb-4100-8570-82b07bdae3ee-config-volume podName:a2fca8b5-8ccb-4100-8570-82b07bdae3ee nodeName:}" failed. No retries permitted until 2025-10-11 07:42:40.734598776 +0000 UTC m=+148.635054762 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a2fca8b5-8ccb-4100-8570-82b07bdae3ee-config-volume") pod "collect-profiles-29336130-mtlhx" (UID: "a2fca8b5-8ccb-4100-8570-82b07bdae3ee") : failed to sync configmap cache: timed out waiting for the condition
Oct 11 07:42:40 crc kubenswrapper[5016]: E1011 07:42:40.234483 5016 configmap.go:193] Couldn't get configMap openshift-ingress/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Oct 11 07:42:40 crc kubenswrapper[5016]: E1011 07:42:40.234703 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76f2e3c8-c16d-4a3e-85d9-25cc30605ea0-stats-auth podName:76f2e3c8-c16d-4a3e-85d9-25cc30605ea0 nodeName:}" failed. No retries permitted until 2025-10-11 07:42:40.734641498 +0000 UTC m=+148.635097474 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "stats-auth" (UniqueName: "kubernetes.io/secret/76f2e3c8-c16d-4a3e-85d9-25cc30605ea0-stats-auth") pod "router-default-5444994796-mn4hd" (UID: "76f2e3c8-c16d-4a3e-85d9-25cc30605ea0") : failed to sync secret cache: timed out waiting for the condition
Oct 11 07:42:40 crc kubenswrapper[5016]: E1011 07:42:40.234513 5016 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition
Oct 11 07:42:40 crc kubenswrapper[5016]: E1011 07:42:40.234822 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/76f2e3c8-c16d-4a3e-85d9-25cc30605ea0-service-ca-bundle podName:76f2e3c8-c16d-4a3e-85d9-25cc30605ea0 nodeName:}" failed. No retries permitted until 2025-10-11 07:42:40.734779821 +0000 UTC m=+148.635235867 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/76f2e3c8-c16d-4a3e-85d9-25cc30605ea0-service-ca-bundle") pod "router-default-5444994796-mn4hd" (UID: "76f2e3c8-c16d-4a3e-85d9-25cc30605ea0") : failed to sync configmap cache: timed out waiting for the condition
Oct 11 07:42:40 crc kubenswrapper[5016]: E1011 07:42:40.234951 5016 secret.go:188] Couldn't get secret openshift-ingress/router-metrics-certs-default: failed to sync secret cache: timed out waiting for the condition
Oct 11 07:42:40 crc kubenswrapper[5016]: E1011 07:42:40.234955 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4c0a1c1b-7182-49e6-b027-d766ec67481d-config-volume podName:4c0a1c1b-7182-49e6-b027-d766ec67481d nodeName:}" failed. No retries permitted until 2025-10-11 07:42:40.734932346 +0000 UTC m=+148.635388462 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4c0a1c1b-7182-49e6-b027-d766ec67481d-config-volume") pod "dns-default-45lst" (UID: "4c0a1c1b-7182-49e6-b027-d766ec67481d") : failed to sync configmap cache: timed out waiting for the condition
Oct 11 07:42:40 crc kubenswrapper[5016]: E1011 07:42:40.235083 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76f2e3c8-c16d-4a3e-85d9-25cc30605ea0-metrics-certs podName:76f2e3c8-c16d-4a3e-85d9-25cc30605ea0 nodeName:}" failed. No retries permitted until 2025-10-11 07:42:40.735032718 +0000 UTC m=+148.635488794 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/76f2e3c8-c16d-4a3e-85d9-25cc30605ea0-metrics-certs") pod "router-default-5444994796-mn4hd" (UID: "76f2e3c8-c16d-4a3e-85d9-25cc30605ea0") : failed to sync secret cache: timed out waiting for the condition
Oct 11 07:42:40 crc kubenswrapper[5016]: E1011 07:42:40.237187 5016 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Oct 11 07:42:40 crc kubenswrapper[5016]: E1011 07:42:40.237248 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c0a1c1b-7182-49e6-b027-d766ec67481d-metrics-tls podName:4c0a1c1b-7182-49e6-b027-d766ec67481d nodeName:}" failed. No retries permitted until 2025-10-11 07:42:40.737234981 +0000 UTC m=+148.637690937 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4c0a1c1b-7182-49e6-b027-d766ec67481d-metrics-tls") pod "dns-default-45lst" (UID: "4c0a1c1b-7182-49e6-b027-d766ec67481d") : failed to sync secret cache: timed out waiting for the condition
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.260125 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.268886 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.289870 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.308747 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.328728 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.349081 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.369946 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.389512 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.409447 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.429567 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.449592 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.469275 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.489048 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.509767 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.529800 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.549554 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.568547 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
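The nestedpendingoperations lines above show the kubelet's retry discipline for failed mounts: the operation is not retried immediately, but gated by a delay that starts at 500ms (the logged durationBeforeRetry) and grows on repeated failure up to a cap. A minimal stdlib-only sketch of that pattern; the doubling factor, cap, and attempt limit here are illustrative, not the kubelet's exact constants:

```go
// Sketch: retry with exponential backoff, as in "No retries permitted
// until ... (durationBeforeRetry 500ms)".
package main

import (
	"errors"
	"fmt"
	"time"
)

func retryWithBackoff(op func() error, base, maxDelay time.Duration) error {
	delay := base
	for attempt := 1; ; attempt++ {
		err := op()
		if err == nil {
			return nil
		}
		if attempt == 8 {
			return fmt.Errorf("giving up after %d attempts: %w", attempt, err)
		}
		fmt.Printf("attempt %d failed; no retries permitted for %v\n", attempt, delay)
		time.Sleep(delay)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
}

func main() {
	calls := 0
	_ = retryWithBackoff(func() error {
		calls++
		if calls < 4 {
			// Mirrors the transient failure mode seen in the log.
			return errors.New("failed to sync secret cache")
		}
		return nil
	}, 500*time.Millisecond, 2*time.Minute)
}
```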
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.589391 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.608578 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.629391 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.649954 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.670512 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.689300 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.710455 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.729922 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.750346 5016 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.757877 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/76f2e3c8-c16d-4a3e-85d9-25cc30605ea0-stats-auth\") pod \"router-default-5444994796-mn4hd\" (UID: \"76f2e3c8-c16d-4a3e-85d9-25cc30605ea0\") " pod="openshift-ingress/router-default-5444994796-mn4hd"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.757973 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2fca8b5-8ccb-4100-8570-82b07bdae3ee-config-volume\") pod \"collect-profiles-29336130-mtlhx\" (UID: \"a2fca8b5-8ccb-4100-8570-82b07bdae3ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336130-mtlhx"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.758010 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/76f2e3c8-c16d-4a3e-85d9-25cc30605ea0-metrics-certs\") pod \"router-default-5444994796-mn4hd\" (UID: \"76f2e3c8-c16d-4a3e-85d9-25cc30605ea0\") " pod="openshift-ingress/router-default-5444994796-mn4hd"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.758227 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4c0a1c1b-7182-49e6-b027-d766ec67481d-metrics-tls\") pod \"dns-default-45lst\" (UID: \"4c0a1c1b-7182-49e6-b027-d766ec67481d\") " pod="openshift-dns/dns-default-45lst"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.758320 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/76f2e3c8-c16d-4a3e-85d9-25cc30605ea0-default-certificate\") pod \"router-default-5444994796-mn4hd\" (UID: \"76f2e3c8-c16d-4a3e-85d9-25cc30605ea0\") " pod="openshift-ingress/router-default-5444994796-mn4hd"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.758399 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c0a1c1b-7182-49e6-b027-d766ec67481d-config-volume\") pod \"dns-default-45lst\" (UID: \"4c0a1c1b-7182-49e6-b027-d766ec67481d\") " pod="openshift-dns/dns-default-45lst"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.758561 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76f2e3c8-c16d-4a3e-85d9-25cc30605ea0-service-ca-bundle\") pod \"router-default-5444994796-mn4hd\" (UID: \"76f2e3c8-c16d-4a3e-85d9-25cc30605ea0\") " pod="openshift-ingress/router-default-5444994796-mn4hd"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.760537 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2fca8b5-8ccb-4100-8570-82b07bdae3ee-config-volume\") pod \"collect-profiles-29336130-mtlhx\" (UID: \"a2fca8b5-8ccb-4100-8570-82b07bdae3ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336130-mtlhx"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.760554 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c0a1c1b-7182-49e6-b027-d766ec67481d-config-volume\") pod \"dns-default-45lst\" (UID: \"4c0a1c1b-7182-49e6-b027-d766ec67481d\") " pod="openshift-dns/dns-default-45lst"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.761365 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76f2e3c8-c16d-4a3e-85d9-25cc30605ea0-service-ca-bundle\") pod \"router-default-5444994796-mn4hd\" (UID: \"76f2e3c8-c16d-4a3e-85d9-25cc30605ea0\") " pod="openshift-ingress/router-default-5444994796-mn4hd"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.764016 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4c0a1c1b-7182-49e6-b027-d766ec67481d-metrics-tls\") pod \"dns-default-45lst\" (UID: \"4c0a1c1b-7182-49e6-b027-d766ec67481d\") " pod="openshift-dns/dns-default-45lst"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.764389 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/76f2e3c8-c16d-4a3e-85d9-25cc30605ea0-stats-auth\") pod \"router-default-5444994796-mn4hd\" (UID: \"76f2e3c8-c16d-4a3e-85d9-25cc30605ea0\") " pod="openshift-ingress/router-default-5444994796-mn4hd"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.765386 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/76f2e3c8-c16d-4a3e-85d9-25cc30605ea0-default-certificate\") pod \"router-default-5444994796-mn4hd\" (UID: \"76f2e3c8-c16d-4a3e-85d9-25cc30605ea0\") " pod="openshift-ingress/router-default-5444994796-mn4hd"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.766729 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/76f2e3c8-c16d-4a3e-85d9-25cc30605ea0-metrics-certs\") pod \"router-default-5444994796-mn4hd\" (UID: \"76f2e3c8-c16d-4a3e-85d9-25cc30605ea0\") " pod="openshift-ingress/router-default-5444994796-mn4hd"
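The reconciler_common lines above are the volume manager's reconcile loop: it diffs the desired state of the world (volumes the scheduled pods declare) against the actual state (what is mounted) and starts a MountVolume operation for each missing entry; once the cache-sync errors cleared, the retried mounts succeed. A minimal sketch of that diff-and-act pattern; the types and names are illustrative, not kubelet's:

```go
// Sketch: desired-vs-actual reconciliation behind the MountVolume lines.
package main

import "fmt"

type volume struct{ pod, name string }

func reconcile(desired, actual map[volume]bool, mount func(volume) error) {
	for v := range desired {
		if !actual[v] {
			fmt.Printf("operationExecutor.MountVolume started for volume %q pod %q\n", v.name, v.pod)
			if err := mount(v); err != nil {
				continue // left unmounted; picked up again on the next pass
			}
			actual[v] = true
			fmt.Printf("MountVolume.SetUp succeeded for volume %q pod %q\n", v.name, v.pod)
		}
	}
}

func main() {
	desired := map[volume]bool{
		{"router-default-5444994796-mn4hd", "stats-auth"}: true,
		{"dns-default-45lst", "metrics-tls"}:              true,
	}
	actual := map[volume]bool{}
	reconcile(desired, actual, func(volume) error { return nil })
}
```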
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.768918 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.810257 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.829114 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.848886 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.869082 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.888921 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.909550 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.928839 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.974289 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgwxf\" (UniqueName: \"kubernetes.io/projected/0f1531c1-e77c-4e97-a216-2809a7566070-kube-api-access-rgwxf\") pod \"cluster-samples-operator-665b6dd947-jxdrh\" (UID: \"0f1531c1-e77c-4e97-a216-2809a7566070\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jxdrh"
Oct 11 07:42:40 crc kubenswrapper[5016]: I1011 07:42:40.995540 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptwc8\" (UniqueName: \"kubernetes.io/projected/63c09395-8cfa-4337-8323-0a90e333579a-kube-api-access-ptwc8\") pod \"machine-api-operator-5694c8668f-m5nhn\" (UID: \"63c09395-8cfa-4337-8323-0a90e333579a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5nhn"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.010375 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzbdj\" (UniqueName: \"kubernetes.io/projected/8424bee6-8168-4c9f-b70e-5523e1990bcd-kube-api-access-lzbdj\") pod \"controller-manager-879f6c89f-gwp6t\" (UID: \"8424bee6-8168-4c9f-b70e-5523e1990bcd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-gwp6t"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.028499 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57qfq\" (UniqueName: \"kubernetes.io/projected/4b4fc34c-84fa-4a44-a585-61d852838755-kube-api-access-57qfq\") pod \"apiserver-7bbb656c7d-bhwkr\" (UID: \"4b4fc34c-84fa-4a44-a585-61d852838755\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.046768 5016 request.go:700] Waited for 1.9156115s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/serviceaccounts/machine-approver-sa/token
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.057878 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5l9k\" (UniqueName: \"kubernetes.io/projected/699f1f0c-fc1d-4599-97a8-a135238977b4-kube-api-access-l5l9k\") pod \"route-controller-manager-6576b87f9c-sqrgb\" (UID: \"699f1f0c-fc1d-4599-97a8-a135238977b4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.057917 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-m5nhn"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.066109 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gwp6t"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.071842 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v4s4\" (UniqueName: \"kubernetes.io/projected/b3f53591-5cb2-488a-b327-c41c05c5845f-kube-api-access-6v4s4\") pod \"machine-approver-56656f9798-4vcjt\" (UID: \"b3f53591-5cb2-488a-b327-c41c05c5845f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4vcjt"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.098365 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hbgv\" (UniqueName: \"kubernetes.io/projected/fee3401d-bf88-49cd-b228-a4e89c6dd40e-kube-api-access-9hbgv\") pod \"downloads-7954f5f757-znwnv\" (UID: \"fee3401d-bf88-49cd-b228-a4e89c6dd40e\") " pod="openshift-console/downloads-7954f5f757-znwnv"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.098935 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.106771 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2bd9\" (UniqueName: \"kubernetes.io/projected/b941f947-2402-495b-8808-0f91ab9433e0-kube-api-access-b2bd9\") pod \"console-operator-58897d9998-mcbqj\" (UID: \"b941f947-2402-495b-8808-0f91ab9433e0\") " pod="openshift-console-operator/console-operator-58897d9998-mcbqj"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.129172 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-659tq\" (UniqueName: \"kubernetes.io/projected/1f0257f9-e7c9-4951-ba43-4ba90a80c1e1-kube-api-access-659tq\") pod \"apiserver-76f77b778f-jp4qx\" (UID: \"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1\") " pod="openshift-apiserver/apiserver-76f77b778f-jp4qx"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.129857 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-znwnv"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.144216 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jxdrh"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.148376 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.165546 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc4l8\" (UniqueName: \"kubernetes.io/projected/b5191ad1-f211-45d1-a108-8d45b9d427f6-kube-api-access-gc4l8\") pod \"etcd-operator-b45778765-cz2f7\" (UID: \"b5191ad1-f211-45d1-a108-8d45b9d427f6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-cz2f7"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.170816 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znfrd\" (UniqueName: \"kubernetes.io/projected/394f8f2b-fe85-414f-ab93-670b5291ac1b-kube-api-access-znfrd\") pod \"openshift-apiserver-operator-796bbdcf4f-4jcvc\" (UID: \"394f8f2b-fe85-414f-ab93-670b5291ac1b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcvc"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.190708 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98hf7\" (UniqueName: \"kubernetes.io/projected/b2c1d5e1-166e-49ee-8134-5ba60fceaf56-kube-api-access-98hf7\") pod \"authentication-operator-69f744f599-fsx9v\" (UID: \"b2c1d5e1-166e-49ee-8134-5ba60fceaf56\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-fsx9v"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.192915 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mcbqj"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.215242 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lc5q\" (UniqueName: \"kubernetes.io/projected/2a8034ba-7018-481a-862e-8f21457cc04f-kube-api-access-4lc5q\") pod \"openshift-config-operator-7777fb866f-vz5gw\" (UID: \"2a8034ba-7018-481a-862e-8f21457cc04f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-vz5gw"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.248853 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49wqf\" (UniqueName: \"kubernetes.io/projected/cebf51e6-36da-416a-9f26-d312d6118895-kube-api-access-49wqf\") pod \"catalog-operator-68c6474976-g8p9j\" (UID: \"cebf51e6-36da-416a-9f26-d312d6118895\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8p9j"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.265571 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a9154515-364c-477a-8471-cf3d40b138b2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-dqhld\" (UID: \"a9154515-364c-477a-8471-cf3d40b138b2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dqhld"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.294031 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5pqb\" (UniqueName: \"kubernetes.io/projected/c4a84d0a-86cb-46a4-9ef9-d2d9b2e712d5-kube-api-access-d5pqb\") pod \"olm-operator-6b444d44fb-sv2tw\" (UID: \"c4a84d0a-86cb-46a4-9ef9-d2d9b2e712d5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sv2tw"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.305513 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsl2b\" (UniqueName: \"kubernetes.io/projected/76f2e3c8-c16d-4a3e-85d9-25cc30605ea0-kube-api-access-xsl2b\") pod \"router-default-5444994796-mn4hd\" (UID: \"76f2e3c8-c16d-4a3e-85d9-25cc30605ea0\") " pod="openshift-ingress/router-default-5444994796-mn4hd"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.307265 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jp4qx"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.310509 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sv2tw"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.332071 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfsxn\" (UniqueName: \"kubernetes.io/projected/a2fca8b5-8ccb-4100-8570-82b07bdae3ee-kube-api-access-lfsxn\") pod \"collect-profiles-29336130-mtlhx\" (UID: \"a2fca8b5-8ccb-4100-8570-82b07bdae3ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336130-mtlhx"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.345307 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gwp6t"]
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.353136 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4vcjt"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.354519 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krs66\" (UniqueName: \"kubernetes.io/projected/87501447-4d19-45a7-bb18-636d9cec793e-kube-api-access-krs66\") pod \"machine-config-controller-84d6567774-52nk2\" (UID: \"87501447-4d19-45a7-bb18-636d9cec793e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52nk2"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.362798 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5v8s\" (UniqueName: \"kubernetes.io/projected/4c0a1c1b-7182-49e6-b027-d766ec67481d-kube-api-access-x5v8s\") pod \"dns-default-45lst\" (UID: \"4c0a1c1b-7182-49e6-b027-d766ec67481d\") " pod="openshift-dns/dns-default-45lst"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.382645 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dqhld"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.382979 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8p9j"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.386229 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptrrx\" (UniqueName: \"kubernetes.io/projected/319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2-kube-api-access-ptrrx\") pod \"csi-hostpathplugin-b65rs\" (UID: \"319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2\") " pod="hostpath-provisioner/csi-hostpathplugin-b65rs"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.404499 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lb4m\" (UniqueName: \"kubernetes.io/projected/76eba8ae-c2d4-49ac-9cb1-4e7256c144ff-kube-api-access-9lb4m\") pod \"packageserver-d55dfcdfc-psqvq\" (UID: \"76eba8ae-c2d4-49ac-9cb1-4e7256c144ff\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psqvq"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.407076 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336130-mtlhx"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.416510 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-mn4hd"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.420826 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcvc"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.422129 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-45lst"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.455723 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-b65rs"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.456961 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-fsx9v"
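Each "No sandbox for pod can be found" line means the kubelet will ask the container runtime (CRI-O here) over the CRI gRPC API to create a pod sandbox. A minimal sketch of that call; the socket path, pod metadata, and error handling are illustrative, and the kubelet's actual sandbox creation goes through its internal CRI client rather than code like this:

```go
// Sketch: the CRI RunPodSandbox call that follows "Need to start a new one".
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default socket on OpenShift nodes; adjust for other runtimes.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "machine-api-operator-5694c8668f-m5nhn",
				Namespace: "openshift-machine-api",
				Uid:       "example-uid", // illustrative placeholder
			},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId)
}
```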
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-cz2f7" Oct 11 07:42:41 crc kubenswrapper[5016]: W1011 07:42:41.467150 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76f2e3c8_c16d_4a3e_85d9_25cc30605ea0.slice/crio-9383c1624e5b341517984c811b1fc0f9d7b89f88bd62f1d9c4f19e075f1c11b3 WatchSource:0}: Error finding container 9383c1624e5b341517984c811b1fc0f9d7b89f88bd62f1d9c4f19e075f1c11b3: Status 404 returned error can't find the container with id 9383c1624e5b341517984c811b1fc0f9d7b89f88bd62f1d9c4f19e075f1c11b3 Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.467833 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqrjd\" (UniqueName: \"kubernetes.io/projected/56ca2fcf-3f5b-4074-9d84-12f089e816a9-kube-api-access-vqrjd\") pod \"cluster-image-registry-operator-dc59b4c8b-d5bvg\" (UID: \"56ca2fcf-3f5b-4074-9d84-12f089e816a9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5bvg" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.467895 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7c212d37-a525-4cf4-a484-1a719dc3237d-metrics-tls\") pod \"ingress-operator-5b745b69d9-qj85m\" (UID: \"7c212d37-a525-4cf4-a484-1a719dc3237d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qj85m" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.467941 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7c212d37-a525-4cf4-a484-1a719dc3237d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qj85m\" (UID: \"7c212d37-a525-4cf4-a484-1a719dc3237d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qj85m" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.468027 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b1ed3ee2-21f7-4552-97ae-1524d469aa1a-signing-cabundle\") pod \"service-ca-9c57cc56f-twpkv\" (UID: \"b1ed3ee2-21f7-4552-97ae-1524d469aa1a\") " pod="openshift-service-ca/service-ca-9c57cc56f-twpkv" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.468822 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12dfb419-e03a-48b3-b448-225f83bd8de3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5bblf\" (UID: \"12dfb419-e03a-48b3-b448-225f83bd8de3\") " pod="openshift-marketplace/marketplace-operator-79b997595-5bblf" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.468850 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-584xz\" (UniqueName: \"kubernetes.io/projected/d90a5c06-3c9f-451a-bf6f-8caa3ffdf6ff-kube-api-access-584xz\") pod \"service-ca-operator-777779d784-5c8fg\" (UID: \"d90a5c06-3c9f-451a-bf6f-8caa3ffdf6ff\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5c8fg" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.468875 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/d6352f37-0c6d-4ec1-961b-2d46944fd666-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-dh44f\" (UID: \"d6352f37-0c6d-4ec1-961b-2d46944fd666\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dh44f" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.468923 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e50901a7-482c-401b-96cf-cf925c66e918-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xsjnn\" (UID: \"e50901a7-482c-401b-96cf-cf925c66e918\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xsjnn" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.468940 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5jq2\" (UniqueName: \"kubernetes.io/projected/0b04b8b6-3686-4217-b79b-374396ed61ec-kube-api-access-w5jq2\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.468957 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb6630cb-0062-4461-bf51-c45f7e4e7478-console-serving-cert\") pod \"console-f9d7485db-vmvvh\" (UID: \"eb6630cb-0062-4461-bf51-c45f7e4e7478\") " pod="openshift-console/console-f9d7485db-vmvvh" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.468995 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrghg\" (UniqueName: \"kubernetes.io/projected/12dfb419-e03a-48b3-b448-225f83bd8de3-kube-api-access-wrghg\") pod \"marketplace-operator-79b997595-5bblf\" (UID: \"12dfb419-e03a-48b3-b448-225f83bd8de3\") " pod="openshift-marketplace/marketplace-operator-79b997595-5bblf" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469021 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/12dfb419-e03a-48b3-b448-225f83bd8de3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5bblf\" (UID: \"12dfb419-e03a-48b3-b448-225f83bd8de3\") " pod="openshift-marketplace/marketplace-operator-79b997595-5bblf" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469042 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/56ca2fcf-3f5b-4074-9d84-12f089e816a9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-d5bvg\" (UID: \"56ca2fcf-3f5b-4074-9d84-12f089e816a9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5bvg" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469060 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qsrl\" (UniqueName: \"kubernetes.io/projected/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-kube-api-access-8qsrl\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469083 5016 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdrpt\" (UniqueName: \"kubernetes.io/projected/b1ed3ee2-21f7-4552-97ae-1524d469aa1a-kube-api-access-kdrpt\") pod \"service-ca-9c57cc56f-twpkv\" (UID: \"b1ed3ee2-21f7-4552-97ae-1524d469aa1a\") " pod="openshift-service-ca/service-ca-9c57cc56f-twpkv" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469122 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4858920c-4a23-4c7c-9b69-3bdd7d4d5ac5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-z44tx\" (UID: \"4858920c-4a23-4c7c-9b69-3bdd7d4d5ac5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z44tx" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469139 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469155 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469180 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-registry-certificates\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469206 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9qls\" (UniqueName: \"kubernetes.io/projected/7c212d37-a525-4cf4-a484-1a719dc3237d-kube-api-access-r9qls\") pod \"ingress-operator-5b745b69d9-qj85m\" (UID: \"7c212d37-a525-4cf4-a484-1a719dc3237d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qj85m" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469227 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eb6630cb-0062-4461-bf51-c45f7e4e7478-service-ca\") pod \"console-f9d7485db-vmvvh\" (UID: \"eb6630cb-0062-4461-bf51-c45f7e4e7478\") " pod="openshift-console/console-f9d7485db-vmvvh" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469251 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb6630cb-0062-4461-bf51-c45f7e4e7478-trusted-ca-bundle\") pod \"console-f9d7485db-vmvvh\" (UID: \"eb6630cb-0062-4461-bf51-c45f7e4e7478\") " pod="openshift-console/console-f9d7485db-vmvvh" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469272 5016 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469312 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-installation-pull-secrets\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469333 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/eb6630cb-0062-4461-bf51-c45f7e4e7478-console-oauth-config\") pod \"console-f9d7485db-vmvvh\" (UID: \"eb6630cb-0062-4461-bf51-c45f7e4e7478\") " pod="openshift-console/console-f9d7485db-vmvvh" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469354 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0b04b8b6-3686-4217-b79b-374396ed61ec-audit-policies\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469388 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56ca2fcf-3f5b-4074-9d84-12f089e816a9-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-d5bvg\" (UID: \"56ca2fcf-3f5b-4074-9d84-12f089e816a9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5bvg" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469404 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/eb6630cb-0062-4461-bf51-c45f7e4e7478-console-config\") pod \"console-f9d7485db-vmvvh\" (UID: \"eb6630cb-0062-4461-bf51-c45f7e4e7478\") " pod="openshift-console/console-f9d7485db-vmvvh" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469418 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469435 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ddda55d-dea4-490a-bdc6-a004fb25358c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-k5jkz\" (UID: \"5ddda55d-dea4-490a-bdc6-a004fb25358c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k5jkz" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469451 5016 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/548549e8-8855-421a-95d7-f57b74ae500a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tq4ks\" (UID: \"548549e8-8855-421a-95d7-f57b74ae500a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tq4ks" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469467 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-registry-tls\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469483 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469518 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c212d37-a525-4cf4-a484-1a719dc3237d-trusted-ca\") pod \"ingress-operator-5b745b69d9-qj85m\" (UID: \"7c212d37-a525-4cf4-a484-1a719dc3237d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qj85m" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469534 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469557 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/eb6630cb-0062-4461-bf51-c45f7e4e7478-oauth-serving-cert\") pod \"console-f9d7485db-vmvvh\" (UID: \"eb6630cb-0062-4461-bf51-c45f7e4e7478\") " pod="openshift-console/console-f9d7485db-vmvvh" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469575 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469591 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56ca2fcf-3f5b-4074-9d84-12f089e816a9-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-d5bvg\" (UID: \"56ca2fcf-3f5b-4074-9d84-12f089e816a9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5bvg" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469607 5016 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6352f37-0c6d-4ec1-961b-2d46944fd666-config\") pod \"kube-controller-manager-operator-78b949d7b-dh44f\" (UID: \"d6352f37-0c6d-4ec1-961b-2d46944fd666\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dh44f" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469622 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdwd8\" (UniqueName: \"kubernetes.io/projected/5ddda55d-dea4-490a-bdc6-a004fb25358c-kube-api-access-kdwd8\") pod \"kube-storage-version-migrator-operator-b67b599dd-k5jkz\" (UID: \"5ddda55d-dea4-490a-bdc6-a004fb25358c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k5jkz" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469675 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjthb\" (UniqueName: \"kubernetes.io/projected/4858920c-4a23-4c7c-9b69-3bdd7d4d5ac5-kube-api-access-xjthb\") pod \"package-server-manager-789f6589d5-z44tx\" (UID: \"4858920c-4a23-4c7c-9b69-3bdd7d4d5ac5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z44tx" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469691 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ddda55d-dea4-490a-bdc6-a004fb25358c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-k5jkz\" (UID: \"5ddda55d-dea4-490a-bdc6-a004fb25358c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k5jkz" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469705 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6352f37-0c6d-4ec1-961b-2d46944fd666-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-dh44f\" (UID: \"d6352f37-0c6d-4ec1-961b-2d46944fd666\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dh44f" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469730 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj8v8\" (UniqueName: \"kubernetes.io/projected/fee2cde4-61cd-42ab-a6c6-8dbe8e5a123e-kube-api-access-fj8v8\") pod \"migrator-59844c95c7-6n96l\" (UID: \"fee2cde4-61cd-42ab-a6c6-8dbe8e5a123e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6n96l" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469745 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2ab53702-6616-46f5-aa33-fe13d748abb2-images\") pod \"machine-config-operator-74547568cd-zftpc\" (UID: \"2ab53702-6616-46f5-aa33-fe13d748abb2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zftpc" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469781 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ab53702-6616-46f5-aa33-fe13d748abb2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zftpc\" 
(UID: \"2ab53702-6616-46f5-aa33-fe13d748abb2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zftpc" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469800 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469816 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b1ed3ee2-21f7-4552-97ae-1524d469aa1a-signing-key\") pod \"service-ca-9c57cc56f-twpkv\" (UID: \"b1ed3ee2-21f7-4552-97ae-1524d469aa1a\") " pod="openshift-service-ca/service-ca-9c57cc56f-twpkv" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469847 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469864 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e14139c-7a42-440e-b494-f2a6283a1acd-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-msb9j\" (UID: \"6e14139c-7a42-440e-b494-f2a6283a1acd\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-msb9j" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469878 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d90a5c06-3c9f-451a-bf6f-8caa3ffdf6ff-config\") pod \"service-ca-operator-777779d784-5c8fg\" (UID: \"d90a5c06-3c9f-451a-bf6f-8caa3ffdf6ff\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5c8fg" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.469917 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.471453 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d48zc\" (UniqueName: \"kubernetes.io/projected/a64d3a16-dcff-45cd-b0ff-c783d34728c8-kube-api-access-d48zc\") pod \"dns-operator-744455d44c-t5tt6\" (UID: \"a64d3a16-dcff-45cd-b0ff-c783d34728c8\") " pod="openshift-dns-operator/dns-operator-744455d44c-t5tt6" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.471454 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vz5gw" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.471553 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-ca-trust-extracted\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.471754 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-bound-sa-token\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.471780 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72g68\" (UniqueName: \"kubernetes.io/projected/e50901a7-482c-401b-96cf-cf925c66e918-kube-api-access-72g68\") pod \"openshift-controller-manager-operator-756b6f6bc6-xsjnn\" (UID: \"e50901a7-482c-401b-96cf-cf925c66e918\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xsjnn" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.471896 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.472019 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d87479dc-a437-4c51-8d14-5f6ef03f3220-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hvh49\" (UID: \"d87479dc-a437-4c51-8d14-5f6ef03f3220\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hvh49" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.472164 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2ab53702-6616-46f5-aa33-fe13d748abb2-proxy-tls\") pod \"machine-config-operator-74547568cd-zftpc\" (UID: \"2ab53702-6616-46f5-aa33-fe13d748abb2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zftpc" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.472195 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm66b\" (UniqueName: \"kubernetes.io/projected/eb6630cb-0062-4461-bf51-c45f7e4e7478-kube-api-access-rm66b\") pod \"console-f9d7485db-vmvvh\" (UID: \"eb6630cb-0062-4461-bf51-c45f7e4e7478\") " pod="openshift-console/console-f9d7485db-vmvvh" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.472220 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d87479dc-a437-4c51-8d14-5f6ef03f3220-serving-cert\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-hvh49\" (UID: \"d87479dc-a437-4c51-8d14-5f6ef03f3220\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hvh49" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.472243 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-trusted-ca\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.472263 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6zfs\" (UniqueName: \"kubernetes.io/projected/548549e8-8855-421a-95d7-f57b74ae500a-kube-api-access-w6zfs\") pod \"multus-admission-controller-857f4d67dd-tq4ks\" (UID: \"548549e8-8855-421a-95d7-f57b74ae500a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tq4ks" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.472281 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcjmh\" (UniqueName: \"kubernetes.io/projected/2ab53702-6616-46f5-aa33-fe13d748abb2-kube-api-access-xcjmh\") pod \"machine-config-operator-74547568cd-zftpc\" (UID: \"2ab53702-6616-46f5-aa33-fe13d748abb2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zftpc" Oct 11 07:42:41 crc kubenswrapper[5016]: E1011 07:42:41.472320 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:41.972307776 +0000 UTC m=+149.872763712 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.472343 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b04b8b6-3686-4217-b79b-374396ed61ec-audit-dir\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.472566 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e50901a7-482c-401b-96cf-cf925c66e918-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xsjnn\" (UID: \"e50901a7-482c-401b-96cf-cf925c66e918\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xsjnn"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.472610 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d90a5c06-3c9f-451a-bf6f-8caa3ffdf6ff-serving-cert\") pod \"service-ca-operator-777779d784-5c8fg\" (UID: \"d90a5c06-3c9f-451a-bf6f-8caa3ffdf6ff\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5c8fg"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.472633 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a64d3a16-dcff-45cd-b0ff-c783d34728c8-metrics-tls\") pod \"dns-operator-744455d44c-t5tt6\" (UID: \"a64d3a16-dcff-45cd-b0ff-c783d34728c8\") " pod="openshift-dns-operator/dns-operator-744455d44c-t5tt6"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.472674 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.472894 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d87479dc-a437-4c51-8d14-5f6ef03f3220-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hvh49\" (UID: \"d87479dc-a437-4c51-8d14-5f6ef03f3220\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hvh49"
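The E1011 nestedpendingoperations entry above is kubelet's retry gate for volume operations: after a failure it refuses to re-run the same operation until a per-volume backoff expires, which is why the log says "No retries permitted until ... (durationBeforeRetry 500ms)". Below is a minimal, self-contained Go sketch of that behavior. Only the 500ms initial delay is taken from this log; the doubling on consecutive failures and the cap are assumptions modeled on kubelet's exponential-backoff defaults, and every name in the sketch is illustrative rather than kubelet source.

package main

import (
	"fmt"
	"time"
)

// Assumed backoff constants, modeled on kubelet's defaults for volume
// operations; only the 500ms initial delay is confirmed by the log above.
const (
	initialDurationBeforeRetry = 500 * time.Millisecond
	maxDurationBeforeRetry     = 2*time.Minute + 2*time.Second
)

// pendingOperation tracks one failed volume operation, keyed in kubelet
// by volume name (here, the hostpath-provisioner PVC).
type pendingOperation struct {
	lastErrorTime       time.Time
	durationBeforeRetry time.Duration
}

// fail records a failed attempt: the first failure waits the initial
// delay, and each consecutive failure doubles the wait up to the cap.
func (p *pendingOperation) fail(now time.Time) {
	if p.durationBeforeRetry == 0 {
		p.durationBeforeRetry = initialDurationBeforeRetry
	} else {
		p.durationBeforeRetry *= 2
		if p.durationBeforeRetry > maxDurationBeforeRetry {
			p.durationBeforeRetry = maxDurationBeforeRetry
		}
	}
	p.lastErrorTime = now
}

// retryPermitted mirrors the "No retries permitted until <t>" check.
func (p *pendingOperation) retryPermitted(now time.Time) bool {
	return now.After(p.lastErrorTime.Add(p.durationBeforeRetry))
}

func main() {
	var op pendingOperation
	now := time.Now()
	for attempt := 1; attempt <= 4; attempt++ {
		op.fail(now)
		fmt.Printf("attempt %d failed; no retries permitted until %s (durationBeforeRetry %v)\n",
			attempt, op.lastErrorTime.Add(op.durationBeforeRetry).Format(time.RFC3339), op.durationBeforeRetry)
		// Pretend the next attempt happens right when the backoff expires.
		now = op.lastErrorTime.Add(op.durationBeforeRetry)
	}
}

The backoff itself is benign; the mount keeps failing for the same underlying reason (the unregistered CSI driver, see below) and typically resolves once the provisioner's own pod starts and registers.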
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-msb9j" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.569440 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb"] Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.573387 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52nk2" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.574771 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.575294 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4858920c-4a23-4c7c-9b69-3bdd7d4d5ac5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-z44tx\" (UID: \"4858920c-4a23-4c7c-9b69-3bdd7d4d5ac5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z44tx" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.575321 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.575358 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.575386 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-registry-certificates\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.575432 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9qls\" (UniqueName: \"kubernetes.io/projected/7c212d37-a525-4cf4-a484-1a719dc3237d-kube-api-access-r9qls\") pod \"ingress-operator-5b745b69d9-qj85m\" (UID: \"7c212d37-a525-4cf4-a484-1a719dc3237d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qj85m" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.575447 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eb6630cb-0062-4461-bf51-c45f7e4e7478-service-ca\") pod \"console-f9d7485db-vmvvh\" (UID: \"eb6630cb-0062-4461-bf51-c45f7e4e7478\") " pod="openshift-console/console-f9d7485db-vmvvh" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 
07:42:41.575514 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb6630cb-0062-4461-bf51-c45f7e4e7478-trusted-ca-bundle\") pod \"console-f9d7485db-vmvvh\" (UID: \"eb6630cb-0062-4461-bf51-c45f7e4e7478\") " pod="openshift-console/console-f9d7485db-vmvvh" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.575530 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.575577 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/eb6630cb-0062-4461-bf51-c45f7e4e7478-console-oauth-config\") pod \"console-f9d7485db-vmvvh\" (UID: \"eb6630cb-0062-4461-bf51-c45f7e4e7478\") " pod="openshift-console/console-f9d7485db-vmvvh" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.575594 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0b04b8b6-3686-4217-b79b-374396ed61ec-audit-policies\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.575609 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-installation-pull-secrets\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.575671 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/eb6630cb-0062-4461-bf51-c45f7e4e7478-console-config\") pod \"console-f9d7485db-vmvvh\" (UID: \"eb6630cb-0062-4461-bf51-c45f7e4e7478\") " pod="openshift-console/console-f9d7485db-vmvvh" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.575687 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.575705 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ddda55d-dea4-490a-bdc6-a004fb25358c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-k5jkz\" (UID: \"5ddda55d-dea4-490a-bdc6-a004fb25358c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k5jkz" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.575747 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/56ca2fcf-3f5b-4074-9d84-12f089e816a9-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-d5bvg\" (UID: \"56ca2fcf-3f5b-4074-9d84-12f089e816a9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5bvg" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.575761 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/548549e8-8855-421a-95d7-f57b74ae500a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tq4ks\" (UID: \"548549e8-8855-421a-95d7-f57b74ae500a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tq4ks" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.575783 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-registry-tls\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577094 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577142 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c212d37-a525-4cf4-a484-1a719dc3237d-trusted-ca\") pod \"ingress-operator-5b745b69d9-qj85m\" (UID: \"7c212d37-a525-4cf4-a484-1a719dc3237d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qj85m" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577160 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577203 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/eb6630cb-0062-4461-bf51-c45f7e4e7478-oauth-serving-cert\") pod \"console-f9d7485db-vmvvh\" (UID: \"eb6630cb-0062-4461-bf51-c45f7e4e7478\") " pod="openshift-console/console-f9d7485db-vmvvh" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577225 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577241 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6352f37-0c6d-4ec1-961b-2d46944fd666-config\") pod \"kube-controller-manager-operator-78b949d7b-dh44f\" (UID: 
\"d6352f37-0c6d-4ec1-961b-2d46944fd666\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dh44f" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577256 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdwd8\" (UniqueName: \"kubernetes.io/projected/5ddda55d-dea4-490a-bdc6-a004fb25358c-kube-api-access-kdwd8\") pod \"kube-storage-version-migrator-operator-b67b599dd-k5jkz\" (UID: \"5ddda55d-dea4-490a-bdc6-a004fb25358c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k5jkz" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577281 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56ca2fcf-3f5b-4074-9d84-12f089e816a9-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-d5bvg\" (UID: \"56ca2fcf-3f5b-4074-9d84-12f089e816a9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5bvg" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577321 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/13c6ab26-b0c2-4a8f-aada-8845d88c408a-node-bootstrap-token\") pod \"machine-config-server-fj8bb\" (UID: \"13c6ab26-b0c2-4a8f-aada-8845d88c408a\") " pod="openshift-machine-config-operator/machine-config-server-fj8bb" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577358 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjthb\" (UniqueName: \"kubernetes.io/projected/4858920c-4a23-4c7c-9b69-3bdd7d4d5ac5-kube-api-access-xjthb\") pod \"package-server-manager-789f6589d5-z44tx\" (UID: \"4858920c-4a23-4c7c-9b69-3bdd7d4d5ac5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z44tx" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577373 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ddda55d-dea4-490a-bdc6-a004fb25358c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-k5jkz\" (UID: \"5ddda55d-dea4-490a-bdc6-a004fb25358c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k5jkz" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577390 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6352f37-0c6d-4ec1-961b-2d46944fd666-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-dh44f\" (UID: \"d6352f37-0c6d-4ec1-961b-2d46944fd666\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dh44f" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577425 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj8v8\" (UniqueName: \"kubernetes.io/projected/fee2cde4-61cd-42ab-a6c6-8dbe8e5a123e-kube-api-access-fj8v8\") pod \"migrator-59844c95c7-6n96l\" (UID: \"fee2cde4-61cd-42ab-a6c6-8dbe8e5a123e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6n96l" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577441 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2ab53702-6616-46f5-aa33-fe13d748abb2-images\") pod 
\"machine-config-operator-74547568cd-zftpc\" (UID: \"2ab53702-6616-46f5-aa33-fe13d748abb2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zftpc" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577461 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ab53702-6616-46f5-aa33-fe13d748abb2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zftpc\" (UID: \"2ab53702-6616-46f5-aa33-fe13d748abb2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zftpc" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577487 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577502 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d861cd75-fbdc-403a-8538-903cd646ce64-cert\") pod \"ingress-canary-g74hb\" (UID: \"d861cd75-fbdc-403a-8538-903cd646ce64\") " pod="openshift-ingress-canary/ingress-canary-g74hb" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577535 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b1ed3ee2-21f7-4552-97ae-1524d469aa1a-signing-key\") pod \"service-ca-9c57cc56f-twpkv\" (UID: \"b1ed3ee2-21f7-4552-97ae-1524d469aa1a\") " pod="openshift-service-ca/service-ca-9c57cc56f-twpkv" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577569 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577606 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e14139c-7a42-440e-b494-f2a6283a1acd-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-msb9j\" (UID: \"6e14139c-7a42-440e-b494-f2a6283a1acd\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-msb9j" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577623 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d90a5c06-3c9f-451a-bf6f-8caa3ffdf6ff-config\") pod \"service-ca-operator-777779d784-5c8fg\" (UID: \"d90a5c06-3c9f-451a-bf6f-8caa3ffdf6ff\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5c8fg" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577638 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv8rd\" (UniqueName: \"kubernetes.io/projected/13c6ab26-b0c2-4a8f-aada-8845d88c408a-kube-api-access-qv8rd\") pod \"machine-config-server-fj8bb\" (UID: 
\"13c6ab26-b0c2-4a8f-aada-8845d88c408a\") " pod="openshift-machine-config-operator/machine-config-server-fj8bb" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577679 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577706 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d48zc\" (UniqueName: \"kubernetes.io/projected/a64d3a16-dcff-45cd-b0ff-c783d34728c8-kube-api-access-d48zc\") pod \"dns-operator-744455d44c-t5tt6\" (UID: \"a64d3a16-dcff-45cd-b0ff-c783d34728c8\") " pod="openshift-dns-operator/dns-operator-744455d44c-t5tt6" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577732 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-ca-trust-extracted\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577757 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-bound-sa-token\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577772 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72g68\" (UniqueName: \"kubernetes.io/projected/e50901a7-482c-401b-96cf-cf925c66e918-kube-api-access-72g68\") pod \"openshift-controller-manager-operator-756b6f6bc6-xsjnn\" (UID: \"e50901a7-482c-401b-96cf-cf925c66e918\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xsjnn" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577806 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d87479dc-a437-4c51-8d14-5f6ef03f3220-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hvh49\" (UID: \"d87479dc-a437-4c51-8d14-5f6ef03f3220\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hvh49" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577839 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2ab53702-6616-46f5-aa33-fe13d748abb2-proxy-tls\") pod \"machine-config-operator-74547568cd-zftpc\" (UID: \"2ab53702-6616-46f5-aa33-fe13d748abb2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zftpc" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577855 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm66b\" (UniqueName: \"kubernetes.io/projected/eb6630cb-0062-4461-bf51-c45f7e4e7478-kube-api-access-rm66b\") pod \"console-f9d7485db-vmvvh\" (UID: \"eb6630cb-0062-4461-bf51-c45f7e4e7478\") " 
pod="openshift-console/console-f9d7485db-vmvvh" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577887 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d87479dc-a437-4c51-8d14-5f6ef03f3220-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hvh49\" (UID: \"d87479dc-a437-4c51-8d14-5f6ef03f3220\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hvh49" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577920 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6zfs\" (UniqueName: \"kubernetes.io/projected/548549e8-8855-421a-95d7-f57b74ae500a-kube-api-access-w6zfs\") pod \"multus-admission-controller-857f4d67dd-tq4ks\" (UID: \"548549e8-8855-421a-95d7-f57b74ae500a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tq4ks" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.577936 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcjmh\" (UniqueName: \"kubernetes.io/projected/2ab53702-6616-46f5-aa33-fe13d748abb2-kube-api-access-xcjmh\") pod \"machine-config-operator-74547568cd-zftpc\" (UID: \"2ab53702-6616-46f5-aa33-fe13d748abb2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zftpc" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.578288 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b04b8b6-3686-4217-b79b-374396ed61ec-audit-dir\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.578306 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/13c6ab26-b0c2-4a8f-aada-8845d88c408a-certs\") pod \"machine-config-server-fj8bb\" (UID: \"13c6ab26-b0c2-4a8f-aada-8845d88c408a\") " pod="openshift-machine-config-operator/machine-config-server-fj8bb" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.578325 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-trusted-ca\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.578341 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e50901a7-482c-401b-96cf-cf925c66e918-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xsjnn\" (UID: \"e50901a7-482c-401b-96cf-cf925c66e918\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xsjnn" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.578357 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d90a5c06-3c9f-451a-bf6f-8caa3ffdf6ff-serving-cert\") pod \"service-ca-operator-777779d784-5c8fg\" (UID: \"d90a5c06-3c9f-451a-bf6f-8caa3ffdf6ff\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5c8fg" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.578371 5016 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a64d3a16-dcff-45cd-b0ff-c783d34728c8-metrics-tls\") pod \"dns-operator-744455d44c-t5tt6\" (UID: \"a64d3a16-dcff-45cd-b0ff-c783d34728c8\") " pod="openshift-dns-operator/dns-operator-744455d44c-t5tt6" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.578387 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.578439 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d87479dc-a437-4c51-8d14-5f6ef03f3220-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hvh49\" (UID: \"d87479dc-a437-4c51-8d14-5f6ef03f3220\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hvh49" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.578455 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjd2s\" (UniqueName: \"kubernetes.io/projected/6e14139c-7a42-440e-b494-f2a6283a1acd-kube-api-access-gjd2s\") pod \"control-plane-machine-set-operator-78cbb6b69f-msb9j\" (UID: \"6e14139c-7a42-440e-b494-f2a6283a1acd\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-msb9j" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.578470 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7c212d37-a525-4cf4-a484-1a719dc3237d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qj85m\" (UID: \"7c212d37-a525-4cf4-a484-1a719dc3237d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qj85m" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.578488 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqrjd\" (UniqueName: \"kubernetes.io/projected/56ca2fcf-3f5b-4074-9d84-12f089e816a9-kube-api-access-vqrjd\") pod \"cluster-image-registry-operator-dc59b4c8b-d5bvg\" (UID: \"56ca2fcf-3f5b-4074-9d84-12f089e816a9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5bvg" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.578503 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7c212d37-a525-4cf4-a484-1a719dc3237d-metrics-tls\") pod \"ingress-operator-5b745b69d9-qj85m\" (UID: \"7c212d37-a525-4cf4-a484-1a719dc3237d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qj85m" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.578537 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b1ed3ee2-21f7-4552-97ae-1524d469aa1a-signing-cabundle\") pod \"service-ca-9c57cc56f-twpkv\" (UID: \"b1ed3ee2-21f7-4552-97ae-1524d469aa1a\") " pod="openshift-service-ca/service-ca-9c57cc56f-twpkv" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.578552 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/12dfb419-e03a-48b3-b448-225f83bd8de3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5bblf\" (UID: \"12dfb419-e03a-48b3-b448-225f83bd8de3\") " pod="openshift-marketplace/marketplace-operator-79b997595-5bblf" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.578576 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-584xz\" (UniqueName: \"kubernetes.io/projected/d90a5c06-3c9f-451a-bf6f-8caa3ffdf6ff-kube-api-access-584xz\") pod \"service-ca-operator-777779d784-5c8fg\" (UID: \"d90a5c06-3c9f-451a-bf6f-8caa3ffdf6ff\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5c8fg" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.578577 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb6630cb-0062-4461-bf51-c45f7e4e7478-trusted-ca-bundle\") pod \"console-f9d7485db-vmvvh\" (UID: \"eb6630cb-0062-4461-bf51-c45f7e4e7478\") " pod="openshift-console/console-f9d7485db-vmvvh" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.578600 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6352f37-0c6d-4ec1-961b-2d46944fd666-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-dh44f\" (UID: \"d6352f37-0c6d-4ec1-961b-2d46944fd666\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dh44f" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.578691 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e50901a7-482c-401b-96cf-cf925c66e918-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xsjnn\" (UID: \"e50901a7-482c-401b-96cf-cf925c66e918\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xsjnn" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.578719 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5jq2\" (UniqueName: \"kubernetes.io/projected/0b04b8b6-3686-4217-b79b-374396ed61ec-kube-api-access-w5jq2\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.578762 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb6630cb-0062-4461-bf51-c45f7e4e7478-console-serving-cert\") pod \"console-f9d7485db-vmvvh\" (UID: \"eb6630cb-0062-4461-bf51-c45f7e4e7478\") " pod="openshift-console/console-f9d7485db-vmvvh" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.578785 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrghg\" (UniqueName: \"kubernetes.io/projected/12dfb419-e03a-48b3-b448-225f83bd8de3-kube-api-access-wrghg\") pod \"marketplace-operator-79b997595-5bblf\" (UID: \"12dfb419-e03a-48b3-b448-225f83bd8de3\") " pod="openshift-marketplace/marketplace-operator-79b997595-5bblf" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.578806 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txx8k\" (UniqueName: \"kubernetes.io/projected/d861cd75-fbdc-403a-8538-903cd646ce64-kube-api-access-txx8k\") pod 
\"ingress-canary-g74hb\" (UID: \"d861cd75-fbdc-403a-8538-903cd646ce64\") " pod="openshift-ingress-canary/ingress-canary-g74hb" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.578829 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/12dfb419-e03a-48b3-b448-225f83bd8de3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5bblf\" (UID: \"12dfb419-e03a-48b3-b448-225f83bd8de3\") " pod="openshift-marketplace/marketplace-operator-79b997595-5bblf" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.578876 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/56ca2fcf-3f5b-4074-9d84-12f089e816a9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-d5bvg\" (UID: \"56ca2fcf-3f5b-4074-9d84-12f089e816a9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5bvg" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.578900 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qsrl\" (UniqueName: \"kubernetes.io/projected/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-kube-api-access-8qsrl\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.578918 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdrpt\" (UniqueName: \"kubernetes.io/projected/b1ed3ee2-21f7-4552-97ae-1524d469aa1a-kube-api-access-kdrpt\") pod \"service-ca-9c57cc56f-twpkv\" (UID: \"b1ed3ee2-21f7-4552-97ae-1524d469aa1a\") " pod="openshift-service-ca/service-ca-9c57cc56f-twpkv" Oct 11 07:42:41 crc kubenswrapper[5016]: E1011 07:42:41.579148 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:42.07913596 +0000 UTC m=+149.979591896 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.579356 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.581646 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-m5nhn"]
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.582715 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0b04b8b6-3686-4217-b79b-374396ed61ec-audit-policies\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.584513 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/eb6630cb-0062-4461-bf51-c45f7e4e7478-console-oauth-config\") pod \"console-f9d7485db-vmvvh\" (UID: \"eb6630cb-0062-4461-bf51-c45f7e4e7478\") " pod="openshift-console/console-f9d7485db-vmvvh"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.584573 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b04b8b6-3686-4217-b79b-374396ed61ec-audit-dir\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.584714 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c212d37-a525-4cf4-a484-1a719dc3237d-trusted-ca\") pod \"ingress-operator-5b745b69d9-qj85m\" (UID: \"7c212d37-a525-4cf4-a484-1a719dc3237d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qj85m"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.585272 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d90a5c06-3c9f-451a-bf6f-8caa3ffdf6ff-config\") pod \"service-ca-operator-777779d784-5c8fg\" (UID: \"d90a5c06-3c9f-451a-bf6f-8caa3ffdf6ff\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5c8fg"
Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.587351 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-ca-trust-extracted\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp"
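Both CSI failures in this stream, MountDevice (E1011 07:42:41.472320) and the TearDown above (E1011 07:42:41.579148), fail at the same step: kubelet asks for a CSI client by driver name, and kubevirt.io.hostpath-provisioner has not yet announced itself over the kubelet plugin-registration mechanism. The Go sketch below is a simplified, hypothetical model of that lookup, not kubelet's actual implementation; only the error string mirrors the log, and the socket path is illustrative.

package main

import (
	"fmt"
	"sync"
)

// csiDriverRegistry is a hypothetical stand-in for kubelet's in-memory
// map of registered CSI plugins; a driver appears here only after its
// registration handshake with the kubelet completes.
type csiDriverRegistry struct {
	mu        sync.RWMutex
	endpoints map[string]string // driver name -> unix socket path
}

// newCsiDriverClient fails the way the log does when the driver has not
// registered yet; both the MountDevice and TearDownAt paths hit this
// lookup and leave the retry to the backoff sketched earlier.
func (r *csiDriverRegistry) newCsiDriverClient(driver string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	ep, ok := r.endpoints[driver]
	if !ok {
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", driver)
	}
	return ep, nil
}

func main() {
	reg := &csiDriverRegistry{endpoints: map[string]string{}}

	// Before registration: exactly the failure mode in the log.
	if _, err := reg.newCsiDriverClient("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("attacher.MountDevice failed to create newCsiDriverClient:", err)
	}

	// After registration (socket path is made up for the example), the
	// same lookup succeeds and the pending mount/unmount can proceed.
	reg.mu.Lock()
	reg.endpoints["kubevirt.io.hostpath-provisioner"] = "/var/lib/kubelet/plugins_registry/example.sock"
	reg.mu.Unlock()
	if ep, err := reg.newCsiDriverClient("kubevirt.io.hostpath-provisioner"); err == nil {
		fmt.Println("driver registered at", ep)
	}
}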
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ddda55d-dea4-490a-bdc6-a004fb25358c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-k5jkz\" (UID: \"5ddda55d-dea4-490a-bdc6-a004fb25358c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k5jkz" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.589817 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/eb6630cb-0062-4461-bf51-c45f7e4e7478-console-config\") pod \"console-f9d7485db-vmvvh\" (UID: \"eb6630cb-0062-4461-bf51-c45f7e4e7478\") " pod="openshift-console/console-f9d7485db-vmvvh" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.590048 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d87479dc-a437-4c51-8d14-5f6ef03f3220-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hvh49\" (UID: \"d87479dc-a437-4c51-8d14-5f6ef03f3220\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hvh49" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.591043 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-installation-pull-secrets\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.592770 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-trusted-ca\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.608939 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6352f37-0c6d-4ec1-961b-2d46944fd666-config\") pod \"kube-controller-manager-operator-78b949d7b-dh44f\" (UID: \"d6352f37-0c6d-4ec1-961b-2d46944fd666\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dh44f" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.609010 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sv2tw"] Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.609133 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e14139c-7a42-440e-b494-f2a6283a1acd-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-msb9j\" (UID: \"6e14139c-7a42-440e-b494-f2a6283a1acd\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-msb9j" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.611894 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ddda55d-dea4-490a-bdc6-a004fb25358c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-k5jkz\" (UID: \"5ddda55d-dea4-490a-bdc6-a004fb25358c\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k5jkz" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.612358 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-registry-tls\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.612539 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eb6630cb-0062-4461-bf51-c45f7e4e7478-service-ca\") pod \"console-f9d7485db-vmvvh\" (UID: \"eb6630cb-0062-4461-bf51-c45f7e4e7478\") " pod="openshift-console/console-f9d7485db-vmvvh" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.613162 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb6630cb-0062-4461-bf51-c45f7e4e7478-console-serving-cert\") pod \"console-f9d7485db-vmvvh\" (UID: \"eb6630cb-0062-4461-bf51-c45f7e4e7478\") " pod="openshift-console/console-f9d7485db-vmvvh" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.614967 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-registry-certificates\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.617863 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/eb6630cb-0062-4461-bf51-c45f7e4e7478-oauth-serving-cert\") pod \"console-f9d7485db-vmvvh\" (UID: \"eb6630cb-0062-4461-bf51-c45f7e4e7478\") " pod="openshift-console/console-f9d7485db-vmvvh" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.617962 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2ab53702-6616-46f5-aa33-fe13d748abb2-proxy-tls\") pod \"machine-config-operator-74547568cd-zftpc\" (UID: \"2ab53702-6616-46f5-aa33-fe13d748abb2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zftpc" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.618365 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e50901a7-482c-401b-96cf-cf925c66e918-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-xsjnn\" (UID: \"e50901a7-482c-401b-96cf-cf925c66e918\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xsjnn" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.618591 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.618617 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/12dfb419-e03a-48b3-b448-225f83bd8de3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5bblf\" (UID: \"12dfb419-e03a-48b3-b448-225f83bd8de3\") " pod="openshift-marketplace/marketplace-operator-79b997595-5bblf" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.619137 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.619306 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12dfb419-e03a-48b3-b448-225f83bd8de3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5bblf\" (UID: \"12dfb419-e03a-48b3-b448-225f83bd8de3\") " pod="openshift-marketplace/marketplace-operator-79b997595-5bblf" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.621529 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e50901a7-482c-401b-96cf-cf925c66e918-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-xsjnn\" (UID: \"e50901a7-482c-401b-96cf-cf925c66e918\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xsjnn" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.621680 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b1ed3ee2-21f7-4552-97ae-1524d469aa1a-signing-cabundle\") pod \"service-ca-9c57cc56f-twpkv\" (UID: \"b1ed3ee2-21f7-4552-97ae-1524d469aa1a\") " pod="openshift-service-ca/service-ca-9c57cc56f-twpkv" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.622091 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.622324 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56ca2fcf-3f5b-4074-9d84-12f089e816a9-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-d5bvg\" (UID: \"56ca2fcf-3f5b-4074-9d84-12f089e816a9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5bvg" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.622775 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7c212d37-a525-4cf4-a484-1a719dc3237d-metrics-tls\") pod \"ingress-operator-5b745b69d9-qj85m\" (UID: \"7c212d37-a525-4cf4-a484-1a719dc3237d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qj85m" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.623388 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2ab53702-6616-46f5-aa33-fe13d748abb2-images\") pod \"machine-config-operator-74547568cd-zftpc\" (UID: \"2ab53702-6616-46f5-aa33-fe13d748abb2\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zftpc" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.624232 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.624775 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.625148 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.627506 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ab53702-6616-46f5-aa33-fe13d748abb2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zftpc\" (UID: \"2ab53702-6616-46f5-aa33-fe13d748abb2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zftpc" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.627862 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.628009 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.630301 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/56ca2fcf-3f5b-4074-9d84-12f089e816a9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-d5bvg\" (UID: \"56ca2fcf-3f5b-4074-9d84-12f089e816a9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5bvg" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.630502 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: 
\"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.630635 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d6352f37-0c6d-4ec1-961b-2d46944fd666-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-dh44f\" (UID: \"d6352f37-0c6d-4ec1-961b-2d46944fd666\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dh44f" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.631072 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d87479dc-a437-4c51-8d14-5f6ef03f3220-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hvh49\" (UID: \"d87479dc-a437-4c51-8d14-5f6ef03f3220\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hvh49" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.631260 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b1ed3ee2-21f7-4552-97ae-1524d469aa1a-signing-key\") pod \"service-ca-9c57cc56f-twpkv\" (UID: \"b1ed3ee2-21f7-4552-97ae-1524d469aa1a\") " pod="openshift-service-ca/service-ca-9c57cc56f-twpkv" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.632374 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6352f37-0c6d-4ec1-961b-2d46944fd666-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-dh44f\" (UID: \"d6352f37-0c6d-4ec1-961b-2d46944fd666\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dh44f" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.634314 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a64d3a16-dcff-45cd-b0ff-c783d34728c8-metrics-tls\") pod \"dns-operator-744455d44c-t5tt6\" (UID: \"a64d3a16-dcff-45cd-b0ff-c783d34728c8\") " pod="openshift-dns-operator/dns-operator-744455d44c-t5tt6" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.635164 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.639178 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d90a5c06-3c9f-451a-bf6f-8caa3ffdf6ff-serving-cert\") pod \"service-ca-operator-777779d784-5c8fg\" (UID: \"d90a5c06-3c9f-451a-bf6f-8caa3ffdf6ff\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5c8fg" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.646354 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-znwnv"] Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.648403 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4858920c-4a23-4c7c-9b69-3bdd7d4d5ac5-package-server-manager-serving-cert\") pod 
\"package-server-manager-789f6589d5-z44tx\" (UID: \"4858920c-4a23-4c7c-9b69-3bdd7d4d5ac5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z44tx" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.651342 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psqvq" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.652262 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/548549e8-8855-421a-95d7-f57b74ae500a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-tq4ks\" (UID: \"548549e8-8855-421a-95d7-f57b74ae500a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tq4ks" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.655187 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdrpt\" (UniqueName: \"kubernetes.io/projected/b1ed3ee2-21f7-4552-97ae-1524d469aa1a-kube-api-access-kdrpt\") pod \"service-ca-9c57cc56f-twpkv\" (UID: \"b1ed3ee2-21f7-4552-97ae-1524d469aa1a\") " pod="openshift-service-ca/service-ca-9c57cc56f-twpkv" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.666292 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr"] Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.666955 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jxdrh"] Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.671149 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcjmh\" (UniqueName: \"kubernetes.io/projected/2ab53702-6616-46f5-aa33-fe13d748abb2-kube-api-access-xcjmh\") pod \"machine-config-operator-74547568cd-zftpc\" (UID: \"2ab53702-6616-46f5-aa33-fe13d748abb2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zftpc" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.683137 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/13c6ab26-b0c2-4a8f-aada-8845d88c408a-node-bootstrap-token\") pod \"machine-config-server-fj8bb\" (UID: \"13c6ab26-b0c2-4a8f-aada-8845d88c408a\") " pod="openshift-machine-config-operator/machine-config-server-fj8bb" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.684917 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/56ca2fcf-3f5b-4074-9d84-12f089e816a9-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-d5bvg\" (UID: \"56ca2fcf-3f5b-4074-9d84-12f089e816a9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5bvg" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.684978 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d861cd75-fbdc-403a-8538-903cd646ce64-cert\") pod \"ingress-canary-g74hb\" (UID: \"d861cd75-fbdc-403a-8538-903cd646ce64\") " pod="openshift-ingress-canary/ingress-canary-g74hb" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.685207 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv8rd\" (UniqueName: \"kubernetes.io/projected/13c6ab26-b0c2-4a8f-aada-8845d88c408a-kube-api-access-qv8rd\") pod \"machine-config-server-fj8bb\" 
(UID: \"13c6ab26-b0c2-4a8f-aada-8845d88c408a\") " pod="openshift-machine-config-operator/machine-config-server-fj8bb" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.685314 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.685388 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/13c6ab26-b0c2-4a8f-aada-8845d88c408a-certs\") pod \"machine-config-server-fj8bb\" (UID: \"13c6ab26-b0c2-4a8f-aada-8845d88c408a\") " pod="openshift-machine-config-operator/machine-config-server-fj8bb" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.685546 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txx8k\" (UniqueName: \"kubernetes.io/projected/d861cd75-fbdc-403a-8538-903cd646ce64-kube-api-access-txx8k\") pod \"ingress-canary-g74hb\" (UID: \"d861cd75-fbdc-403a-8538-903cd646ce64\") " pod="openshift-ingress-canary/ingress-canary-g74hb" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.686813 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d861cd75-fbdc-403a-8538-903cd646ce64-cert\") pod \"ingress-canary-g74hb\" (UID: \"d861cd75-fbdc-403a-8538-903cd646ce64\") " pod="openshift-ingress-canary/ingress-canary-g74hb" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.686815 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/13c6ab26-b0c2-4a8f-aada-8845d88c408a-node-bootstrap-token\") pod \"machine-config-server-fj8bb\" (UID: \"13c6ab26-b0c2-4a8f-aada-8845d88c408a\") " pod="openshift-machine-config-operator/machine-config-server-fj8bb" Oct 11 07:42:41 crc kubenswrapper[5016]: E1011 07:42:41.687271 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:42.187192321 +0000 UTC m=+150.087648267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.688415 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/13c6ab26-b0c2-4a8f-aada-8845d88c408a-certs\") pod \"machine-config-server-fj8bb\" (UID: \"13c6ab26-b0c2-4a8f-aada-8845d88c408a\") " pod="openshift-machine-config-operator/machine-config-server-fj8bb" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.695112 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-twpkv" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.707580 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9qls\" (UniqueName: \"kubernetes.io/projected/7c212d37-a525-4cf4-a484-1a719dc3237d-kube-api-access-r9qls\") pod \"ingress-operator-5b745b69d9-qj85m\" (UID: \"7c212d37-a525-4cf4-a484-1a719dc3237d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qj85m" Oct 11 07:42:41 crc kubenswrapper[5016]: W1011 07:42:41.723694 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfee3401d_bf88_49cd_b228_a4e89c6dd40e.slice/crio-4b0ed1a38ad8f3c2e8aa7a1ff014ddfa636a405a256e70cb15a12d1279a0e851 WatchSource:0}: Error finding container 4b0ed1a38ad8f3c2e8aa7a1ff014ddfa636a405a256e70cb15a12d1279a0e851: Status 404 returned error can't find the container with id 4b0ed1a38ad8f3c2e8aa7a1ff014ddfa636a405a256e70cb15a12d1279a0e851 Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.751850 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-bound-sa-token\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.754104 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jp4qx"] Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.756141 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mcbqj"] Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.756166 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d48zc\" (UniqueName: \"kubernetes.io/projected/a64d3a16-dcff-45cd-b0ff-c783d34728c8-kube-api-access-d48zc\") pod \"dns-operator-744455d44c-t5tt6\" (UID: \"a64d3a16-dcff-45cd-b0ff-c783d34728c8\") " pod="openshift-dns-operator/dns-operator-744455d44c-t5tt6" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.774529 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72g68\" (UniqueName: \"kubernetes.io/projected/e50901a7-482c-401b-96cf-cf925c66e918-kube-api-access-72g68\") pod \"openshift-controller-manager-operator-756b6f6bc6-xsjnn\" (UID: \"e50901a7-482c-401b-96cf-cf925c66e918\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xsjnn" Oct 11 07:42:41 crc kubenswrapper[5016]: W1011 07:42:41.782542 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f0257f9_e7c9_4951_ba43_4ba90a80c1e1.slice/crio-ed032b9943e769428250cbfa97496b9f451a789c659c8069ad3214fd77f4c01e WatchSource:0}: Error finding container ed032b9943e769428250cbfa97496b9f451a789c659c8069ad3214fd77f4c01e: Status 404 returned error can't find the container with id ed032b9943e769428250cbfa97496b9f451a789c659c8069ad3214fd77f4c01e Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.787130 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5jq2\" (UniqueName: \"kubernetes.io/projected/0b04b8b6-3686-4217-b79b-374396ed61ec-kube-api-access-w5jq2\") pod \"oauth-openshift-558db77b4-6mhg9\" (UID: 
\"0b04b8b6-3686-4217-b79b-374396ed61ec\") " pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.787864 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:41 crc kubenswrapper[5016]: E1011 07:42:41.788277 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:42.288263203 +0000 UTC m=+150.188719139 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.788883 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcvc"] Oct 11 07:42:41 crc kubenswrapper[5016]: W1011 07:42:41.793541 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb941f947_2402_495b_8808_0f91ab9433e0.slice/crio-b7434ac6e6e0287af41d2a46def69811fe174b69e7ea1d04933d047de4e1392b WatchSource:0}: Error finding container b7434ac6e6e0287af41d2a46def69811fe174b69e7ea1d04933d047de4e1392b: Status 404 returned error can't find the container with id b7434ac6e6e0287af41d2a46def69811fe174b69e7ea1d04933d047de4e1392b Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.793735 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.799778 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xsjnn" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.804548 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjthb\" (UniqueName: \"kubernetes.io/projected/4858920c-4a23-4c7c-9b69-3bdd7d4d5ac5-kube-api-access-xjthb\") pod \"package-server-manager-789f6589d5-z44tx\" (UID: \"4858920c-4a23-4c7c-9b69-3bdd7d4d5ac5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z44tx" Oct 11 07:42:41 crc kubenswrapper[5016]: W1011 07:42:41.818673 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod394f8f2b_fe85_414f_ab93_670b5291ac1b.slice/crio-fa8f00e0a242deeeda2149f45119ce066afabcf682da8a8395af81e8300efcfd WatchSource:0}: Error finding container fa8f00e0a242deeeda2149f45119ce066afabcf682da8a8395af81e8300efcfd: Status 404 returned error can't find the container with id fa8f00e0a242deeeda2149f45119ce066afabcf682da8a8395af81e8300efcfd Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.819240 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-t5tt6" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.831068 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dqhld"] Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.842342 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7c212d37-a525-4cf4-a484-1a719dc3237d-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qj85m\" (UID: \"7c212d37-a525-4cf4-a484-1a719dc3237d\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qj85m" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.846958 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdwd8\" (UniqueName: \"kubernetes.io/projected/5ddda55d-dea4-490a-bdc6-a004fb25358c-kube-api-access-kdwd8\") pod \"kube-storage-version-migrator-operator-b67b599dd-k5jkz\" (UID: \"5ddda55d-dea4-490a-bdc6-a004fb25358c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k5jkz" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.876854 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqrjd\" (UniqueName: \"kubernetes.io/projected/56ca2fcf-3f5b-4074-9d84-12f089e816a9-kube-api-access-vqrjd\") pod \"cluster-image-registry-operator-dc59b4c8b-d5bvg\" (UID: \"56ca2fcf-3f5b-4074-9d84-12f089e816a9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5bvg" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.881705 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.884345 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qsrl\" (UniqueName: \"kubernetes.io/projected/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-kube-api-access-8qsrl\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.889286 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:41 crc kubenswrapper[5016]: E1011 07:42:41.889864 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:42.389853669 +0000 UTC m=+150.290309605 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.893940 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dh44f" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.906086 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm66b\" (UniqueName: \"kubernetes.io/projected/eb6630cb-0062-4461-bf51-c45f7e4e7478-kube-api-access-rm66b\") pod \"console-f9d7485db-vmvvh\" (UID: \"eb6630cb-0062-4461-bf51-c45f7e4e7478\") " pod="openshift-console/console-f9d7485db-vmvvh" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.925314 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj8v8\" (UniqueName: \"kubernetes.io/projected/fee2cde4-61cd-42ab-a6c6-8dbe8e5a123e-kube-api-access-fj8v8\") pod \"migrator-59844c95c7-6n96l\" (UID: \"fee2cde4-61cd-42ab-a6c6-8dbe8e5a123e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6n96l" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.935952 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-znwnv" event={"ID":"fee3401d-bf88-49cd-b228-a4e89c6dd40e","Type":"ContainerStarted","Data":"4b0ed1a38ad8f3c2e8aa7a1ff014ddfa636a405a256e70cb15a12d1279a0e851"} Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.937290 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zftpc" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.950478 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-45lst"] Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.951906 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gwp6t" event={"ID":"8424bee6-8168-4c9f-b70e-5523e1990bcd","Type":"ContainerStarted","Data":"27e0d60c457ad8d031f9a08665cb4e549230e2b02c660c9b3c7c23449de35baa"} Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.951942 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gwp6t" event={"ID":"8424bee6-8168-4c9f-b70e-5523e1990bcd","Type":"ContainerStarted","Data":"109b53f176ad5b7297f23394f60c2748a53cbd30abb9f5368d75e90583b86294"} Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.951958 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8p9j"] Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.952823 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-gwp6t" Oct 11 07:42:41 crc kubenswrapper[5016]: W1011 07:42:41.953151 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9154515_364c_477a_8471_cf3d40b138b2.slice/crio-f11aa1ccd6cadfeb1d814ebf462134cd82a52e12ade99e9e52c84e82852cd422 WatchSource:0}: Error finding container f11aa1ccd6cadfeb1d814ebf462134cd82a52e12ade99e9e52c84e82852cd422: Status 404 returned error can't find the container with id f11aa1ccd6cadfeb1d814ebf462134cd82a52e12ade99e9e52c84e82852cd422 Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.956949 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" event={"ID":"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1","Type":"ContainerStarted","Data":"ed032b9943e769428250cbfa97496b9f451a789c659c8069ad3214fd77f4c01e"} Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.958011 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z44tx" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.958166 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb" event={"ID":"699f1f0c-fc1d-4599-97a8-a135238977b4","Type":"ContainerStarted","Data":"cf292dd6de08a66d854924eb307bfc7e9d0354ae398e767686890c49df9f4e52"} Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.961080 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6zfs\" (UniqueName: \"kubernetes.io/projected/548549e8-8855-421a-95d7-f57b74ae500a-kube-api-access-w6zfs\") pod \"multus-admission-controller-857f4d67dd-tq4ks\" (UID: \"548549e8-8855-421a-95d7-f57b74ae500a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-tq4ks" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.961223 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sv2tw" event={"ID":"c4a84d0a-86cb-46a4-9ef9-d2d9b2e712d5","Type":"ContainerStarted","Data":"9b670f5483090fd198852d6e42f4f76d404f68bedeb99600f8917a6181cec43a"} Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.966775 5016 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-gwp6t container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.966833 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-gwp6t" podUID="8424bee6-8168-4c9f-b70e-5523e1990bcd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.971113 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-584xz\" (UniqueName: \"kubernetes.io/projected/d90a5c06-3c9f-451a-bf6f-8caa3ffdf6ff-kube-api-access-584xz\") pod \"service-ca-operator-777779d784-5c8fg\" (UID: \"d90a5c06-3c9f-451a-bf6f-8caa3ffdf6ff\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-5c8fg" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.972696 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5c8fg" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.977443 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-m5nhn" event={"ID":"63c09395-8cfa-4337-8323-0a90e333579a","Type":"ContainerStarted","Data":"b975156e127bd0cf67d303a5724a94284dbfe154367958830635ca688e93dce6"} Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.984960 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrghg\" (UniqueName: \"kubernetes.io/projected/12dfb419-e03a-48b3-b448-225f83bd8de3-kube-api-access-wrghg\") pod \"marketplace-operator-79b997595-5bblf\" (UID: \"12dfb419-e03a-48b3-b448-225f83bd8de3\") " pod="openshift-marketplace/marketplace-operator-79b997595-5bblf" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.986465 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" event={"ID":"4b4fc34c-84fa-4a44-a585-61d852838755","Type":"ContainerStarted","Data":"c8b2a49a475792ee0bd1f8bdb54d789e44d6aa81d0d7191e6e1c11f5f583c39f"} Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.990553 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5bblf" Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.991357 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:41 crc kubenswrapper[5016]: E1011 07:42:41.991850 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:42.491830387 +0000 UTC m=+150.392286333 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.992409 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4vcjt" event={"ID":"b3f53591-5cb2-488a-b327-c41c05c5845f","Type":"ContainerStarted","Data":"7cc2cb91ab47650f4efd4579f7e15294d63fef4727d4c878e8c5fc253f07302a"} Oct 11 07:42:41 crc kubenswrapper[5016]: I1011 07:42:41.992447 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4vcjt" event={"ID":"b3f53591-5cb2-488a-b327-c41c05c5845f","Type":"ContainerStarted","Data":"7a451dcbb4d03b92ab02acd728c984353b585e1b39a113ab778ff10bf19acd06"} Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.001143 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k5jkz" Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.005117 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-52nk2"] Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.011438 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d87479dc-a437-4c51-8d14-5f6ef03f3220-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hvh49\" (UID: \"d87479dc-a437-4c51-8d14-5f6ef03f3220\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hvh49" Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.033292 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjd2s\" (UniqueName: \"kubernetes.io/projected/6e14139c-7a42-440e-b494-f2a6283a1acd-kube-api-access-gjd2s\") pod \"control-plane-machine-set-operator-78cbb6b69f-msb9j\" (UID: \"6e14139c-7a42-440e-b494-f2a6283a1acd\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-msb9j" Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.033489 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mcbqj" event={"ID":"b941f947-2402-495b-8808-0f91ab9433e0","Type":"ContainerStarted","Data":"b7434ac6e6e0287af41d2a46def69811fe174b69e7ea1d04933d047de4e1392b"} Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.036813 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336130-mtlhx"] Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.045354 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-b65rs"] Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.055922 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jxdrh" event={"ID":"0f1531c1-e77c-4e97-a216-2809a7566070","Type":"ContainerStarted","Data":"2b5ddeedae6e6c75830edbcdbed362c9dceda9c33732c0eea494c837a8048bcc"} Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.057289 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-twpkv"] Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.057814 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-fsx9v"] Oct 11 07:42:42 crc kubenswrapper[5016]: W1011 07:42:42.083622 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87501447_4d19_45a7_bb18_636d9cec793e.slice/crio-57d6154321633cba49262d5674686c4260b9c6fb9f342d041c5f6a764325401c WatchSource:0}: Error finding container 57d6154321633cba49262d5674686c4260b9c6fb9f342d041c5f6a764325401c: Status 404 returned error can't find the container with id 57d6154321633cba49262d5674686c4260b9c6fb9f342d041c5f6a764325401c Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.093819 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv8rd\" (UniqueName: \"kubernetes.io/projected/13c6ab26-b0c2-4a8f-aada-8845d88c408a-kube-api-access-qv8rd\") pod \"machine-config-server-fj8bb\" (UID: \"13c6ab26-b0c2-4a8f-aada-8845d88c408a\") 
" pod="openshift-machine-config-operator/machine-config-server-fj8bb" Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.094357 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:42 crc kubenswrapper[5016]: E1011 07:42:42.095932 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:42.595921406 +0000 UTC m=+150.496377352 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.097100 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txx8k\" (UniqueName: \"kubernetes.io/projected/d861cd75-fbdc-403a-8538-903cd646ce64-kube-api-access-txx8k\") pod \"ingress-canary-g74hb\" (UID: \"d861cd75-fbdc-403a-8538-903cd646ce64\") " pod="openshift-ingress-canary/ingress-canary-g74hb" Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.108131 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcvc" event={"ID":"394f8f2b-fe85-414f-ab93-670b5291ac1b","Type":"ContainerStarted","Data":"fa8f00e0a242deeeda2149f45119ce066afabcf682da8a8395af81e8300efcfd"} Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.112551 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-mn4hd" event={"ID":"76f2e3c8-c16d-4a3e-85d9-25cc30605ea0","Type":"ContainerStarted","Data":"22507e8ac17190a596ab483e0643e93f37c5a6bc1360bd5ea0b1d8a477e5006f"} Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.112596 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-mn4hd" event={"ID":"76f2e3c8-c16d-4a3e-85d9-25cc30605ea0","Type":"ContainerStarted","Data":"9383c1624e5b341517984c811b1fc0f9d7b89f88bd62f1d9c4f19e075f1c11b3"} Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.114055 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-vmvvh" Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.127475 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qj85m" Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.129420 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psqvq"] Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.172172 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5bvg" Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.176766 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-vz5gw"] Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.183101 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-cz2f7"] Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.188393 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-msb9j" Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.195928 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:42 crc kubenswrapper[5016]: E1011 07:42:42.196802 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:42.696771242 +0000 UTC m=+150.597227208 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.200232 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hvh49" Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.217878 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6n96l" Oct 11 07:42:42 crc kubenswrapper[5016]: W1011 07:42:42.224849 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76eba8ae_c2d4_49ac_9cb1_4e7256c144ff.slice/crio-ade358b776d12c032770281ce37a28914bd3288f7155fc6a7bdcf4b19333bc5a WatchSource:0}: Error finding container ade358b776d12c032770281ce37a28914bd3288f7155fc6a7bdcf4b19333bc5a: Status 404 returned error can't find the container with id ade358b776d12c032770281ce37a28914bd3288f7155fc6a7bdcf4b19333bc5a Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.237889 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-tq4ks" Oct 11 07:42:42 crc kubenswrapper[5016]: W1011 07:42:42.276980 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a8034ba_7018_481a_862e_8f21457cc04f.slice/crio-c7acd54e62ba25839a7307cd563f60c4124c4dcc988323ae31fff6ead721917c WatchSource:0}: Error finding container c7acd54e62ba25839a7307cd563f60c4124c4dcc988323ae31fff6ead721917c: Status 404 returned error can't find the container with id c7acd54e62ba25839a7307cd563f60c4124c4dcc988323ae31fff6ead721917c Oct 11 07:42:42 crc kubenswrapper[5016]: W1011 07:42:42.282365 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5191ad1_f211_45d1_a108_8d45b9d427f6.slice/crio-1aef60a2134a6e31c1b38756f6def3074455acea6490a61c53c0cf0418e4c742 WatchSource:0}: Error finding container 1aef60a2134a6e31c1b38756f6def3074455acea6490a61c53c0cf0418e4c742: Status 404 returned error can't find the container with id 1aef60a2134a6e31c1b38756f6def3074455acea6490a61c53c0cf0418e4c742 Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.299451 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:42 crc kubenswrapper[5016]: E1011 07:42:42.299953 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:42.799937392 +0000 UTC m=+150.700393338 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.363152 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fj8bb" Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.369115 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-g74hb" Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.400434 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t5tt6"] Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.402485 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:42 crc kubenswrapper[5016]: E1011 07:42:42.403128 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:42.903102954 +0000 UTC m=+150.803558900 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.418123 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-mn4hd" Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.423095 5016 patch_prober.go:28] interesting pod/router-default-5444994796-mn4hd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 07:42:42 crc kubenswrapper[5016]: [-]has-synced failed: reason withheld Oct 11 07:42:42 crc kubenswrapper[5016]: [+]process-running ok Oct 11 07:42:42 crc kubenswrapper[5016]: healthz check failed Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.424498 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mn4hd" podUID="76f2e3c8-c16d-4a3e-85d9-25cc30605ea0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.449733 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xsjnn"] Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.462238 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-gwp6t" podStartSLOduration=128.462220612 podStartE2EDuration="2m8.462220612s" podCreationTimestamp="2025-10-11 07:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:42.46142006 +0000 UTC m=+150.361875996" watchObservedRunningTime="2025-10-11 07:42:42.462220612 +0000 UTC m=+150.362676558" Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.504123 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:42 crc kubenswrapper[5016]: E1011 07:42:42.504414 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:43.004402593 +0000 UTC m=+150.904858529 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.529324 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dh44f"] Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.608664 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:42 crc kubenswrapper[5016]: E1011 07:42:42.608835 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:43.108809539 +0000 UTC m=+151.009265485 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.609195 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:42 crc kubenswrapper[5016]: E1011 07:42:42.609520 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:43.109508779 +0000 UTC m=+151.009964725 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:42 crc kubenswrapper[5016]: W1011 07:42:42.655024 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6352f37_0c6d_4ec1_961b_2d46944fd666.slice/crio-f380749d63666673883f99848da1fa179fcc36ee03ae28ca5fdcb289f7ac9ca5 WatchSource:0}: Error finding container f380749d63666673883f99848da1fa179fcc36ee03ae28ca5fdcb289f7ac9ca5: Status 404 returned error can't find the container with id f380749d63666673883f99848da1fa179fcc36ee03ae28ca5fdcb289f7ac9ca5 Oct 11 07:42:42 crc kubenswrapper[5016]: W1011 07:42:42.657739 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode50901a7_482c_401b_96cf_cf925c66e918.slice/crio-cba94772b4de4e202282a276778109fc13c6f61a563e6ebe935518fdd22992a5 WatchSource:0}: Error finding container cba94772b4de4e202282a276778109fc13c6f61a563e6ebe935518fdd22992a5: Status 404 returned error can't find the container with id cba94772b4de4e202282a276778109fc13c6f61a563e6ebe935518fdd22992a5 Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.709973 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:42 crc kubenswrapper[5016]: E1011 07:42:42.710538 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:43.210524169 +0000 UTC m=+151.110980115 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.813725 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:42 crc kubenswrapper[5016]: E1011 07:42:42.814050 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-11 07:42:43.314036231 +0000 UTC m=+151.214492177 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.857166 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5bblf"] Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.872091 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5bvg"] Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.878808 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-5c8fg"] Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.907252 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zftpc"] Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.911064 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z44tx"] Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.915033 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:42 crc kubenswrapper[5016]: E1011 07:42:42.915387 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:43.415368081 +0000 UTC m=+151.315824027 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.931313 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-vmvvh"] Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.932816 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6mhg9"] Oct 11 07:42:42 crc kubenswrapper[5016]: I1011 07:42:42.988895 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qj85m"] Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.019464 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.019551 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.019570 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.019596 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.019615 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:42:43 crc kubenswrapper[5016]: E1011 07:42:43.021451 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:43.521436665 +0000 UTC m=+151.421892611 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.025098 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.030904 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.032649 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.040401 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:42:43 crc kubenswrapper[5016]: W1011 07:42:43.083621 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ab53702_6616_46f5_aa33_fe13d748abb2.slice/crio-b03aea0c258286aad781f4c3a835abd45c1c3ab611b4ea84bfc855c05b0d41e2 WatchSource:0}: Error finding container b03aea0c258286aad781f4c3a835abd45c1c3ab611b4ea84bfc855c05b0d41e2: Status 404 returned error can't find the container with id b03aea0c258286aad781f4c3a835abd45c1c3ab611b4ea84bfc855c05b0d41e2 Oct 11 07:42:43 crc kubenswrapper[5016]: W1011 07:42:43.094056 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b04b8b6_3686_4217_b79b_374396ed61ec.slice/crio-247ee07f4d08af680aedc3c4ce74b3b44802a7b8aca50c60142a59695f7b03c4 WatchSource:0}: Error finding container 247ee07f4d08af680aedc3c4ce74b3b44802a7b8aca50c60142a59695f7b03c4: Status 404 returned error can't find the container with id 247ee07f4d08af680aedc3c4ce74b3b44802a7b8aca50c60142a59695f7b03c4 Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.120396 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:43 crc kubenswrapper[5016]: E1011 07:42:43.121472 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:43.621447156 +0000 UTC m=+151.521903102 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:43 crc kubenswrapper[5016]: W1011 07:42:43.183426 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c212d37_a525_4cf4_a484_1a719dc3237d.slice/crio-a37545703bf7ef45f5341264fd869f7c789c1f6a1fdcaa08399f085b9094462d WatchSource:0}: Error finding container a37545703bf7ef45f5341264fd869f7c789c1f6a1fdcaa08399f085b9094462d: Status 404 returned error can't find the container with id a37545703bf7ef45f5341264fd869f7c789c1f6a1fdcaa08399f085b9094462d Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.189023 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-mn4hd" podStartSLOduration=128.188979842 podStartE2EDuration="2m8.188979842s" podCreationTimestamp="2025-10-11 07:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:43.185427032 +0000 UTC m=+151.085882978" watchObservedRunningTime="2025-10-11 07:42:43.188979842 +0000 UTC m=+151.089435788" Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.222614 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:43 crc kubenswrapper[5016]: E1011 07:42:43.222974 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:43.722963071 +0000 UTC m=+151.623419017 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.253728 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.265648 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.265998 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.299345 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z44tx" event={"ID":"4858920c-4a23-4c7c-9b69-3bdd7d4d5ac5","Type":"ContainerStarted","Data":"fee2ac544cf22c88adbd08f5fdd04fe26b853ded53c65597539ec0b05ee1dccb"} Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.301607 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-twpkv" event={"ID":"b1ed3ee2-21f7-4552-97ae-1524d469aa1a","Type":"ContainerStarted","Data":"d2f2b321dc8d7478a38de6f7d0546204d947ad6f596735ada165ab3844f3ed4e"} Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.301639 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-twpkv" event={"ID":"b1ed3ee2-21f7-4552-97ae-1524d469aa1a","Type":"ContainerStarted","Data":"418721a458bc17c1fd9883888dbdad344fc31355a7518be1e7f1c9e3752a40bb"} Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.306111 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k5jkz"] Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.310712 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-b65rs" event={"ID":"319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2","Type":"ContainerStarted","Data":"2ff6faa26845f5fbcaa4fe79fad6f7c75a281633f1120cbfe6e125f767faf7fc"} Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.329481 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:43 crc kubenswrapper[5016]: E1011 07:42:43.329918 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:43.829899849 +0000 UTC m=+151.730355805 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.331954 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-msb9j"] Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.334189 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8p9j" event={"ID":"cebf51e6-36da-416a-9f26-d312d6118895","Type":"ContainerStarted","Data":"797a04f71aeb64150e141db0c8d5d7b55a848adba705f86701882ceeee7f2f6e"} Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.334232 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8p9j" event={"ID":"cebf51e6-36da-416a-9f26-d312d6118895","Type":"ContainerStarted","Data":"d3fa7b62265948607f499b3ca0bd2e3fedefdd75d226560a4f9abbf7d39fdbc6"} Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.335021 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8p9j" Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.340424 5016 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-g8p9j container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.340472 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8p9j" podUID="cebf51e6-36da-416a-9f26-d312d6118895" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.351726 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4vcjt" event={"ID":"b3f53591-5cb2-488a-b327-c41c05c5845f","Type":"ContainerStarted","Data":"0e3e17d24c80dd08592f1f90c8fac33c9b60fd32f69b9f5d91aba5115a9e4d60"} Oct 11 07:42:43 crc kubenswrapper[5016]: W1011 07:42:43.364814 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e14139c_7a42_440e_b494_f2a6283a1acd.slice/crio-3b3d733f796d4c6910e6fd30aa37f0f2eae19c050d96d9a0fdee0a02225835de WatchSource:0}: Error finding container 3b3d733f796d4c6910e6fd30aa37f0f2eae19c050d96d9a0fdee0a02225835de: Status 404 returned error can't find the container with id 3b3d733f796d4c6910e6fd30aa37f0f2eae19c050d96d9a0fdee0a02225835de Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.382176 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-45lst" event={"ID":"4c0a1c1b-7182-49e6-b027-d766ec67481d","Type":"ContainerStarted","Data":"aaa8ee623b566ad4b7d204fd62b3d626b7ea473e8f109f9f10bb99c733d65325"} Oct 11 07:42:43 crc kubenswrapper[5016]: 
I1011 07:42:43.382216 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-45lst" event={"ID":"4c0a1c1b-7182-49e6-b027-d766ec67481d","Type":"ContainerStarted","Data":"980a54dffc4668aeb1ebb14dab674e7364a43ad8bb97e5d35310b2523d5db58c"} Oct 11 07:42:43 crc kubenswrapper[5016]: W1011 07:42:43.401252 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ddda55d_dea4_490a_bdc6_a004fb25358c.slice/crio-abd8a96d13c264605839fbfceab886ebf524615b6b50df1768ef01c72960d3eb WatchSource:0}: Error finding container abd8a96d13c264605839fbfceab886ebf524615b6b50df1768ef01c72960d3eb: Status 404 returned error can't find the container with id abd8a96d13c264605839fbfceab886ebf524615b6b50df1768ef01c72960d3eb Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.401900 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-tq4ks"] Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.427783 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mcbqj" event={"ID":"b941f947-2402-495b-8808-0f91ab9433e0","Type":"ContainerStarted","Data":"69f2ab8e15978bcdb766c9fcdc91128954f0ee541195c53b91581d7017495a30"} Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.427920 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-mcbqj" Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.432370 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:43 crc kubenswrapper[5016]: E1011 07:42:43.433233 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:43.933211485 +0000 UTC m=+151.833667511 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.435758 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vz5gw" event={"ID":"2a8034ba-7018-481a-862e-8f21457cc04f","Type":"ContainerStarted","Data":"c7acd54e62ba25839a7307cd563f60c4124c4dcc988323ae31fff6ead721917c"} Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.436488 5016 patch_prober.go:28] interesting pod/router-default-5444994796-mn4hd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 07:42:43 crc kubenswrapper[5016]: [-]has-synced failed: reason withheld Oct 11 07:42:43 crc kubenswrapper[5016]: [+]process-running ok Oct 11 07:42:43 crc kubenswrapper[5016]: healthz check failed Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.436516 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mn4hd" podUID="76f2e3c8-c16d-4a3e-85d9-25cc30605ea0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.439538 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-znwnv" event={"ID":"fee3401d-bf88-49cd-b228-a4e89c6dd40e","Type":"ContainerStarted","Data":"fef7a15e2ea6349a035a0c4ff162d88f0f6e95da2c6a8262dc8346e4fbb690b2"} Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.440448 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-znwnv" Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.471099 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52nk2" event={"ID":"87501447-4d19-45a7-bb18-636d9cec793e","Type":"ContainerStarted","Data":"20454236300a4a4f0876b8eac66a45924707714e12f2f4681a24d0adb93e3ad0"} Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.471155 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52nk2" event={"ID":"87501447-4d19-45a7-bb18-636d9cec793e","Type":"ContainerStarted","Data":"57d6154321633cba49262d5674686c4260b9c6fb9f342d041c5f6a764325401c"} Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.471514 5016 patch_prober.go:28] interesting pod/console-operator-58897d9998-mcbqj container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.471542 5016 patch_prober.go:28] interesting pod/downloads-7954f5f757-znwnv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Oct 11 07:42:43 crc 
kubenswrapper[5016]: I1011 07:42:43.471558 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-mcbqj" podUID="b941f947-2402-495b-8808-0f91ab9433e0" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.471575 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-znwnv" podUID="fee3401d-bf88-49cd-b228-a4e89c6dd40e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.488213 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5c8fg" event={"ID":"d90a5c06-3c9f-451a-bf6f-8caa3ffdf6ff","Type":"ContainerStarted","Data":"326ee8bf84649e45953b3ef265fb161f86acb3b08676a1f23ab3bd47ca4326b7"} Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.495519 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fj8bb" event={"ID":"13c6ab26-b0c2-4a8f-aada-8845d88c408a","Type":"ContainerStarted","Data":"1db318b2a5aa7bf4312fa62859baa880aea85b54cb526c0fc60055355f6ced90"} Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.513615 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t5tt6" event={"ID":"a64d3a16-dcff-45cd-b0ff-c783d34728c8","Type":"ContainerStarted","Data":"c7ec9834cd944223efa041a5813af96a09247e4dc7e5a7118cd85aa1107a0781"} Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.533229 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hvh49"] Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.534602 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:43 crc kubenswrapper[5016]: E1011 07:42:43.535864 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:44.035847691 +0000 UTC m=+151.936303637 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.555714 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6n96l"] Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.594984 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sv2tw" event={"ID":"c4a84d0a-86cb-46a4-9ef9-d2d9b2e712d5","Type":"ContainerStarted","Data":"b9c4facec3ad4be79ce3857dba5221f243bdc5ed6a53d2a4d9d0dfaa132638d6"} Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.596169 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sv2tw" Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.611104 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-mcbqj" podStartSLOduration=129.611080595 podStartE2EDuration="2m9.611080595s" podCreationTimestamp="2025-10-11 07:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:43.596373749 +0000 UTC m=+151.496829695" watchObservedRunningTime="2025-10-11 07:42:43.611080595 +0000 UTC m=+151.511536541" Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.630006 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sv2tw" Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.630192 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dqhld" event={"ID":"a9154515-364c-477a-8471-cf3d40b138b2","Type":"ContainerStarted","Data":"348f013cf9880c9e0c4bf3f32a2bbb649f575c01f1e769eca3899038fe1e9c9c"} Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.630211 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dqhld" event={"ID":"a9154515-364c-477a-8471-cf3d40b138b2","Type":"ContainerStarted","Data":"f11aa1ccd6cadfeb1d814ebf462134cd82a52e12ade99e9e52c84e82852cd422"} Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.639492 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:43 crc kubenswrapper[5016]: E1011 07:42:43.640531 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:44.140516125 +0000 UTC m=+152.040972071 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.641107 5016 generic.go:334] "Generic (PLEG): container finished" podID="1f0257f9-e7c9-4951-ba43-4ba90a80c1e1" containerID="4c29a70ffdf0e24a08576aa273018d0b13d8be613aeeeb2a33abfd77cd82d1f6" exitCode=0 Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.641174 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" event={"ID":"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1","Type":"ContainerDied","Data":"4c29a70ffdf0e24a08576aa273018d0b13d8be613aeeeb2a33abfd77cd82d1f6"} Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.642844 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-twpkv" podStartSLOduration=128.64282155 podStartE2EDuration="2m8.64282155s" podCreationTimestamp="2025-10-11 07:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:43.634419273 +0000 UTC m=+151.534875219" watchObservedRunningTime="2025-10-11 07:42:43.64282155 +0000 UTC m=+151.543277486" Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.679548 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-g74hb"] Oct 11 07:42:43 crc kubenswrapper[5016]: W1011 07:42:43.693364 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd87479dc_a437_4c51_8d14_5f6ef03f3220.slice/crio-84463b510b2bedb12c4da21f937d6483d0626a6f683dc98eb332b401f04ad390 WatchSource:0}: Error finding container 84463b510b2bedb12c4da21f937d6483d0626a6f683dc98eb332b401f04ad390: Status 404 returned error can't find the container with id 84463b510b2bedb12c4da21f937d6483d0626a6f683dc98eb332b401f04ad390 Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.693576 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb" event={"ID":"699f1f0c-fc1d-4599-97a8-a135238977b4","Type":"ContainerStarted","Data":"5912655979a4573f113da293667810b572866ece56e36aa6294ddcbe7c3435da"} Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.694479 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb" Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.710833 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4vcjt" podStartSLOduration=129.710796059 podStartE2EDuration="2m9.710796059s" podCreationTimestamp="2025-10-11 07:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:43.707097214 +0000 UTC m=+151.607553160" watchObservedRunningTime="2025-10-11 07:42:43.710796059 +0000 UTC m=+151.611252005" Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.750179 
5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb" Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.750604 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:43 crc kubenswrapper[5016]: E1011 07:42:43.751560 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:44.251542258 +0000 UTC m=+152.151998204 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.769997 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-znwnv" podStartSLOduration=129.769947958 podStartE2EDuration="2m9.769947958s" podCreationTimestamp="2025-10-11 07:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:43.769575897 +0000 UTC m=+151.670031843" watchObservedRunningTime="2025-10-11 07:42:43.769947958 +0000 UTC m=+151.670403904" Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.801047 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-m5nhn" event={"ID":"63c09395-8cfa-4337-8323-0a90e333579a","Type":"ContainerStarted","Data":"8174cd3cc84638c2728c638c67a3def5c858018450bc8ff19b11487b990facca"} Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.801100 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-m5nhn" event={"ID":"63c09395-8cfa-4337-8323-0a90e333579a","Type":"ContainerStarted","Data":"0b4531207228c9d3da29f9cfbd2758d5727425ffa68bf181513fd03e49f639a4"} Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.820055 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xsjnn" event={"ID":"e50901a7-482c-401b-96cf-cf925c66e918","Type":"ContainerStarted","Data":"cba94772b4de4e202282a276778109fc13c6f61a563e6ebe935518fdd22992a5"} Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.828137 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8p9j" podStartSLOduration=128.8281218 podStartE2EDuration="2m8.8281218s" podCreationTimestamp="2025-10-11 07:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:43.826134453 +0000 UTC m=+151.726590399" 
watchObservedRunningTime="2025-10-11 07:42:43.8281218 +0000 UTC m=+151.728577746" Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.833419 5016 generic.go:334] "Generic (PLEG): container finished" podID="4b4fc34c-84fa-4a44-a585-61d852838755" containerID="22fd5cf9243041474c9bdeaaddee138a2fab2c520d832d7ec677e23232fd1757" exitCode=0 Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.833530 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" event={"ID":"4b4fc34c-84fa-4a44-a585-61d852838755","Type":"ContainerDied","Data":"22fd5cf9243041474c9bdeaaddee138a2fab2c520d832d7ec677e23232fd1757"} Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.854261 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:43 crc kubenswrapper[5016]: E1011 07:42:43.855300 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:44.355271786 +0000 UTC m=+152.255727732 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.863917 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dh44f" event={"ID":"d6352f37-0c6d-4ec1-961b-2d46944fd666","Type":"ContainerStarted","Data":"f380749d63666673883f99848da1fa179fcc36ee03ae28ca5fdcb289f7ac9ca5"} Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.910898 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-fsx9v" event={"ID":"b2c1d5e1-166e-49ee-8134-5ba60fceaf56","Type":"ContainerStarted","Data":"0051e162b51e6ac0e0f06753aff2e74c2711105281f91dc45958f24cd574cbaf"} Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.910948 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-fsx9v" event={"ID":"b2c1d5e1-166e-49ee-8134-5ba60fceaf56","Type":"ContainerStarted","Data":"427b44e040e6f661a0270a89d73a0b3f52331998b2a287e9e91a425d3e525b38"} Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.944189 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-m5nhn" podStartSLOduration=128.944165175 podStartE2EDuration="2m8.944165175s" podCreationTimestamp="2025-10-11 07:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:43.941402327 +0000 UTC m=+151.841858273" watchObservedRunningTime="2025-10-11 
07:42:43.944165175 +0000 UTC m=+151.844621121" Oct 11 07:42:43 crc kubenswrapper[5016]: I1011 07:42:43.955719 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:43 crc kubenswrapper[5016]: E1011 07:42:43.957309 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:44.457275654 +0000 UTC m=+152.357731640 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.019241 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb" podStartSLOduration=129.019220742 podStartE2EDuration="2m9.019220742s" podCreationTimestamp="2025-10-11 07:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:44.017516494 +0000 UTC m=+151.917972440" watchObservedRunningTime="2025-10-11 07:42:44.019220742 +0000 UTC m=+151.919676688" Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.035748 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psqvq" event={"ID":"76eba8ae-c2d4-49ac-9cb1-4e7256c144ff","Type":"ContainerStarted","Data":"011c8db447a7359a72d6545c28ed12ca7202f440550286d62f8fc68851dad560"} Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.035793 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psqvq" event={"ID":"76eba8ae-c2d4-49ac-9cb1-4e7256c144ff","Type":"ContainerStarted","Data":"ade358b776d12c032770281ce37a28914bd3288f7155fc6a7bdcf4b19333bc5a"} Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.050715 5016 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-psqvq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.050784 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psqvq" podUID="76eba8ae-c2d4-49ac-9cb1-4e7256c144ff" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.059379 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:44 crc kubenswrapper[5016]: E1011 07:42:44.059724 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:44.559711055 +0000 UTC m=+152.460166991 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.060669 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psqvq" Oct 11 07:42:44 crc kubenswrapper[5016]: E1011 07:42:44.107887 5016 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a8034ba_7018_481a_862e_8f21457cc04f.slice/crio-conmon-404d90ad41969665c0ade203f68558d1549e8602f58b4f7bda456cdc177943fd.scope\": RecentStats: unable to find data in memory cache]" Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.109830 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jxdrh" event={"ID":"0f1531c1-e77c-4e97-a216-2809a7566070","Type":"ContainerStarted","Data":"cc3c69a7a401f8b96093fd5848a09c26c6f7228fb67739de8f15c9f5f58eef6d"} Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.114620 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-dqhld" podStartSLOduration=129.114597524 podStartE2EDuration="2m9.114597524s" podCreationTimestamp="2025-10-11 07:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:44.11267181 +0000 UTC m=+152.013127756" watchObservedRunningTime="2025-10-11 07:42:44.114597524 +0000 UTC m=+152.015053480" Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.114914 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xsjnn" podStartSLOduration=130.114906652 podStartE2EDuration="2m10.114906652s" podCreationTimestamp="2025-10-11 07:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:44.071007824 +0000 UTC m=+151.971463770" watchObservedRunningTime="2025-10-11 07:42:44.114906652 +0000 UTC m=+152.015362628" Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.122566 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336130-mtlhx" 
event={"ID":"a2fca8b5-8ccb-4100-8570-82b07bdae3ee","Type":"ContainerStarted","Data":"75a011fcbc4c849ec1e506fbdc328a7fc66a856e7a8b26e53b7ee3501bef9b13"} Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.122634 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336130-mtlhx" event={"ID":"a2fca8b5-8ccb-4100-8570-82b07bdae3ee","Type":"ContainerStarted","Data":"edebe48bb85259f3d5a9fc452fbc1a3fc4150df3f9b10bafaa39ce32c51559d3"} Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.157042 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-sv2tw" podStartSLOduration=129.157026952 podStartE2EDuration="2m9.157026952s" podCreationTimestamp="2025-10-11 07:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:44.150182679 +0000 UTC m=+152.050638635" watchObservedRunningTime="2025-10-11 07:42:44.157026952 +0000 UTC m=+152.057482898" Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.161303 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:44 crc kubenswrapper[5016]: E1011 07:42:44.162488 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:44.662472815 +0000 UTC m=+152.562928761 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.202982 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcvc" event={"ID":"394f8f2b-fe85-414f-ab93-670b5291ac1b","Type":"ContainerStarted","Data":"f63017d7bcfae99a0d79e1d502d02f06a9241b0b746a777ca83e7bfb5867b8eb"} Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.260116 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-fsx9v" podStartSLOduration=130.26008452 podStartE2EDuration="2m10.26008452s" podCreationTimestamp="2025-10-11 07:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:44.250255862 +0000 UTC m=+152.150711818" watchObservedRunningTime="2025-10-11 07:42:44.26008452 +0000 UTC m=+152.160540496" Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.263399 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:44 crc kubenswrapper[5016]: E1011 07:42:44.266759 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:44.766736678 +0000 UTC m=+152.667192624 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.279619 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-cz2f7" event={"ID":"b5191ad1-f211-45d1-a108-8d45b9d427f6","Type":"ContainerStarted","Data":"1aef60a2134a6e31c1b38756f6def3074455acea6490a61c53c0cf0418e4c742"} Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.314451 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5bblf" event={"ID":"12dfb419-e03a-48b3-b448-225f83bd8de3","Type":"ContainerStarted","Data":"42a8dea694e8589b0d5a930d61dec7ec4a6b5e4807c2c4a31dc67cf026e58054"} Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.319810 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4jcvc" podStartSLOduration=130.319736234 podStartE2EDuration="2m10.319736234s" podCreationTimestamp="2025-10-11 07:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:44.315804082 +0000 UTC m=+152.216260038" watchObservedRunningTime="2025-10-11 07:42:44.319736234 +0000 UTC m=+152.220192180" Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.349120 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" event={"ID":"0b04b8b6-3686-4217-b79b-374396ed61ec","Type":"ContainerStarted","Data":"247ee07f4d08af680aedc3c4ce74b3b44802a7b8aca50c60142a59695f7b03c4"} Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.367322 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:44 crc kubenswrapper[5016]: E1011 07:42:44.369084 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:44.869066586 +0000 UTC m=+152.769522532 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.386984 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5bvg" event={"ID":"56ca2fcf-3f5b-4074-9d84-12f089e816a9","Type":"ContainerStarted","Data":"d534861f746b39cfe2166032dc2ec868f709e193be670122e449573290676680"} Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.406881 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zftpc" event={"ID":"2ab53702-6616-46f5-aa33-fe13d748abb2","Type":"ContainerStarted","Data":"b03aea0c258286aad781f4c3a835abd45c1c3ab611b4ea84bfc855c05b0d41e2"} Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.428238 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29336130-mtlhx" podStartSLOduration=129.428220995 podStartE2EDuration="2m9.428220995s" podCreationTimestamp="2025-10-11 07:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:44.386865168 +0000 UTC m=+152.287321114" watchObservedRunningTime="2025-10-11 07:42:44.428220995 +0000 UTC m=+152.328676941" Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.437288 5016 patch_prober.go:28] interesting pod/router-default-5444994796-mn4hd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 07:42:44 crc kubenswrapper[5016]: [-]has-synced failed: reason withheld Oct 11 07:42:44 crc kubenswrapper[5016]: [+]process-running ok Oct 11 07:42:44 crc kubenswrapper[5016]: healthz check failed Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.437347 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mn4hd" podUID="76f2e3c8-c16d-4a3e-85d9-25cc30605ea0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.454033 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-gwp6t" Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.471484 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:44 crc kubenswrapper[5016]: E1011 07:42:44.497836 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-11 07:42:44.997817529 +0000 UTC m=+152.898273475 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.508993 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jxdrh" podStartSLOduration=130.508973504 podStartE2EDuration="2m10.508973504s" podCreationTimestamp="2025-10-11 07:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:44.430688044 +0000 UTC m=+152.331143990" watchObservedRunningTime="2025-10-11 07:42:44.508973504 +0000 UTC m=+152.409429450" Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.533556 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psqvq" podStartSLOduration=129.533537667 podStartE2EDuration="2m9.533537667s" podCreationTimestamp="2025-10-11 07:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:44.48972772 +0000 UTC m=+152.390183666" watchObservedRunningTime="2025-10-11 07:42:44.533537667 +0000 UTC m=+152.433993613" Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.573108 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:44 crc kubenswrapper[5016]: E1011 07:42:44.573583 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:45.073549626 +0000 UTC m=+152.974005572 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.675104 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:44 crc kubenswrapper[5016]: E1011 07:42:44.675566 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:45.175550605 +0000 UTC m=+153.076006551 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.798989 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:44 crc kubenswrapper[5016]: E1011 07:42:44.799827 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:45.299808362 +0000 UTC m=+153.200264308 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.905910 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:44 crc kubenswrapper[5016]: E1011 07:42:44.906828 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:45.406811611 +0000 UTC m=+153.307267557 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:44 crc kubenswrapper[5016]: I1011 07:42:44.908623 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5bvg" podStartSLOduration=130.908596561 podStartE2EDuration="2m10.908596561s" podCreationTimestamp="2025-10-11 07:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:44.891868439 +0000 UTC m=+152.792324385" watchObservedRunningTime="2025-10-11 07:42:44.908596561 +0000 UTC m=+152.809052507" Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.010322 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:45 crc kubenswrapper[5016]: E1011 07:42:45.010773 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:45.510754974 +0000 UTC m=+153.411210920 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.133366 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:45 crc kubenswrapper[5016]: E1011 07:42:45.133744 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:45.633725825 +0000 UTC m=+153.534181771 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.270730 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:45 crc kubenswrapper[5016]: E1011 07:42:45.271444 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:45.7714084 +0000 UTC m=+153.671864346 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.271558 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:45 crc kubenswrapper[5016]: E1011 07:42:45.272036 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:45.772018948 +0000 UTC m=+153.672474894 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.315762 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-cz2f7" podStartSLOduration=130.315743402 podStartE2EDuration="2m10.315743402s" podCreationTimestamp="2025-10-11 07:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:45.223032786 +0000 UTC m=+153.123488732" watchObservedRunningTime="2025-10-11 07:42:45.315743402 +0000 UTC m=+153.216199348" Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.373776 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:45 crc kubenswrapper[5016]: E1011 07:42:45.374095 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:45.874077568 +0000 UTC m=+153.774533514 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.438881 5016 patch_prober.go:28] interesting pod/router-default-5444994796-mn4hd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 07:42:45 crc kubenswrapper[5016]: [-]has-synced failed: reason withheld Oct 11 07:42:45 crc kubenswrapper[5016]: [+]process-running ok Oct 11 07:42:45 crc kubenswrapper[5016]: healthz check failed Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.439206 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mn4hd" podUID="76f2e3c8-c16d-4a3e-85d9-25cc30605ea0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.478768 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:45 crc kubenswrapper[5016]: E1011 07:42:45.479135 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:45.979123363 +0000 UTC m=+153.879579309 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.515254 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52nk2" event={"ID":"87501447-4d19-45a7-bb18-636d9cec793e","Type":"ContainerStarted","Data":"71f48b92b2ff3482dbca831b6e5818ebf44ceef5ae060e187c112a629db62556"} Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.555922 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-52nk2" podStartSLOduration=130.55590496 podStartE2EDuration="2m10.55590496s" podCreationTimestamp="2025-10-11 07:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:45.553179303 +0000 UTC m=+153.453635249" watchObservedRunningTime="2025-10-11 07:42:45.55590496 +0000 UTC m=+153.456360906" Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.568272 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fj8bb" event={"ID":"13c6ab26-b0c2-4a8f-aada-8845d88c408a","Type":"ContainerStarted","Data":"49f96a03e06e3a7b638ca0e9217142889254f4eb423b69f8beeb9e91bcb89629"} Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.582144 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:45 crc kubenswrapper[5016]: E1011 07:42:45.582544 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:46.082528301 +0000 UTC m=+153.982984247 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.615213 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jxdrh" event={"ID":"0f1531c1-e77c-4e97-a216-2809a7566070","Type":"ContainerStarted","Data":"f5e8969e04dd674c6458330466c7c3376be29fd5070a191bae20ef369bd26171"} Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.635132 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6n96l" event={"ID":"fee2cde4-61cd-42ab-a6c6-8dbe8e5a123e","Type":"ContainerStarted","Data":"4df912dc1791a7bd06b625a1c57eb0bd28a26125f7e56216c23085e5e70e9c52"} Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.687732 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:45 crc kubenswrapper[5016]: E1011 07:42:45.692922 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:46.192904816 +0000 UTC m=+154.093360812 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.738083 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-45lst" event={"ID":"4c0a1c1b-7182-49e6-b027-d766ec67481d","Type":"ContainerStarted","Data":"0c8866bc8fa815c778f4f6ff0b069d3a395aa0281163d7e0eaac4ed48c205bf0"} Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.738151 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-45lst" Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.760801 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-g74hb" event={"ID":"d861cd75-fbdc-403a-8538-903cd646ce64","Type":"ContainerStarted","Data":"c7464ed4355c4f28484d38313f705808468795d289436cd3f7b90866b3db0db4"} Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.760848 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-g74hb" event={"ID":"d861cd75-fbdc-403a-8538-903cd646ce64","Type":"ContainerStarted","Data":"2597f2ba929b68c20cd2369ec0fdd90deb68d1b133ce35750474613a2fea2cef"} Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.778899 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a2981699d77dd0179a361fc1268cdbbfb651951172d94993d88a92daccaa48b5"} Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.780311 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-fj8bb" podStartSLOduration=7.780291252 podStartE2EDuration="7.780291252s" podCreationTimestamp="2025-10-11 07:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:45.686981998 +0000 UTC m=+153.587437944" watchObservedRunningTime="2025-10-11 07:42:45.780291252 +0000 UTC m=+153.680747198" Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.780876 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-45lst" podStartSLOduration=7.780870768 podStartE2EDuration="7.780870768s" podCreationTimestamp="2025-10-11 07:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:45.779797139 +0000 UTC m=+153.680253105" watchObservedRunningTime="2025-10-11 07:42:45.780870768 +0000 UTC m=+153.681326714" Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.788777 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:45 crc kubenswrapper[5016]: E1011 07:42:45.789174 5016 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:46.289159282 +0000 UTC m=+154.189615228 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.799479 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t5tt6" event={"ID":"a64d3a16-dcff-45cd-b0ff-c783d34728c8","Type":"ContainerStarted","Data":"856d84fd074303e8109383ba579028e9489c27ff028ad755956aba8da3f03d56"} Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.812813 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-g74hb" podStartSLOduration=7.812790249 podStartE2EDuration="7.812790249s" podCreationTimestamp="2025-10-11 07:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:45.801695966 +0000 UTC m=+153.702151912" watchObservedRunningTime="2025-10-11 07:42:45.812790249 +0000 UTC m=+153.713246195" Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.818580 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k5jkz" event={"ID":"5ddda55d-dea4-490a-bdc6-a004fb25358c","Type":"ContainerStarted","Data":"abd8a96d13c264605839fbfceab886ebf524615b6b50df1768ef01c72960d3eb"} Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.842065 5016 generic.go:334] "Generic (PLEG): container finished" podID="2a8034ba-7018-481a-862e-8f21457cc04f" containerID="404d90ad41969665c0ade203f68558d1549e8602f58b4f7bda456cdc177943fd" exitCode=0 Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.842331 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vz5gw" event={"ID":"2a8034ba-7018-481a-862e-8f21457cc04f","Type":"ContainerDied","Data":"404d90ad41969665c0ade203f68558d1549e8602f58b4f7bda456cdc177943fd"} Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.855559 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tq4ks" event={"ID":"548549e8-8855-421a-95d7-f57b74ae500a","Type":"ContainerStarted","Data":"8f17a40d4bcf13768c0b7f2e7921510cd29ff58b443c5e17245b171d954da9c4"} Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.863131 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5c8fg" event={"ID":"d90a5c06-3c9f-451a-bf6f-8caa3ffdf6ff","Type":"ContainerStarted","Data":"091f5e57ca1acf926104704a5af5c38b8c34e4b0c50abedb67907adaec36719b"} Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.863698 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k5jkz" podStartSLOduration=130.863662554 podStartE2EDuration="2m10.863662554s" podCreationTimestamp="2025-10-11 07:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:45.862901103 +0000 UTC m=+153.763357049" watchObservedRunningTime="2025-10-11 07:42:45.863662554 +0000 UTC m=+153.764118500" Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.882406 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dh44f" event={"ID":"d6352f37-0c6d-4ec1-961b-2d46944fd666","Type":"ContainerStarted","Data":"1da5f8e981b59bf5cf4b92369e4afa6efb9f10a172560648b0444c6a7613b81a"} Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.889915 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:45 crc kubenswrapper[5016]: E1011 07:42:45.892468 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:46.392450787 +0000 UTC m=+154.292906733 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.900734 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9bdcbb6dfb4c489f6db2468da928bb32b2cb3afc5bdf4626ecb80c96e63f5d08"} Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.929418 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hvh49" event={"ID":"d87479dc-a437-4c51-8d14-5f6ef03f3220","Type":"ContainerStarted","Data":"84463b510b2bedb12c4da21f937d6483d0626a6f683dc98eb332b401f04ad390"} Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.935431 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-dh44f" podStartSLOduration=130.93542024 podStartE2EDuration="2m10.93542024s" podCreationTimestamp="2025-10-11 07:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:45.932964281 +0000 UTC m=+153.833420227" watchObservedRunningTime="2025-10-11 07:42:45.93542024 +0000 UTC m=+153.835876186" Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.978712 5016 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-5c8fg" podStartSLOduration=130.978696412 podStartE2EDuration="2m10.978696412s" podCreationTimestamp="2025-10-11 07:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:45.977165309 +0000 UTC m=+153.877621255" watchObservedRunningTime="2025-10-11 07:42:45.978696412 +0000 UTC m=+153.879152358" Oct 11 07:42:45 crc kubenswrapper[5016]: I1011 07:42:45.995114 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z44tx" event={"ID":"4858920c-4a23-4c7c-9b69-3bdd7d4d5ac5","Type":"ContainerStarted","Data":"ef103008751bf8e97e57f7c6ed68ae534420b058916febbce0f105443f663c29"} Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.000497 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z44tx" Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.001819 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:46 crc kubenswrapper[5016]: E1011 07:42:46.003534 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:46.503507861 +0000 UTC m=+154.403963867 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.055192 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-cz2f7" event={"ID":"b5191ad1-f211-45d1-a108-8d45b9d427f6","Type":"ContainerStarted","Data":"52d85d688e39af3e7ecf282775f56919c9e5ee0c72bf9a808835017b79a0ad03"} Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.079915 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hvh49" podStartSLOduration=131.079895307 podStartE2EDuration="2m11.079895307s" podCreationTimestamp="2025-10-11 07:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:46.037845121 +0000 UTC m=+153.938301067" watchObservedRunningTime="2025-10-11 07:42:46.079895307 +0000 UTC m=+153.980351253" Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.081812 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qj85m" event={"ID":"7c212d37-a525-4cf4-a484-1a719dc3237d","Type":"ContainerStarted","Data":"a37545703bf7ef45f5341264fd869f7c789c1f6a1fdcaa08399f085b9094462d"} Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.103914 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:46 crc kubenswrapper[5016]: E1011 07:42:46.105509 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:46.605490269 +0000 UTC m=+154.505946275 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.113972 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-xsjnn" event={"ID":"e50901a7-482c-401b-96cf-cf925c66e918","Type":"ContainerStarted","Data":"53ee6c3586cbb3c1f01018ef38b05e5994a06495c499b201a54b1c1bda57e1ab"} Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.134234 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-d5bvg" event={"ID":"56ca2fcf-3f5b-4074-9d84-12f089e816a9","Type":"ContainerStarted","Data":"318727a19810f6544c6d753fcb17dc202f5f36f7102fd727a066eb28280c8ea3"} Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.156805 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z44tx" podStartSLOduration=131.156790427 podStartE2EDuration="2m11.156790427s" podCreationTimestamp="2025-10-11 07:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:46.154643746 +0000 UTC m=+154.055099692" watchObservedRunningTime="2025-10-11 07:42:46.156790427 +0000 UTC m=+154.057246373" Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.186140 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vmvvh" event={"ID":"eb6630cb-0062-4461-bf51-c45f7e4e7478","Type":"ContainerStarted","Data":"b20f405e09b0b6d4755fc5f7176d05f3d0ccee3a0df9ef8ab32278ccfeb233cb"} Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.186185 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vmvvh" event={"ID":"eb6630cb-0062-4461-bf51-c45f7e4e7478","Type":"ContainerStarted","Data":"0ef7489c7b47009888cca218c3fa3f8877247b33b26070ace8d698dcfc7bbe68"} Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.206104 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:46 crc kubenswrapper[5016]: E1011 07:42:46.207234 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:46.707218771 +0000 UTC m=+154.607674717 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.221858 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-msb9j" event={"ID":"6e14139c-7a42-440e-b494-f2a6283a1acd","Type":"ContainerStarted","Data":"3b3d733f796d4c6910e6fd30aa37f0f2eae19c050d96d9a0fdee0a02225835de"} Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.239963 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.250362 5016 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-6mhg9 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.31:6443/healthz\": dial tcp 10.217.0.31:6443: connect: connection refused" start-of-body= Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.250428 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" podUID="0b04b8b6-3686-4217-b79b-374396ed61ec" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.31:6443/healthz\": dial tcp 10.217.0.31:6443: connect: connection refused" Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.252808 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5bblf" Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.254035 5016 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5bblf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.254077 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5bblf" podUID="12dfb419-e03a-48b3-b448-225f83bd8de3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.273983 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-vmvvh" podStartSLOduration=132.273959694 podStartE2EDuration="2m12.273959694s" podCreationTimestamp="2025-10-11 07:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:46.230163728 +0000 UTC m=+154.130619674" watchObservedRunningTime="2025-10-11 07:42:46.273959694 +0000 UTC m=+154.174415630" Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.274346 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-msb9j" podStartSLOduration=131.274339455 podStartE2EDuration="2m11.274339455s" 
podCreationTimestamp="2025-10-11 07:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:46.274124349 +0000 UTC m=+154.174580295" watchObservedRunningTime="2025-10-11 07:42:46.274339455 +0000 UTC m=+154.174795401" Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.308688 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:46 crc kubenswrapper[5016]: E1011 07:42:46.312052 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:46.812037649 +0000 UTC m=+154.712493595 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.312627 5016 patch_prober.go:28] interesting pod/downloads-7954f5f757-znwnv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.357846 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-znwnv" podUID="fee3401d-bf88-49cd-b228-a4e89c6dd40e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.376061 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g8p9j" Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.379170 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-5bblf" podStartSLOduration=131.379144253 podStartE2EDuration="2m11.379144253s" podCreationTimestamp="2025-10-11 07:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:46.379061881 +0000 UTC m=+154.279517837" watchObservedRunningTime="2025-10-11 07:42:46.379144253 +0000 UTC m=+154.279600199" Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.413019 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:46 crc kubenswrapper[5016]: E1011 07:42:46.414709 
5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:46.914688135 +0000 UTC m=+154.815144081 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.437747 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-mcbqj" Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.451876 5016 patch_prober.go:28] interesting pod/router-default-5444994796-mn4hd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 07:42:46 crc kubenswrapper[5016]: [-]has-synced failed: reason withheld Oct 11 07:42:46 crc kubenswrapper[5016]: [+]process-running ok Oct 11 07:42:46 crc kubenswrapper[5016]: healthz check failed Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.451947 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mn4hd" podUID="76f2e3c8-c16d-4a3e-85d9-25cc30605ea0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.518610 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:46 crc kubenswrapper[5016]: E1011 07:42:46.519155 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:47.019143414 +0000 UTC m=+154.919599360 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.620988 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:46 crc kubenswrapper[5016]: E1011 07:42:46.621344 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:47.121326677 +0000 UTC m=+155.021782613 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.633865 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" podStartSLOduration=132.633847171 podStartE2EDuration="2m12.633847171s" podCreationTimestamp="2025-10-11 07:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:46.539468607 +0000 UTC m=+154.439924553" watchObservedRunningTime="2025-10-11 07:42:46.633847171 +0000 UTC m=+154.534303117" Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.723547 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:46 crc kubenswrapper[5016]: E1011 07:42:46.724022 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:47.223999495 +0000 UTC m=+155.124455511 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.818577 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jn9bl"] Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.819486 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jn9bl" Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.825796 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:46 crc kubenswrapper[5016]: E1011 07:42:46.826134 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:47.326114567 +0000 UTC m=+155.226570513 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.827820 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.866482 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jn9bl"] Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.927779 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29ad589f-847e-44b2-9c6c-720c6ca1312d-utilities\") pod \"community-operators-jn9bl\" (UID: \"29ad589f-847e-44b2-9c6c-720c6ca1312d\") " pod="openshift-marketplace/community-operators-jn9bl" Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.927817 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29ad589f-847e-44b2-9c6c-720c6ca1312d-catalog-content\") pod \"community-operators-jn9bl\" (UID: \"29ad589f-847e-44b2-9c6c-720c6ca1312d\") " pod="openshift-marketplace/community-operators-jn9bl" Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.927850 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.927876 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqkjf\" (UniqueName: \"kubernetes.io/projected/29ad589f-847e-44b2-9c6c-720c6ca1312d-kube-api-access-jqkjf\") pod \"community-operators-jn9bl\" (UID: \"29ad589f-847e-44b2-9c6c-720c6ca1312d\") " pod="openshift-marketplace/community-operators-jn9bl" Oct 11 07:42:46 crc kubenswrapper[5016]: E1011 07:42:46.928192 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:47.428178847 +0000 UTC m=+155.328634793 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.948661 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6fmcj"] Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.949566 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6fmcj" Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.956529 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 11 07:42:46 crc kubenswrapper[5016]: I1011 07:42:46.979770 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6fmcj"] Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.028865 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:47 crc kubenswrapper[5016]: E1011 07:42:47.028969 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:47.528953011 +0000 UTC m=+155.429408947 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.029189 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqkjf\" (UniqueName: \"kubernetes.io/projected/29ad589f-847e-44b2-9c6c-720c6ca1312d-kube-api-access-jqkjf\") pod \"community-operators-jn9bl\" (UID: \"29ad589f-847e-44b2-9c6c-720c6ca1312d\") " pod="openshift-marketplace/community-operators-jn9bl"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.029246 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/add7f50e-e0bb-45cb-b76e-c3eec203832b-catalog-content\") pod \"certified-operators-6fmcj\" (UID: \"add7f50e-e0bb-45cb-b76e-c3eec203832b\") " pod="openshift-marketplace/certified-operators-6fmcj"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.029339 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/add7f50e-e0bb-45cb-b76e-c3eec203832b-utilities\") pod \"certified-operators-6fmcj\" (UID: \"add7f50e-e0bb-45cb-b76e-c3eec203832b\") " pod="openshift-marketplace/certified-operators-6fmcj"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.029406 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29ad589f-847e-44b2-9c6c-720c6ca1312d-utilities\") pod \"community-operators-jn9bl\" (UID: \"29ad589f-847e-44b2-9c6c-720c6ca1312d\") " pod="openshift-marketplace/community-operators-jn9bl"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.029450 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29ad589f-847e-44b2-9c6c-720c6ca1312d-catalog-content\") pod \"community-operators-jn9bl\" (UID: \"29ad589f-847e-44b2-9c6c-720c6ca1312d\") " pod="openshift-marketplace/community-operators-jn9bl"
Oct 11 07:42:47 crc kubenswrapper[5016]: E1011 07:42:47.029827 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:47.529814025 +0000 UTC m=+155.430269971 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.029920 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29ad589f-847e-44b2-9c6c-720c6ca1312d-catalog-content\") pod \"community-operators-jn9bl\" (UID: \"29ad589f-847e-44b2-9c6c-720c6ca1312d\") " pod="openshift-marketplace/community-operators-jn9bl"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.029914 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29ad589f-847e-44b2-9c6c-720c6ca1312d-utilities\") pod \"community-operators-jn9bl\" (UID: \"29ad589f-847e-44b2-9c6c-720c6ca1312d\") " pod="openshift-marketplace/community-operators-jn9bl"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.029484 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.029988 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmdwz\" (UniqueName: \"kubernetes.io/projected/add7f50e-e0bb-45cb-b76e-c3eec203832b-kube-api-access-hmdwz\") pod \"certified-operators-6fmcj\" (UID: \"add7f50e-e0bb-45cb-b76e-c3eec203832b\") " pod="openshift-marketplace/certified-operators-6fmcj"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.073588 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqkjf\" (UniqueName: \"kubernetes.io/projected/29ad589f-847e-44b2-9c6c-720c6ca1312d-kube-api-access-jqkjf\") pod \"community-operators-jn9bl\" (UID: \"29ad589f-847e-44b2-9c6c-720c6ca1312d\") " pod="openshift-marketplace/community-operators-jn9bl"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.089962 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-psqvq"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.130990 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 11 07:42:47 crc kubenswrapper[5016]: E1011 07:42:47.131221 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:47.631195286 +0000 UTC m=+155.531651232 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.131432 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/add7f50e-e0bb-45cb-b76e-c3eec203832b-catalog-content\") pod \"certified-operators-6fmcj\" (UID: \"add7f50e-e0bb-45cb-b76e-c3eec203832b\") " pod="openshift-marketplace/certified-operators-6fmcj"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.131481 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/add7f50e-e0bb-45cb-b76e-c3eec203832b-utilities\") pod \"certified-operators-6fmcj\" (UID: \"add7f50e-e0bb-45cb-b76e-c3eec203832b\") " pod="openshift-marketplace/certified-operators-6fmcj"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.131521 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmdwz\" (UniqueName: \"kubernetes.io/projected/add7f50e-e0bb-45cb-b76e-c3eec203832b-kube-api-access-hmdwz\") pod \"certified-operators-6fmcj\" (UID: \"add7f50e-e0bb-45cb-b76e-c3eec203832b\") " pod="openshift-marketplace/certified-operators-6fmcj"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.131546 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp"
Oct 11 07:42:47 crc kubenswrapper[5016]: E1011 07:42:47.131866 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:47.631855335 +0000 UTC m=+155.532311281 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.132329 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/add7f50e-e0bb-45cb-b76e-c3eec203832b-catalog-content\") pod \"certified-operators-6fmcj\" (UID: \"add7f50e-e0bb-45cb-b76e-c3eec203832b\") " pod="openshift-marketplace/certified-operators-6fmcj"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.132580 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/add7f50e-e0bb-45cb-b76e-c3eec203832b-utilities\") pod \"certified-operators-6fmcj\" (UID: \"add7f50e-e0bb-45cb-b76e-c3eec203832b\") " pod="openshift-marketplace/certified-operators-6fmcj"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.135328 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jn9bl"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.173549 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmdwz\" (UniqueName: \"kubernetes.io/projected/add7f50e-e0bb-45cb-b76e-c3eec203832b-kube-api-access-hmdwz\") pod \"certified-operators-6fmcj\" (UID: \"add7f50e-e0bb-45cb-b76e-c3eec203832b\") " pod="openshift-marketplace/certified-operators-6fmcj"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.179580 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gv6cx"]
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.180462 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gv6cx"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.194238 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gv6cx"]
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.235787 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.235949 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjr5b\" (UniqueName: \"kubernetes.io/projected/ef441d82-59b8-4316-8950-b2aea1636de4-kube-api-access-fjr5b\") pod \"community-operators-gv6cx\" (UID: \"ef441d82-59b8-4316-8950-b2aea1636de4\") " pod="openshift-marketplace/community-operators-gv6cx"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.236016 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef441d82-59b8-4316-8950-b2aea1636de4-catalog-content\") pod \"community-operators-gv6cx\" (UID: \"ef441d82-59b8-4316-8950-b2aea1636de4\") " pod="openshift-marketplace/community-operators-gv6cx"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.236062 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef441d82-59b8-4316-8950-b2aea1636de4-utilities\") pod \"community-operators-gv6cx\" (UID: \"ef441d82-59b8-4316-8950-b2aea1636de4\") " pod="openshift-marketplace/community-operators-gv6cx"
Oct 11 07:42:47 crc kubenswrapper[5016]: E1011 07:42:47.236184 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:47.736169019 +0000 UTC m=+155.636624965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.263684 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6fmcj"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.337378 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef441d82-59b8-4316-8950-b2aea1636de4-catalog-content\") pod \"community-operators-gv6cx\" (UID: \"ef441d82-59b8-4316-8950-b2aea1636de4\") " pod="openshift-marketplace/community-operators-gv6cx"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.337430 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.337458 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef441d82-59b8-4316-8950-b2aea1636de4-utilities\") pod \"community-operators-gv6cx\" (UID: \"ef441d82-59b8-4316-8950-b2aea1636de4\") " pod="openshift-marketplace/community-operators-gv6cx"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.337498 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjr5b\" (UniqueName: \"kubernetes.io/projected/ef441d82-59b8-4316-8950-b2aea1636de4-kube-api-access-fjr5b\") pod \"community-operators-gv6cx\" (UID: \"ef441d82-59b8-4316-8950-b2aea1636de4\") " pod="openshift-marketplace/community-operators-gv6cx"
Oct 11 07:42:47 crc kubenswrapper[5016]: E1011 07:42:47.337859 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:47.837843138 +0000 UTC m=+155.738299084 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.338266 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef441d82-59b8-4316-8950-b2aea1636de4-utilities\") pod \"community-operators-gv6cx\" (UID: \"ef441d82-59b8-4316-8950-b2aea1636de4\") " pod="openshift-marketplace/community-operators-gv6cx"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.340595 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef441d82-59b8-4316-8950-b2aea1636de4-catalog-content\") pod \"community-operators-gv6cx\" (UID: \"ef441d82-59b8-4316-8950-b2aea1636de4\") " pod="openshift-marketplace/community-operators-gv6cx"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.358793 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k5jkz" event={"ID":"5ddda55d-dea4-490a-bdc6-a004fb25358c","Type":"ContainerStarted","Data":"2bd1659b1571a4ca95169a79616c474691eda6c39dcad3f0c4b1b879017b19ed"}
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.360277 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjr5b\" (UniqueName: \"kubernetes.io/projected/ef441d82-59b8-4316-8950-b2aea1636de4-kube-api-access-fjr5b\") pod \"community-operators-gv6cx\" (UID: \"ef441d82-59b8-4316-8950-b2aea1636de4\") " pod="openshift-marketplace/community-operators-gv6cx"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.361362 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fx6p5"]
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.380832 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fx6p5"]
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.381128 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f6fa0e78a68f91559783d45f36d076a96a1efcbda6bfcd08c03c0075fe9539a0"}
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.381235 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-b65rs" event={"ID":"319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2","Type":"ContainerStarted","Data":"d4c9ee2f1b13800f81441f9a10d147f934ceca4106cb9c2e02906d6f951bf6a6"}
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.381619 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fx6p5"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.404399 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" event={"ID":"4b4fc34c-84fa-4a44-a585-61d852838755","Type":"ContainerStarted","Data":"ab92ee1e4d7df81c6d069828818ad15d258f3c136656a4ce2bb4936b15ddcacb"}
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.415442 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hvh49" event={"ID":"d87479dc-a437-4c51-8d14-5f6ef03f3220","Type":"ContainerStarted","Data":"ed3ca629ad8203da5e58609fa3fa6d90c7a5a9e9620e0f8b0f6a7b4f1d1eb965"}
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.438815 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.439268 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e94d943-d0bf-4ffc-9109-3d821982dbc6-catalog-content\") pod \"certified-operators-fx6p5\" (UID: \"5e94d943-d0bf-4ffc-9109-3d821982dbc6\") " pod="openshift-marketplace/certified-operators-fx6p5"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.439294 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e94d943-d0bf-4ffc-9109-3d821982dbc6-utilities\") pod \"certified-operators-fx6p5\" (UID: \"5e94d943-d0bf-4ffc-9109-3d821982dbc6\") " pod="openshift-marketplace/certified-operators-fx6p5"
Oct 11 07:42:47 crc kubenswrapper[5016]: E1011 07:42:47.439367 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:47.939340782 +0000 UTC m=+155.839796728 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.439439 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnr8x\" (UniqueName: \"kubernetes.io/projected/5e94d943-d0bf-4ffc-9109-3d821982dbc6-kube-api-access-rnr8x\") pod \"certified-operators-fx6p5\" (UID: \"5e94d943-d0bf-4ffc-9109-3d821982dbc6\") " pod="openshift-marketplace/certified-operators-fx6p5"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.442109 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5bblf" event={"ID":"12dfb419-e03a-48b3-b448-225f83bd8de3","Type":"ContainerStarted","Data":"03752e5912ab56cac185da3955a38991080cc2e9f80aa4eced07a6bfa3ce2a03"}
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.450782 5016 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5bblf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body=
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.450840 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5bblf" podUID="12dfb419-e03a-48b3-b448-225f83bd8de3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.478461 5016 patch_prober.go:28] interesting pod/router-default-5444994796-mn4hd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 07:42:47 crc kubenswrapper[5016]: [-]has-synced failed: reason withheld
Oct 11 07:42:47 crc kubenswrapper[5016]: [+]process-running ok
Oct 11 07:42:47 crc kubenswrapper[5016]: healthz check failed
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.478513 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mn4hd" podUID="76f2e3c8-c16d-4a3e-85d9-25cc30605ea0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.479377 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qj85m" event={"ID":"7c212d37-a525-4cf4-a484-1a719dc3237d","Type":"ContainerStarted","Data":"35b051ef64642b8ab5a82bafe711b697346cd02dafe4aa455d94198073852e8d"}
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.479408 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qj85m" event={"ID":"7c212d37-a525-4cf4-a484-1a719dc3237d","Type":"ContainerStarted","Data":"28f6e40c4f0e52629656c79c7f0e8a892a3718a47a88c540c319c01944194323"}
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.489294 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vz5gw" event={"ID":"2a8034ba-7018-481a-862e-8f21457cc04f","Type":"ContainerStarted","Data":"60f24ed47dcee983f39a1d58ffc08b1bf39d3f26d519c0192f5085137c033ce1"}
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.489993 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vz5gw"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.497572 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"279f4e8183357301be95a259770cb9a3e22b12f14b9e7ed0d7907e0c9b7a1ee9"}
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.501261 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6n96l" event={"ID":"fee2cde4-61cd-42ab-a6c6-8dbe8e5a123e","Type":"ContainerStarted","Data":"3cb943bb3572fe04b832c6e58278b4ab9db50d954891e3f32d2b83c333324a39"}
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.501305 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6n96l" event={"ID":"fee2cde4-61cd-42ab-a6c6-8dbe8e5a123e","Type":"ContainerStarted","Data":"e46951f8801ec8195091c0b94e010e87b2754020129a97ca19981caa566b16de"}
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.511165 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qj85m" podStartSLOduration=132.511144119 podStartE2EDuration="2m12.511144119s" podCreationTimestamp="2025-10-11 07:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:47.508978898 +0000 UTC m=+155.409434844" watchObservedRunningTime="2025-10-11 07:42:47.511144119 +0000 UTC m=+155.411600065"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.515523 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" podStartSLOduration=132.515507202 podStartE2EDuration="2m12.515507202s" podCreationTimestamp="2025-10-11 07:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:47.473995801 +0000 UTC m=+155.374451757" watchObservedRunningTime="2025-10-11 07:42:47.515507202 +0000 UTC m=+155.415963138"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.512203 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-msb9j" event={"ID":"6e14139c-7a42-440e-b494-f2a6283a1acd","Type":"ContainerStarted","Data":"325a0614804ea769f1483864321cdd76e26f4eae9829f9d19f8426acfd4a580e"}
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.515944 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" event={"ID":"0b04b8b6-3686-4217-b79b-374396ed61ec","Type":"ContainerStarted","Data":"42e221ea5ed3a479bf210f854386a926095233950ea4e7ab7897a3f477aaeea3"}
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.521881 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zftpc" event={"ID":"2ab53702-6616-46f5-aa33-fe13d748abb2","Type":"ContainerStarted","Data":"7eab9b133cffd2cf75f4a75ac957a522fd8d33d935d19127f67ffbaab3b78ef0"}
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.521922 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zftpc" event={"ID":"2ab53702-6616-46f5-aa33-fe13d748abb2","Type":"ContainerStarted","Data":"48880b3e8195f919b4242c39d4d6d1d0a93e5cfc0dfc29bf5b522cbaf13c1328"}
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.525193 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6n96l" podStartSLOduration=132.525184125 podStartE2EDuration="2m12.525184125s" podCreationTimestamp="2025-10-11 07:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:47.522730685 +0000 UTC m=+155.423186631" watchObservedRunningTime="2025-10-11 07:42:47.525184125 +0000 UTC m=+155.425640071"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.526613 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z44tx" event={"ID":"4858920c-4a23-4c7c-9b69-3bdd7d4d5ac5","Type":"ContainerStarted","Data":"73e6af1e9435a92e57b47c9bfcfc9eef568fefd59fa74dc5cb18c5a81ddfa02e"}
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.529931 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" event={"ID":"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1","Type":"ContainerStarted","Data":"8f24705e78fccc3b02b089e0d92e2b349c613404484b85d4034706dcaa351415"}
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.529959 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" event={"ID":"1f0257f9-e7c9-4951-ba43-4ba90a80c1e1","Type":"ContainerStarted","Data":"29fbe064833a37df93ab07e916fb1c76498225f14d0fe86713798ae104f66aae"}
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.543308 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vz5gw" podStartSLOduration=133.543288066 podStartE2EDuration="2m13.543288066s" podCreationTimestamp="2025-10-11 07:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:47.543242904 +0000 UTC m=+155.443698850" watchObservedRunningTime="2025-10-11 07:42:47.543288066 +0000 UTC m=+155.443744012"
Oct 11 07:42:47 crc kubenswrapper[5016]: E1011 07:42:47.543748 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:48.043734918 +0000 UTC m=+155.944190864 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.543455 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.544649 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e94d943-d0bf-4ffc-9109-3d821982dbc6-catalog-content\") pod \"certified-operators-fx6p5\" (UID: \"5e94d943-d0bf-4ffc-9109-3d821982dbc6\") " pod="openshift-marketplace/certified-operators-fx6p5"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.544723 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e94d943-d0bf-4ffc-9109-3d821982dbc6-utilities\") pod \"certified-operators-fx6p5\" (UID: \"5e94d943-d0bf-4ffc-9109-3d821982dbc6\") " pod="openshift-marketplace/certified-operators-fx6p5"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.544793 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnr8x\" (UniqueName: \"kubernetes.io/projected/5e94d943-d0bf-4ffc-9109-3d821982dbc6-kube-api-access-rnr8x\") pod \"certified-operators-fx6p5\" (UID: \"5e94d943-d0bf-4ffc-9109-3d821982dbc6\") " pod="openshift-marketplace/certified-operators-fx6p5"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.545515 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e94d943-d0bf-4ffc-9109-3d821982dbc6-catalog-content\") pod \"certified-operators-fx6p5\" (UID: \"5e94d943-d0bf-4ffc-9109-3d821982dbc6\") " pod="openshift-marketplace/certified-operators-fx6p5"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.545881 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e94d943-d0bf-4ffc-9109-3d821982dbc6-utilities\") pod \"certified-operators-fx6p5\" (UID: \"5e94d943-d0bf-4ffc-9109-3d821982dbc6\") " pod="openshift-marketplace/certified-operators-fx6p5"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.559226 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gv6cx"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.566898 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tq4ks" event={"ID":"548549e8-8855-421a-95d7-f57b74ae500a","Type":"ContainerStarted","Data":"f07130e6fa190b2c8d17626882786f93abcd299ae0cd940503295536c4309996"}
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.566945 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-tq4ks" event={"ID":"548549e8-8855-421a-95d7-f57b74ae500a","Type":"ContainerStarted","Data":"af2a5c5856cb65f5a72bc18e20fc18ded95b5b96c19300abd9e7187d2738411c"}
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.569925 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"056c03d5ec7db66fa8256a69e8d1f9fd67c54ea15c5c0438c37f49e92573cf21"}
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.569975 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f4b0e66c71183368a07221c276fefd7eebbc8738f3ef4853ad012b1a10410241"}
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.571925 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.572692 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zftpc" podStartSLOduration=132.572682545 podStartE2EDuration="2m12.572682545s" podCreationTimestamp="2025-10-11 07:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:47.572298345 +0000 UTC m=+155.472754281" watchObservedRunningTime="2025-10-11 07:42:47.572682545 +0000 UTC m=+155.473138491"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.577080 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnr8x\" (UniqueName: \"kubernetes.io/projected/5e94d943-d0bf-4ffc-9109-3d821982dbc6-kube-api-access-rnr8x\") pod \"certified-operators-fx6p5\" (UID: \"5e94d943-d0bf-4ffc-9109-3d821982dbc6\") " pod="openshift-marketplace/certified-operators-fx6p5"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.584651 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t5tt6" event={"ID":"a64d3a16-dcff-45cd-b0ff-c783d34728c8","Type":"ContainerStarted","Data":"3a2762e7e5700374a21837060faa3a5e4d1f6609c59d077364b9b95125a2ed50"}
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.585146 5016 patch_prober.go:28] interesting pod/downloads-7954f5f757-znwnv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body=
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.585188 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-znwnv" podUID="fee3401d-bf88-49cd-b228-a4e89c6dd40e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.625221 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" podStartSLOduration=133.625202438 podStartE2EDuration="2m13.625202438s" podCreationTimestamp="2025-10-11 07:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:47.616343617 +0000 UTC m=+155.516799563" watchObservedRunningTime="2025-10-11 07:42:47.625202438 +0000 UTC m=+155.525658384"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.646828 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 11 07:42:47 crc kubenswrapper[5016]: E1011 07:42:47.648615 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:48.148585807 +0000 UTC m=+156.049041803 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.651233 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-tq4ks" podStartSLOduration=132.651218802 podStartE2EDuration="2m12.651218802s" podCreationTimestamp="2025-10-11 07:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:47.650059039 +0000 UTC m=+155.550514975" watchObservedRunningTime="2025-10-11 07:42:47.651218802 +0000 UTC m=+155.551674748"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.723974 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-t5tt6" podStartSLOduration=133.723960464 podStartE2EDuration="2m13.723960464s" podCreationTimestamp="2025-10-11 07:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:47.723354508 +0000 UTC m=+155.623810454" watchObservedRunningTime="2025-10-11 07:42:47.723960464 +0000 UTC m=+155.624416410"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.755789 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fx6p5"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.756697 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp"
Oct 11 07:42:47 crc kubenswrapper[5016]: E1011 07:42:47.758696 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:48.258685185 +0000 UTC m=+156.159141131 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.860244 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 11 07:42:47 crc kubenswrapper[5016]: E1011 07:42:47.860540 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:48.360521189 +0000 UTC m=+156.260977135 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.889898 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6fmcj"]
Oct 11 07:42:47 crc kubenswrapper[5016]: W1011 07:42:47.906781 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadd7f50e_e0bb_45cb_b76e_c3eec203832b.slice/crio-e53e2ae8bcbb811dc75cc75850d64a7e8528a177f4aaa0a23b5bb7def4f23086 WatchSource:0}: Error finding container e53e2ae8bcbb811dc75cc75850d64a7e8528a177f4aaa0a23b5bb7def4f23086: Status 404 returned error can't find the container with id e53e2ae8bcbb811dc75cc75850d64a7e8528a177f4aaa0a23b5bb7def4f23086
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.910415 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9"
Oct 11 07:42:47 crc kubenswrapper[5016]: I1011 07:42:47.962518 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp"
Oct 11 07:42:47 crc kubenswrapper[5016]: E1011 07:42:47.962908 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:48.462895678 +0000 UTC m=+156.363351624 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.000275 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jn9bl"]
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.064915 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 11 07:42:48 crc kubenswrapper[5016]: E1011 07:42:48.065301 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:48.565270867 +0000 UTC m=+156.465726813 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.113703 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gv6cx"]
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.166474 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp"
Oct 11 07:42:48 crc kubenswrapper[5016]: E1011 07:42:48.166856 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:48.666840263 +0000 UTC m=+156.567296219 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.217131 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fx6p5"]
Oct 11 07:42:48 crc kubenswrapper[5016]: W1011 07:42:48.266039 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e94d943_d0bf_4ffc_9109_3d821982dbc6.slice/crio-ed65416b9102c207879a5e37dd07d4f76ffad2ffeac1ee6adde1f9c64ae8fa0b WatchSource:0}: Error finding container ed65416b9102c207879a5e37dd07d4f76ffad2ffeac1ee6adde1f9c64ae8fa0b: Status 404 returned error can't find the container with id ed65416b9102c207879a5e37dd07d4f76ffad2ffeac1ee6adde1f9c64ae8fa0b
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.269213 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 11 07:42:48 crc kubenswrapper[5016]: E1011 07:42:48.270737 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:48.770716305 +0000 UTC m=+156.671172251 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.371796 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp"
Oct 11 07:42:48 crc kubenswrapper[5016]: E1011 07:42:48.372285 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:48.872270871 +0000 UTC m=+156.772726817 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.423621 5016 patch_prober.go:28] interesting pod/router-default-5444994796-mn4hd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 11 07:42:48 crc kubenswrapper[5016]: [-]has-synced failed: reason withheld
Oct 11 07:42:48 crc kubenswrapper[5016]: [+]process-running ok
Oct 11 07:42:48 crc kubenswrapper[5016]: healthz check failed
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.423828 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mn4hd" podUID="76f2e3c8-c16d-4a3e-85d9-25cc30605ea0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.472631 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 11 07:42:48 crc kubenswrapper[5016]: E1011 07:42:48.472736 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:48.972717376 +0000 UTC m=+156.873173322 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.472921 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp"
Oct 11 07:42:48 crc kubenswrapper[5016]: E1011 07:42:48.473256 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:48.97324575 +0000 UTC m=+156.873701696 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.573773 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 11 07:42:48 crc kubenswrapper[5016]: E1011 07:42:48.573953 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:49.073921221 +0000 UTC m=+156.974377177 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.574356 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp"
Oct 11 07:42:48 crc kubenswrapper[5016]: E1011 07:42:48.574764 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:49.074748555 +0000 UTC m=+156.975204501 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.615098 5016 generic.go:334] "Generic (PLEG): container finished" podID="5e94d943-d0bf-4ffc-9109-3d821982dbc6" containerID="985a8ca07dbf560d5e8dc027352a1521d59e58a3d34c945fcbf1eb4f9bfd99b2" exitCode=0
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.615167 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fx6p5" event={"ID":"5e94d943-d0bf-4ffc-9109-3d821982dbc6","Type":"ContainerDied","Data":"985a8ca07dbf560d5e8dc027352a1521d59e58a3d34c945fcbf1eb4f9bfd99b2"}
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.615194 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fx6p5" event={"ID":"5e94d943-d0bf-4ffc-9109-3d821982dbc6","Type":"ContainerStarted","Data":"ed65416b9102c207879a5e37dd07d4f76ffad2ffeac1ee6adde1f9c64ae8fa0b"}
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.616644 5016 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.627114 5016 generic.go:334] "Generic (PLEG): container finished" podID="29ad589f-847e-44b2-9c6c-720c6ca1312d" containerID="1f468741f7b0a0dd4d36d08e0983f95b8f6cde1a5ad3b231546bf1e6a42b57e6" exitCode=0
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.627246 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn9bl" event={"ID":"29ad589f-847e-44b2-9c6c-720c6ca1312d","Type":"ContainerDied","Data":"1f468741f7b0a0dd4d36d08e0983f95b8f6cde1a5ad3b231546bf1e6a42b57e6"}
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.627272 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn9bl" event={"ID":"29ad589f-847e-44b2-9c6c-720c6ca1312d","Type":"ContainerStarted","Data":"9e58e042bb5295867eee9d707c198518fdcd0b27ce048ef800a05e293fe81e1d"}
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.631921 5016 generic.go:334] "Generic (PLEG): container finished" podID="add7f50e-e0bb-45cb-b76e-c3eec203832b" containerID="af847191aad4271319b6dc824960ed6aefdd72abb0d3e0d473e2b5c8cca57755" exitCode=0
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.631982 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fmcj" event={"ID":"add7f50e-e0bb-45cb-b76e-c3eec203832b","Type":"ContainerDied","Data":"af847191aad4271319b6dc824960ed6aefdd72abb0d3e0d473e2b5c8cca57755"}
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.632008 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fmcj" event={"ID":"add7f50e-e0bb-45cb-b76e-c3eec203832b","Type":"ContainerStarted","Data":"e53e2ae8bcbb811dc75cc75850d64a7e8528a177f4aaa0a23b5bb7def4f23086"}
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.637902 5016 generic.go:334] "Generic (PLEG): container finished" podID="ef441d82-59b8-4316-8950-b2aea1636de4" containerID="c92a06253242868d41b85662f57f883b3e71c11f0d9b76da47789fbf2b134bb4" exitCode=0
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.637962 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gv6cx" event={"ID":"ef441d82-59b8-4316-8950-b2aea1636de4","Type":"ContainerDied","Data":"c92a06253242868d41b85662f57f883b3e71c11f0d9b76da47789fbf2b134bb4"}
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.638000 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gv6cx" event={"ID":"ef441d82-59b8-4316-8950-b2aea1636de4","Type":"ContainerStarted","Data":"be2ae14c67e7692abf4c920ca3b82ee4d389de1d12826e5fb638dc9359c9189a"}
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.644060 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-b65rs" event={"ID":"319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2","Type":"ContainerStarted","Data":"536d97564a3fdb8762a181f512c402ca085635924135ad31467aa02c33edeccb"}
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.644094 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-b65rs" event={"ID":"319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2","Type":"ContainerStarted","Data":"069b93da63664103397d052ae0f1d4927741c239ec830ef2464ed21806812675"}
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.644113 5016 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.651602 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5bblf"
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.676849 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 11 07:42:48 crc kubenswrapper[5016]: E1011 07:42:48.677833 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:49.177816173 +0000 UTC m=+157.078272119 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.778977 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp"
Oct 11 07:42:48 crc kubenswrapper[5016]: E1011 07:42:48.782149 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:49.282135908 +0000 UTC m=+157.182591854 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.880133 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 11 07:42:48 crc kubenswrapper[5016]: E1011 07:42:48.880340 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:49.380314078 +0000 UTC m=+157.280770024 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.880738 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp"
Oct 11 07:42:48 crc kubenswrapper[5016]: E1011 07:42:48.881062 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:49.38105471 +0000 UTC m=+157.281510656 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.932090 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2swk6"]
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.933032 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2swk6"
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.936431 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.945011 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2swk6"]
Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.982306 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 11 07:42:48 crc kubenswrapper[5016]: E1011 07:42:48.982436 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:49.48241751 +0000 UTC m=+157.382873456 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.982460 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g94n\" (UniqueName: \"kubernetes.io/projected/d3c23e60-9dde-4c84-859f-60fb9fa03683-kube-api-access-7g94n\") pod \"redhat-marketplace-2swk6\" (UID: \"d3c23e60-9dde-4c84-859f-60fb9fa03683\") " pod="openshift-marketplace/redhat-marketplace-2swk6" Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.982490 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3c23e60-9dde-4c84-859f-60fb9fa03683-catalog-content\") pod \"redhat-marketplace-2swk6\" (UID: \"d3c23e60-9dde-4c84-859f-60fb9fa03683\") " pod="openshift-marketplace/redhat-marketplace-2swk6" Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.982546 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:48 crc kubenswrapper[5016]: I1011 07:42:48.982719 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3c23e60-9dde-4c84-859f-60fb9fa03683-utilities\") pod \"redhat-marketplace-2swk6\" (UID: \"d3c23e60-9dde-4c84-859f-60fb9fa03683\") " pod="openshift-marketplace/redhat-marketplace-2swk6" Oct 11 07:42:48 crc kubenswrapper[5016]: E1011 07:42:48.982779 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:49.48277121 +0000 UTC m=+157.383227156 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.084483 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:49 crc kubenswrapper[5016]: E1011 07:42:49.084682 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:49.584645685 +0000 UTC m=+157.485101631 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.084732 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g94n\" (UniqueName: \"kubernetes.io/projected/d3c23e60-9dde-4c84-859f-60fb9fa03683-kube-api-access-7g94n\") pod \"redhat-marketplace-2swk6\" (UID: \"d3c23e60-9dde-4c84-859f-60fb9fa03683\") " pod="openshift-marketplace/redhat-marketplace-2swk6" Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.084756 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3c23e60-9dde-4c84-859f-60fb9fa03683-catalog-content\") pod \"redhat-marketplace-2swk6\" (UID: \"d3c23e60-9dde-4c84-859f-60fb9fa03683\") " pod="openshift-marketplace/redhat-marketplace-2swk6" Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.084802 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.084826 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3c23e60-9dde-4c84-859f-60fb9fa03683-utilities\") pod \"redhat-marketplace-2swk6\" (UID: \"d3c23e60-9dde-4c84-859f-60fb9fa03683\") " pod="openshift-marketplace/redhat-marketplace-2swk6" Oct 11 07:42:49 crc kubenswrapper[5016]: E1011 07:42:49.085168 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:49.585147379 +0000 UTC m=+157.485603325 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.085308 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3c23e60-9dde-4c84-859f-60fb9fa03683-catalog-content\") pod \"redhat-marketplace-2swk6\" (UID: \"d3c23e60-9dde-4c84-859f-60fb9fa03683\") " pod="openshift-marketplace/redhat-marketplace-2swk6" Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.085349 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3c23e60-9dde-4c84-859f-60fb9fa03683-utilities\") pod \"redhat-marketplace-2swk6\" (UID: \"d3c23e60-9dde-4c84-859f-60fb9fa03683\") " pod="openshift-marketplace/redhat-marketplace-2swk6" Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.105582 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g94n\" (UniqueName: \"kubernetes.io/projected/d3c23e60-9dde-4c84-859f-60fb9fa03683-kube-api-access-7g94n\") pod \"redhat-marketplace-2swk6\" (UID: \"d3c23e60-9dde-4c84-859f-60fb9fa03683\") " pod="openshift-marketplace/redhat-marketplace-2swk6" Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.186153 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:49 crc kubenswrapper[5016]: E1011 07:42:49.186590 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-11 07:42:49.686571351 +0000 UTC m=+157.587027307 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.255644 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2swk6" Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.288220 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:49 crc kubenswrapper[5016]: E1011 07:42:49.288589 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-11 07:42:49.78857161 +0000 UTC m=+157.689027556 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sq9kp" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.340541 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z7v6x"] Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.341997 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z7v6x" Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.360621 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z7v6x"] Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.377167 5016 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-10-11T07:42:48.644133753Z","Handler":null,"Name":""} Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.386936 5016 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.386969 5016 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.390525 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.390816 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e4fa77c-0420-4669-b5df-2601e4ca6404-catalog-content\") pod \"redhat-marketplace-z7v6x\" (UID: \"2e4fa77c-0420-4669-b5df-2601e4ca6404\") " pod="openshift-marketplace/redhat-marketplace-z7v6x" Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.390878 5016 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psfxg\" (UniqueName: \"kubernetes.io/projected/2e4fa77c-0420-4669-b5df-2601e4ca6404-kube-api-access-psfxg\") pod \"redhat-marketplace-z7v6x\" (UID: \"2e4fa77c-0420-4669-b5df-2601e4ca6404\") " pod="openshift-marketplace/redhat-marketplace-z7v6x" Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.390930 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e4fa77c-0420-4669-b5df-2601e4ca6404-utilities\") pod \"redhat-marketplace-z7v6x\" (UID: \"2e4fa77c-0420-4669-b5df-2601e4ca6404\") " pod="openshift-marketplace/redhat-marketplace-z7v6x" Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.402916 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.424419 5016 patch_prober.go:28] interesting pod/router-default-5444994796-mn4hd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 07:42:49 crc kubenswrapper[5016]: [-]has-synced failed: reason withheld Oct 11 07:42:49 crc kubenswrapper[5016]: [+]process-running ok Oct 11 07:42:49 crc kubenswrapper[5016]: healthz check failed Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.424472 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mn4hd" podUID="76f2e3c8-c16d-4a3e-85d9-25cc30605ea0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.492084 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e4fa77c-0420-4669-b5df-2601e4ca6404-utilities\") pod \"redhat-marketplace-z7v6x\" (UID: \"2e4fa77c-0420-4669-b5df-2601e4ca6404\") " pod="openshift-marketplace/redhat-marketplace-z7v6x" Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.492143 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e4fa77c-0420-4669-b5df-2601e4ca6404-catalog-content\") pod \"redhat-marketplace-z7v6x\" (UID: \"2e4fa77c-0420-4669-b5df-2601e4ca6404\") " pod="openshift-marketplace/redhat-marketplace-z7v6x" Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.492194 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psfxg\" (UniqueName: \"kubernetes.io/projected/2e4fa77c-0420-4669-b5df-2601e4ca6404-kube-api-access-psfxg\") pod \"redhat-marketplace-z7v6x\" (UID: \"2e4fa77c-0420-4669-b5df-2601e4ca6404\") " pod="openshift-marketplace/redhat-marketplace-z7v6x" Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.492219 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.492899 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e4fa77c-0420-4669-b5df-2601e4ca6404-utilities\") pod \"redhat-marketplace-z7v6x\" (UID: \"2e4fa77c-0420-4669-b5df-2601e4ca6404\") " pod="openshift-marketplace/redhat-marketplace-z7v6x" Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.493114 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e4fa77c-0420-4669-b5df-2601e4ca6404-catalog-content\") pod \"redhat-marketplace-z7v6x\" (UID: \"2e4fa77c-0420-4669-b5df-2601e4ca6404\") " pod="openshift-marketplace/redhat-marketplace-z7v6x" Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.499497 5016 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.499548 5016 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.519480 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2swk6"] Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.521870 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psfxg\" (UniqueName: \"kubernetes.io/projected/2e4fa77c-0420-4669-b5df-2601e4ca6404-kube-api-access-psfxg\") pod \"redhat-marketplace-z7v6x\" (UID: \"2e4fa77c-0420-4669-b5df-2601e4ca6404\") " pod="openshift-marketplace/redhat-marketplace-z7v6x" Oct 11 07:42:49 crc kubenswrapper[5016]: W1011 07:42:49.544506 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3c23e60_9dde_4c84_859f_60fb9fa03683.slice/crio-1a2b94579971de93a5f579e7c3150cdb9036874597118cbc3f1342a58a1e3710 WatchSource:0}: Error finding container 1a2b94579971de93a5f579e7c3150cdb9036874597118cbc3f1342a58a1e3710: Status 404 returned error can't find the container with id 1a2b94579971de93a5f579e7c3150cdb9036874597118cbc3f1342a58a1e3710 Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.552616 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sq9kp\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.606380 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.656108 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-b65rs" event={"ID":"319e6ded-fdbf-440a-9bb0-9ef3bfa4f5a2","Type":"ContainerStarted","Data":"e43fbfc176853889dfc6c4114858e3790fc56c18a80c20510066c41c1cad86cd"} Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.665350 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2swk6" event={"ID":"d3c23e60-9dde-4c84-859f-60fb9fa03683","Type":"ContainerStarted","Data":"1a2b94579971de93a5f579e7c3150cdb9036874597118cbc3f1342a58a1e3710"} Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.680772 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-b65rs" podStartSLOduration=11.680736728 podStartE2EDuration="11.680736728s" podCreationTimestamp="2025-10-11 07:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:49.677461424 +0000 UTC m=+157.577917380" watchObservedRunningTime="2025-10-11 07:42:49.680736728 +0000 UTC m=+157.581192674" Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.690684 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z7v6x" Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.909500 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.910638 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.914059 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.916357 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.916471 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.936116 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4ddw2"] Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.946080 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4ddw2" Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.949489 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 11 07:42:49 crc kubenswrapper[5016]: I1011 07:42:49.955424 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4ddw2"] Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.001892 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/acba04bc-9a0a-4350-bf3a-0e404e0873d8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"acba04bc-9a0a-4350-bf3a-0e404e0873d8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.001963 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8vfq\" (UniqueName: \"kubernetes.io/projected/d4daa5a9-22b7-4859-a375-cb4dec19a7af-kube-api-access-z8vfq\") pod \"redhat-operators-4ddw2\" (UID: \"d4daa5a9-22b7-4859-a375-cb4dec19a7af\") " pod="openshift-marketplace/redhat-operators-4ddw2" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.002008 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4daa5a9-22b7-4859-a375-cb4dec19a7af-utilities\") pod \"redhat-operators-4ddw2\" (UID: \"d4daa5a9-22b7-4859-a375-cb4dec19a7af\") " pod="openshift-marketplace/redhat-operators-4ddw2" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.002043 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/acba04bc-9a0a-4350-bf3a-0e404e0873d8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"acba04bc-9a0a-4350-bf3a-0e404e0873d8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.002091 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4daa5a9-22b7-4859-a375-cb4dec19a7af-catalog-content\") pod \"redhat-operators-4ddw2\" (UID: \"d4daa5a9-22b7-4859-a375-cb4dec19a7af\") " pod="openshift-marketplace/redhat-operators-4ddw2" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.003944 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sq9kp"] Oct 11 07:42:50 crc kubenswrapper[5016]: W1011 07:42:50.040773 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6fae0ac_5622_48ee_9a1a_3997ee8b57aa.slice/crio-80e789724b857539db35566ea6f10ce947b54ae13c6a4c5a912368a63f9fdac5 WatchSource:0}: Error finding container 80e789724b857539db35566ea6f10ce947b54ae13c6a4c5a912368a63f9fdac5: Status 404 returned error can't find the container with id 80e789724b857539db35566ea6f10ce947b54ae13c6a4c5a912368a63f9fdac5 Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.047432 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z7v6x"] Oct 11 07:42:50 crc kubenswrapper[5016]: W1011 07:42:50.064780 5016 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e4fa77c_0420_4669_b5df_2601e4ca6404.slice/crio-b9d32b70509b727621f20821a2cbbd350ad951dd2f9fc378dd0063fceee00e7f WatchSource:0}: Error finding container b9d32b70509b727621f20821a2cbbd350ad951dd2f9fc378dd0063fceee00e7f: Status 404 returned error can't find the container with id b9d32b70509b727621f20821a2cbbd350ad951dd2f9fc378dd0063fceee00e7f Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.103533 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/acba04bc-9a0a-4350-bf3a-0e404e0873d8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"acba04bc-9a0a-4350-bf3a-0e404e0873d8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.103582 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8vfq\" (UniqueName: \"kubernetes.io/projected/d4daa5a9-22b7-4859-a375-cb4dec19a7af-kube-api-access-z8vfq\") pod \"redhat-operators-4ddw2\" (UID: \"d4daa5a9-22b7-4859-a375-cb4dec19a7af\") " pod="openshift-marketplace/redhat-operators-4ddw2" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.103637 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4daa5a9-22b7-4859-a375-cb4dec19a7af-utilities\") pod \"redhat-operators-4ddw2\" (UID: \"d4daa5a9-22b7-4859-a375-cb4dec19a7af\") " pod="openshift-marketplace/redhat-operators-4ddw2" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.103681 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/acba04bc-9a0a-4350-bf3a-0e404e0873d8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"acba04bc-9a0a-4350-bf3a-0e404e0873d8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.103717 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4daa5a9-22b7-4859-a375-cb4dec19a7af-catalog-content\") pod \"redhat-operators-4ddw2\" (UID: \"d4daa5a9-22b7-4859-a375-cb4dec19a7af\") " pod="openshift-marketplace/redhat-operators-4ddw2" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.103982 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/acba04bc-9a0a-4350-bf3a-0e404e0873d8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"acba04bc-9a0a-4350-bf3a-0e404e0873d8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.104978 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4daa5a9-22b7-4859-a375-cb4dec19a7af-catalog-content\") pod \"redhat-operators-4ddw2\" (UID: \"d4daa5a9-22b7-4859-a375-cb4dec19a7af\") " pod="openshift-marketplace/redhat-operators-4ddw2" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.105135 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4daa5a9-22b7-4859-a375-cb4dec19a7af-utilities\") pod \"redhat-operators-4ddw2\" (UID: \"d4daa5a9-22b7-4859-a375-cb4dec19a7af\") " pod="openshift-marketplace/redhat-operators-4ddw2" Oct 11 07:42:50 crc 
kubenswrapper[5016]: I1011 07:42:50.121906 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/acba04bc-9a0a-4350-bf3a-0e404e0873d8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"acba04bc-9a0a-4350-bf3a-0e404e0873d8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.124440 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8vfq\" (UniqueName: \"kubernetes.io/projected/d4daa5a9-22b7-4859-a375-cb4dec19a7af-kube-api-access-z8vfq\") pod \"redhat-operators-4ddw2\" (UID: \"d4daa5a9-22b7-4859-a375-cb4dec19a7af\") " pod="openshift-marketplace/redhat-operators-4ddw2" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.248766 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.271429 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4ddw2" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.339905 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hwwjd"] Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.347116 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hwwjd"] Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.347251 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hwwjd" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.408873 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7ns9\" (UniqueName: \"kubernetes.io/projected/06ae2c13-8dcb-4b69-af1b-668fbb548730-kube-api-access-n7ns9\") pod \"redhat-operators-hwwjd\" (UID: \"06ae2c13-8dcb-4b69-af1b-668fbb548730\") " pod="openshift-marketplace/redhat-operators-hwwjd" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.408928 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06ae2c13-8dcb-4b69-af1b-668fbb548730-utilities\") pod \"redhat-operators-hwwjd\" (UID: \"06ae2c13-8dcb-4b69-af1b-668fbb548730\") " pod="openshift-marketplace/redhat-operators-hwwjd" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.408979 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06ae2c13-8dcb-4b69-af1b-668fbb548730-catalog-content\") pod \"redhat-operators-hwwjd\" (UID: \"06ae2c13-8dcb-4b69-af1b-668fbb548730\") " pod="openshift-marketplace/redhat-operators-hwwjd" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.420967 5016 patch_prober.go:28] interesting pod/router-default-5444994796-mn4hd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 07:42:50 crc kubenswrapper[5016]: [-]has-synced failed: reason withheld Oct 11 07:42:50 crc kubenswrapper[5016]: [+]process-running ok Oct 11 07:42:50 crc kubenswrapper[5016]: healthz check failed Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.421271 5016 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-mn4hd" podUID="76f2e3c8-c16d-4a3e-85d9-25cc30605ea0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.504867 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-vz5gw" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.509908 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06ae2c13-8dcb-4b69-af1b-668fbb548730-catalog-content\") pod \"redhat-operators-hwwjd\" (UID: \"06ae2c13-8dcb-4b69-af1b-668fbb548730\") " pod="openshift-marketplace/redhat-operators-hwwjd" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.509988 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7ns9\" (UniqueName: \"kubernetes.io/projected/06ae2c13-8dcb-4b69-af1b-668fbb548730-kube-api-access-n7ns9\") pod \"redhat-operators-hwwjd\" (UID: \"06ae2c13-8dcb-4b69-af1b-668fbb548730\") " pod="openshift-marketplace/redhat-operators-hwwjd" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.510017 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06ae2c13-8dcb-4b69-af1b-668fbb548730-utilities\") pod \"redhat-operators-hwwjd\" (UID: \"06ae2c13-8dcb-4b69-af1b-668fbb548730\") " pod="openshift-marketplace/redhat-operators-hwwjd" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.510546 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06ae2c13-8dcb-4b69-af1b-668fbb548730-catalog-content\") pod \"redhat-operators-hwwjd\" (UID: \"06ae2c13-8dcb-4b69-af1b-668fbb548730\") " pod="openshift-marketplace/redhat-operators-hwwjd" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.510677 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06ae2c13-8dcb-4b69-af1b-668fbb548730-utilities\") pod \"redhat-operators-hwwjd\" (UID: \"06ae2c13-8dcb-4b69-af1b-668fbb548730\") " pod="openshift-marketplace/redhat-operators-hwwjd" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.566369 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7ns9\" (UniqueName: \"kubernetes.io/projected/06ae2c13-8dcb-4b69-af1b-668fbb548730-kube-api-access-n7ns9\") pod \"redhat-operators-hwwjd\" (UID: \"06ae2c13-8dcb-4b69-af1b-668fbb548730\") " pod="openshift-marketplace/redhat-operators-hwwjd" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.643545 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.689913 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" event={"ID":"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa","Type":"ContainerStarted","Data":"8f4e5f560a23bf4064822719bdc868a129a16fc9e7a5504bebfd59362b9ee387"} Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.690327 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" 
event={"ID":"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa","Type":"ContainerStarted","Data":"80e789724b857539db35566ea6f10ce947b54ae13c6a4c5a912368a63f9fdac5"} Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.692152 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.697212 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4ddw2"] Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.704431 5016 generic.go:334] "Generic (PLEG): container finished" podID="2e4fa77c-0420-4669-b5df-2601e4ca6404" containerID="0fcdbb081722bbe709909bafd62051fc0fd843db054794630485f240114ec5d1" exitCode=0 Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.704756 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z7v6x" event={"ID":"2e4fa77c-0420-4669-b5df-2601e4ca6404","Type":"ContainerDied","Data":"0fcdbb081722bbe709909bafd62051fc0fd843db054794630485f240114ec5d1"} Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.704794 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z7v6x" event={"ID":"2e4fa77c-0420-4669-b5df-2601e4ca6404","Type":"ContainerStarted","Data":"b9d32b70509b727621f20821a2cbbd350ad951dd2f9fc378dd0063fceee00e7f"} Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.708982 5016 generic.go:334] "Generic (PLEG): container finished" podID="d3c23e60-9dde-4c84-859f-60fb9fa03683" containerID="f79d0a91befc0ed25e1a2aab94ec7f3f932cbccc4c3c1ac7248bc4e67a650975" exitCode=0 Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.709375 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2swk6" event={"ID":"d3c23e60-9dde-4c84-859f-60fb9fa03683","Type":"ContainerDied","Data":"f79d0a91befc0ed25e1a2aab94ec7f3f932cbccc4c3c1ac7248bc4e67a650975"} Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.714391 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" podStartSLOduration=136.714378488 podStartE2EDuration="2m16.714378488s" podCreationTimestamp="2025-10-11 07:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:50.711029834 +0000 UTC m=+158.611485780" watchObservedRunningTime="2025-10-11 07:42:50.714378488 +0000 UTC m=+158.614834434" Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.714911 5016 generic.go:334] "Generic (PLEG): container finished" podID="a2fca8b5-8ccb-4100-8570-82b07bdae3ee" containerID="75a011fcbc4c849ec1e506fbdc328a7fc66a856e7a8b26e53b7ee3501bef9b13" exitCode=0 Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.716782 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336130-mtlhx" event={"ID":"a2fca8b5-8ccb-4100-8570-82b07bdae3ee","Type":"ContainerDied","Data":"75a011fcbc4c849ec1e506fbdc328a7fc66a856e7a8b26e53b7ee3501bef9b13"} Oct 11 07:42:50 crc kubenswrapper[5016]: I1011 07:42:50.735410 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hwwjd" Oct 11 07:42:50 crc kubenswrapper[5016]: W1011 07:42:50.774207 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4daa5a9_22b7_4859_a375_cb4dec19a7af.slice/crio-f00fb3d2e432608f2349435dfb6fabc3141d271ff765c59a9ba2c8a16fadf515 WatchSource:0}: Error finding container f00fb3d2e432608f2349435dfb6fabc3141d271ff765c59a9ba2c8a16fadf515: Status 404 returned error can't find the container with id f00fb3d2e432608f2349435dfb6fabc3141d271ff765c59a9ba2c8a16fadf515 Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.066373 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hwwjd"] Oct 11 07:42:51 crc kubenswrapper[5016]: W1011 07:42:51.110923 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06ae2c13_8dcb_4b69_af1b_668fbb548730.slice/crio-6c90b3d67e2f9a1e0e9bb522dbe55a19bd28f550b5a02db12bca953424121ac4 WatchSource:0}: Error finding container 6c90b3d67e2f9a1e0e9bb522dbe55a19bd28f550b5a02db12bca953424121ac4: Status 404 returned error can't find the container with id 6c90b3d67e2f9a1e0e9bb522dbe55a19bd28f550b5a02db12bca953424121ac4 Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.132566 5016 patch_prober.go:28] interesting pod/downloads-7954f5f757-znwnv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.132627 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-znwnv" podUID="fee3401d-bf88-49cd-b228-a4e89c6dd40e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.132578 5016 patch_prober.go:28] interesting pod/downloads-7954f5f757-znwnv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.132744 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-znwnv" podUID="fee3401d-bf88-49cd-b228-a4e89c6dd40e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.143786 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.150451 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.151079 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.157692 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" 
Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.310067 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.310116 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.318246 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.426805 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-mn4hd" Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.435924 5016 patch_prober.go:28] interesting pod/router-default-5444994796-mn4hd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 07:42:51 crc kubenswrapper[5016]: [-]has-synced failed: reason withheld Oct 11 07:42:51 crc kubenswrapper[5016]: [+]process-running ok Oct 11 07:42:51 crc kubenswrapper[5016]: healthz check failed Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.435990 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mn4hd" podUID="76f2e3c8-c16d-4a3e-85d9-25cc30605ea0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.737253 5016 generic.go:334] "Generic (PLEG): container finished" podID="d4daa5a9-22b7-4859-a375-cb4dec19a7af" containerID="3946314fa8be3adfe2e0e49a68bbe8d4f50db7c05cdce8af13355145e7947be9" exitCode=0 Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.737319 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ddw2" event={"ID":"d4daa5a9-22b7-4859-a375-cb4dec19a7af","Type":"ContainerDied","Data":"3946314fa8be3adfe2e0e49a68bbe8d4f50db7c05cdce8af13355145e7947be9"} Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.737343 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ddw2" event={"ID":"d4daa5a9-22b7-4859-a375-cb4dec19a7af","Type":"ContainerStarted","Data":"f00fb3d2e432608f2349435dfb6fabc3141d271ff765c59a9ba2c8a16fadf515"} Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.739860 5016 generic.go:334] "Generic (PLEG): container finished" podID="06ae2c13-8dcb-4b69-af1b-668fbb548730" containerID="08a37896fd418108b2eaa11999dd4b3618cab7c264a063476428f492bc8b24b8" exitCode=0 Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.740071 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwwjd" event={"ID":"06ae2c13-8dcb-4b69-af1b-668fbb548730","Type":"ContainerDied","Data":"08a37896fd418108b2eaa11999dd4b3618cab7c264a063476428f492bc8b24b8"} Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.740118 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwwjd" event={"ID":"06ae2c13-8dcb-4b69-af1b-668fbb548730","Type":"ContainerStarted","Data":"6c90b3d67e2f9a1e0e9bb522dbe55a19bd28f550b5a02db12bca953424121ac4"} Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.744422 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"acba04bc-9a0a-4350-bf3a-0e404e0873d8","Type":"ContainerStarted","Data":"9a8dab5344a37d229ad7edb71feb40a80bee8565c5d2d59c4b134411906902a9"} Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.744462 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"acba04bc-9a0a-4350-bf3a-0e404e0873d8","Type":"ContainerStarted","Data":"bd619da9c73867865c3841ef25ce886508f2518bbdae0d3937f76123005dbc3b"} Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.752696 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-bhwkr" Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.754161 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-jp4qx" Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.780098 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.780739 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.782332 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.782607 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.798778 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.878860 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.87879861 podStartE2EDuration="2.87879861s" podCreationTimestamp="2025-10-11 07:42:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:51.876868055 +0000 UTC m=+159.777323991" watchObservedRunningTime="2025-10-11 07:42:51.87879861 +0000 UTC m=+159.779254556" Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.943773 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5d0511e-bd2e-4744-a435-ed8ef3bbd465-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d5d0511e-bd2e-4744-a435-ed8ef3bbd465\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 11 07:42:51 crc kubenswrapper[5016]: I1011 07:42:51.944939 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5d0511e-bd2e-4744-a435-ed8ef3bbd465-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d5d0511e-bd2e-4744-a435-ed8ef3bbd465\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 11 07:42:52 crc kubenswrapper[5016]: I1011 07:42:52.046673 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5d0511e-bd2e-4744-a435-ed8ef3bbd465-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d5d0511e-bd2e-4744-a435-ed8ef3bbd465\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 11 07:42:52 crc kubenswrapper[5016]: I1011 07:42:52.046736 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5d0511e-bd2e-4744-a435-ed8ef3bbd465-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d5d0511e-bd2e-4744-a435-ed8ef3bbd465\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 11 07:42:52 crc kubenswrapper[5016]: I1011 07:42:52.046844 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5d0511e-bd2e-4744-a435-ed8ef3bbd465-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d5d0511e-bd2e-4744-a435-ed8ef3bbd465\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 11 07:42:52 crc kubenswrapper[5016]: I1011 07:42:52.068709 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5d0511e-bd2e-4744-a435-ed8ef3bbd465-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d5d0511e-bd2e-4744-a435-ed8ef3bbd465\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 11 07:42:52 crc kubenswrapper[5016]: I1011 07:42:52.115131 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-vmvvh" Oct 11 07:42:52 crc kubenswrapper[5016]: I1011 07:42:52.115409 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-vmvvh" Oct 11 07:42:52 crc kubenswrapper[5016]: I1011 07:42:52.117335 5016 patch_prober.go:28] interesting pod/console-f9d7485db-vmvvh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.34:8443/health\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Oct 11 07:42:52 crc kubenswrapper[5016]: I1011 07:42:52.117374 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-vmvvh" podUID="eb6630cb-0062-4461-bf51-c45f7e4e7478" containerName="console" probeResult="failure" output="Get \"https://10.217.0.34:8443/health\": dial tcp 10.217.0.34:8443: connect: connection refused" Oct 11 07:42:52 crc kubenswrapper[5016]: I1011 07:42:52.141100 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 11 07:42:52 crc kubenswrapper[5016]: I1011 07:42:52.355927 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336130-mtlhx" Oct 11 07:42:52 crc kubenswrapper[5016]: I1011 07:42:52.419867 5016 patch_prober.go:28] interesting pod/router-default-5444994796-mn4hd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 07:42:52 crc kubenswrapper[5016]: [-]has-synced failed: reason withheld Oct 11 07:42:52 crc kubenswrapper[5016]: [+]process-running ok Oct 11 07:42:52 crc kubenswrapper[5016]: healthz check failed Oct 11 07:42:52 crc kubenswrapper[5016]: I1011 07:42:52.419991 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mn4hd" podUID="76f2e3c8-c16d-4a3e-85d9-25cc30605ea0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 07:42:52 crc kubenswrapper[5016]: I1011 07:42:52.453373 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfsxn\" (UniqueName: \"kubernetes.io/projected/a2fca8b5-8ccb-4100-8570-82b07bdae3ee-kube-api-access-lfsxn\") pod \"a2fca8b5-8ccb-4100-8570-82b07bdae3ee\" (UID: \"a2fca8b5-8ccb-4100-8570-82b07bdae3ee\") " Oct 11 07:42:52 crc kubenswrapper[5016]: I1011 07:42:52.453479 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2fca8b5-8ccb-4100-8570-82b07bdae3ee-secret-volume\") pod \"a2fca8b5-8ccb-4100-8570-82b07bdae3ee\" (UID: \"a2fca8b5-8ccb-4100-8570-82b07bdae3ee\") " Oct 11 07:42:52 crc kubenswrapper[5016]: I1011 07:42:52.453536 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2fca8b5-8ccb-4100-8570-82b07bdae3ee-config-volume\") pod \"a2fca8b5-8ccb-4100-8570-82b07bdae3ee\" (UID: \"a2fca8b5-8ccb-4100-8570-82b07bdae3ee\") " Oct 11 07:42:52 crc kubenswrapper[5016]: I1011 07:42:52.454485 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2fca8b5-8ccb-4100-8570-82b07bdae3ee-config-volume" (OuterVolumeSpecName: "config-volume") pod "a2fca8b5-8ccb-4100-8570-82b07bdae3ee" (UID: "a2fca8b5-8ccb-4100-8570-82b07bdae3ee"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:42:52 crc kubenswrapper[5016]: I1011 07:42:52.458193 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2fca8b5-8ccb-4100-8570-82b07bdae3ee-kube-api-access-lfsxn" (OuterVolumeSpecName: "kube-api-access-lfsxn") pod "a2fca8b5-8ccb-4100-8570-82b07bdae3ee" (UID: "a2fca8b5-8ccb-4100-8570-82b07bdae3ee"). InnerVolumeSpecName "kube-api-access-lfsxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:42:52 crc kubenswrapper[5016]: I1011 07:42:52.469081 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2fca8b5-8ccb-4100-8570-82b07bdae3ee-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a2fca8b5-8ccb-4100-8570-82b07bdae3ee" (UID: "a2fca8b5-8ccb-4100-8570-82b07bdae3ee"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:42:52 crc kubenswrapper[5016]: I1011 07:42:52.496458 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 11 07:42:52 crc kubenswrapper[5016]: I1011 07:42:52.555203 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfsxn\" (UniqueName: \"kubernetes.io/projected/a2fca8b5-8ccb-4100-8570-82b07bdae3ee-kube-api-access-lfsxn\") on node \"crc\" DevicePath \"\"" Oct 11 07:42:52 crc kubenswrapper[5016]: I1011 07:42:52.555232 5016 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2fca8b5-8ccb-4100-8570-82b07bdae3ee-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 11 07:42:52 crc kubenswrapper[5016]: I1011 07:42:52.555244 5016 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2fca8b5-8ccb-4100-8570-82b07bdae3ee-config-volume\") on node \"crc\" DevicePath \"\"" Oct 11 07:42:52 crc kubenswrapper[5016]: I1011 07:42:52.769431 5016 generic.go:334] "Generic (PLEG): container finished" podID="acba04bc-9a0a-4350-bf3a-0e404e0873d8" containerID="9a8dab5344a37d229ad7edb71feb40a80bee8565c5d2d59c4b134411906902a9" exitCode=0 Oct 11 07:42:52 crc kubenswrapper[5016]: I1011 07:42:52.769495 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"acba04bc-9a0a-4350-bf3a-0e404e0873d8","Type":"ContainerDied","Data":"9a8dab5344a37d229ad7edb71feb40a80bee8565c5d2d59c4b134411906902a9"} Oct 11 07:42:52 crc kubenswrapper[5016]: I1011 07:42:52.794270 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336130-mtlhx" event={"ID":"a2fca8b5-8ccb-4100-8570-82b07bdae3ee","Type":"ContainerDied","Data":"edebe48bb85259f3d5a9fc452fbc1a3fc4150df3f9b10bafaa39ce32c51559d3"} Oct 11 07:42:52 crc kubenswrapper[5016]: I1011 07:42:52.794295 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336130-mtlhx" Oct 11 07:42:52 crc kubenswrapper[5016]: I1011 07:42:52.794310 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edebe48bb85259f3d5a9fc452fbc1a3fc4150df3f9b10bafaa39ce32c51559d3" Oct 11 07:42:52 crc kubenswrapper[5016]: I1011 07:42:52.808475 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d5d0511e-bd2e-4744-a435-ed8ef3bbd465","Type":"ContainerStarted","Data":"331f5bfa868b3c99f1d6fb0645f0a438e3120bda24786bbcf346dcb25d79bea0"} Oct 11 07:42:53 crc kubenswrapper[5016]: I1011 07:42:53.419935 5016 patch_prober.go:28] interesting pod/router-default-5444994796-mn4hd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 07:42:53 crc kubenswrapper[5016]: [-]has-synced failed: reason withheld Oct 11 07:42:53 crc kubenswrapper[5016]: [+]process-running ok Oct 11 07:42:53 crc kubenswrapper[5016]: healthz check failed Oct 11 07:42:53 crc kubenswrapper[5016]: I1011 07:42:53.420273 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mn4hd" podUID="76f2e3c8-c16d-4a3e-85d9-25cc30605ea0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 07:42:53 crc kubenswrapper[5016]: I1011 07:42:53.435261 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-45lst" Oct 11 07:42:53 crc kubenswrapper[5016]: I1011 07:42:53.826203 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d5d0511e-bd2e-4744-a435-ed8ef3bbd465","Type":"ContainerStarted","Data":"9ed634b9006c6be18161c07bc7466e1ec9a119db71b0ffb385a7586d1f45f83e"} Oct 11 07:42:53 crc kubenswrapper[5016]: I1011 07:42:53.838922 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.838903925 podStartE2EDuration="2.838903925s" podCreationTimestamp="2025-10-11 07:42:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:42:53.837891327 +0000 UTC m=+161.738347273" watchObservedRunningTime="2025-10-11 07:42:53.838903925 +0000 UTC m=+161.739359871" Oct 11 07:42:54 crc kubenswrapper[5016]: I1011 07:42:54.067290 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 11 07:42:54 crc kubenswrapper[5016]: I1011 07:42:54.189270 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/acba04bc-9a0a-4350-bf3a-0e404e0873d8-kubelet-dir\") pod \"acba04bc-9a0a-4350-bf3a-0e404e0873d8\" (UID: \"acba04bc-9a0a-4350-bf3a-0e404e0873d8\") " Oct 11 07:42:54 crc kubenswrapper[5016]: I1011 07:42:54.189787 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acba04bc-9a0a-4350-bf3a-0e404e0873d8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "acba04bc-9a0a-4350-bf3a-0e404e0873d8" (UID: "acba04bc-9a0a-4350-bf3a-0e404e0873d8"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 07:42:54 crc kubenswrapper[5016]: I1011 07:42:54.190766 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/acba04bc-9a0a-4350-bf3a-0e404e0873d8-kube-api-access\") pod \"acba04bc-9a0a-4350-bf3a-0e404e0873d8\" (UID: \"acba04bc-9a0a-4350-bf3a-0e404e0873d8\") " Oct 11 07:42:54 crc kubenswrapper[5016]: I1011 07:42:54.191042 5016 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/acba04bc-9a0a-4350-bf3a-0e404e0873d8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 11 07:42:54 crc kubenswrapper[5016]: I1011 07:42:54.197446 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acba04bc-9a0a-4350-bf3a-0e404e0873d8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "acba04bc-9a0a-4350-bf3a-0e404e0873d8" (UID: "acba04bc-9a0a-4350-bf3a-0e404e0873d8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:42:54 crc kubenswrapper[5016]: I1011 07:42:54.292329 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/acba04bc-9a0a-4350-bf3a-0e404e0873d8-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 11 07:42:54 crc kubenswrapper[5016]: I1011 07:42:54.423364 5016 patch_prober.go:28] interesting pod/router-default-5444994796-mn4hd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 07:42:54 crc kubenswrapper[5016]: [-]has-synced failed: reason withheld Oct 11 07:42:54 crc kubenswrapper[5016]: [+]process-running ok Oct 11 07:42:54 crc kubenswrapper[5016]: healthz check failed Oct 11 07:42:54 crc kubenswrapper[5016]: I1011 07:42:54.423420 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mn4hd" podUID="76f2e3c8-c16d-4a3e-85d9-25cc30605ea0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 07:42:54 crc kubenswrapper[5016]: I1011 07:42:54.841556 5016 generic.go:334] "Generic (PLEG): container finished" podID="d5d0511e-bd2e-4744-a435-ed8ef3bbd465" containerID="9ed634b9006c6be18161c07bc7466e1ec9a119db71b0ffb385a7586d1f45f83e" exitCode=0 Oct 11 07:42:54 crc kubenswrapper[5016]: I1011 07:42:54.841620 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d5d0511e-bd2e-4744-a435-ed8ef3bbd465","Type":"ContainerDied","Data":"9ed634b9006c6be18161c07bc7466e1ec9a119db71b0ffb385a7586d1f45f83e"} Oct 11 07:42:54 crc kubenswrapper[5016]: I1011 07:42:54.846365 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"acba04bc-9a0a-4350-bf3a-0e404e0873d8","Type":"ContainerDied","Data":"bd619da9c73867865c3841ef25ce886508f2518bbdae0d3937f76123005dbc3b"} Oct 11 07:42:54 crc kubenswrapper[5016]: I1011 07:42:54.846400 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd619da9c73867865c3841ef25ce886508f2518bbdae0d3937f76123005dbc3b" Oct 11 07:42:54 crc kubenswrapper[5016]: I1011 07:42:54.846443 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 11 07:42:55 crc kubenswrapper[5016]: I1011 07:42:55.418452 5016 patch_prober.go:28] interesting pod/router-default-5444994796-mn4hd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 07:42:55 crc kubenswrapper[5016]: [-]has-synced failed: reason withheld Oct 11 07:42:55 crc kubenswrapper[5016]: [+]process-running ok Oct 11 07:42:55 crc kubenswrapper[5016]: healthz check failed Oct 11 07:42:55 crc kubenswrapper[5016]: I1011 07:42:55.418524 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mn4hd" podUID="76f2e3c8-c16d-4a3e-85d9-25cc30605ea0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 07:42:56 crc kubenswrapper[5016]: I1011 07:42:56.419677 5016 patch_prober.go:28] interesting pod/router-default-5444994796-mn4hd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 07:42:56 crc kubenswrapper[5016]: [-]has-synced failed: reason withheld Oct 11 07:42:56 crc kubenswrapper[5016]: [+]process-running ok Oct 11 07:42:56 crc kubenswrapper[5016]: healthz check failed Oct 11 07:42:56 crc kubenswrapper[5016]: I1011 07:42:56.420008 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mn4hd" podUID="76f2e3c8-c16d-4a3e-85d9-25cc30605ea0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 07:42:57 crc kubenswrapper[5016]: I1011 07:42:57.419194 5016 patch_prober.go:28] interesting pod/router-default-5444994796-mn4hd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 07:42:57 crc kubenswrapper[5016]: [-]has-synced failed: reason withheld Oct 11 07:42:57 crc kubenswrapper[5016]: [+]process-running ok Oct 11 07:42:57 crc kubenswrapper[5016]: healthz check failed Oct 11 07:42:57 crc kubenswrapper[5016]: I1011 07:42:57.419256 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mn4hd" podUID="76f2e3c8-c16d-4a3e-85d9-25cc30605ea0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 07:42:57 crc kubenswrapper[5016]: I1011 07:42:57.942046 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ceaf34e-81b3-457f-8f03-d807f795392b-metrics-certs\") pod \"network-metrics-daemon-459lg\" (UID: \"9ceaf34e-81b3-457f-8f03-d807f795392b\") " pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:42:57 crc kubenswrapper[5016]: I1011 07:42:57.954861 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9ceaf34e-81b3-457f-8f03-d807f795392b-metrics-certs\") pod \"network-metrics-daemon-459lg\" (UID: \"9ceaf34e-81b3-457f-8f03-d807f795392b\") " pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:42:57 crc kubenswrapper[5016]: I1011 07:42:57.970825 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-459lg" Oct 11 07:42:58 crc kubenswrapper[5016]: I1011 07:42:58.420275 5016 patch_prober.go:28] interesting pod/router-default-5444994796-mn4hd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 07:42:58 crc kubenswrapper[5016]: [-]has-synced failed: reason withheld Oct 11 07:42:58 crc kubenswrapper[5016]: [+]process-running ok Oct 11 07:42:58 crc kubenswrapper[5016]: healthz check failed Oct 11 07:42:58 crc kubenswrapper[5016]: I1011 07:42:58.420404 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mn4hd" podUID="76f2e3c8-c16d-4a3e-85d9-25cc30605ea0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 07:42:59 crc kubenswrapper[5016]: I1011 07:42:59.419042 5016 patch_prober.go:28] interesting pod/router-default-5444994796-mn4hd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 07:42:59 crc kubenswrapper[5016]: [-]has-synced failed: reason withheld Oct 11 07:42:59 crc kubenswrapper[5016]: [+]process-running ok Oct 11 07:42:59 crc kubenswrapper[5016]: healthz check failed Oct 11 07:42:59 crc kubenswrapper[5016]: I1011 07:42:59.419095 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mn4hd" podUID="76f2e3c8-c16d-4a3e-85d9-25cc30605ea0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 07:43:00 crc kubenswrapper[5016]: I1011 07:43:00.419036 5016 patch_prober.go:28] interesting pod/router-default-5444994796-mn4hd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 07:43:00 crc kubenswrapper[5016]: [-]has-synced failed: reason withheld Oct 11 07:43:00 crc kubenswrapper[5016]: [+]process-running ok Oct 11 07:43:00 crc kubenswrapper[5016]: healthz check failed Oct 11 07:43:00 crc kubenswrapper[5016]: I1011 07:43:00.419096 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mn4hd" podUID="76f2e3c8-c16d-4a3e-85d9-25cc30605ea0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 07:43:00 crc kubenswrapper[5016]: I1011 07:43:00.888362 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 11 07:43:00 crc kubenswrapper[5016]: I1011 07:43:00.920330 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d5d0511e-bd2e-4744-a435-ed8ef3bbd465","Type":"ContainerDied","Data":"331f5bfa868b3c99f1d6fb0645f0a438e3120bda24786bbcf346dcb25d79bea0"} Oct 11 07:43:00 crc kubenswrapper[5016]: I1011 07:43:00.920369 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="331f5bfa868b3c99f1d6fb0645f0a438e3120bda24786bbcf346dcb25d79bea0" Oct 11 07:43:00 crc kubenswrapper[5016]: I1011 07:43:00.920418 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 11 07:43:00 crc kubenswrapper[5016]: I1011 07:43:00.979806 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5d0511e-bd2e-4744-a435-ed8ef3bbd465-kube-api-access\") pod \"d5d0511e-bd2e-4744-a435-ed8ef3bbd465\" (UID: \"d5d0511e-bd2e-4744-a435-ed8ef3bbd465\") " Oct 11 07:43:00 crc kubenswrapper[5016]: I1011 07:43:00.979920 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5d0511e-bd2e-4744-a435-ed8ef3bbd465-kubelet-dir\") pod \"d5d0511e-bd2e-4744-a435-ed8ef3bbd465\" (UID: \"d5d0511e-bd2e-4744-a435-ed8ef3bbd465\") " Oct 11 07:43:00 crc kubenswrapper[5016]: I1011 07:43:00.980040 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5d0511e-bd2e-4744-a435-ed8ef3bbd465-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d5d0511e-bd2e-4744-a435-ed8ef3bbd465" (UID: "d5d0511e-bd2e-4744-a435-ed8ef3bbd465"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 07:43:00 crc kubenswrapper[5016]: I1011 07:43:00.990115 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5d0511e-bd2e-4744-a435-ed8ef3bbd465-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d5d0511e-bd2e-4744-a435-ed8ef3bbd465" (UID: "d5d0511e-bd2e-4744-a435-ed8ef3bbd465"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:43:01 crc kubenswrapper[5016]: I1011 07:43:01.081515 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5d0511e-bd2e-4744-a435-ed8ef3bbd465-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 11 07:43:01 crc kubenswrapper[5016]: I1011 07:43:01.081549 5016 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5d0511e-bd2e-4744-a435-ed8ef3bbd465-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 11 07:43:01 crc kubenswrapper[5016]: I1011 07:43:01.139838 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-znwnv" Oct 11 07:43:01 crc kubenswrapper[5016]: I1011 07:43:01.419380 5016 patch_prober.go:28] interesting pod/router-default-5444994796-mn4hd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 07:43:01 crc kubenswrapper[5016]: [-]has-synced failed: reason withheld Oct 11 07:43:01 crc kubenswrapper[5016]: [+]process-running ok Oct 11 07:43:01 crc kubenswrapper[5016]: healthz check failed Oct 11 07:43:01 crc kubenswrapper[5016]: I1011 07:43:01.419468 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mn4hd" podUID="76f2e3c8-c16d-4a3e-85d9-25cc30605ea0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 07:43:01 crc kubenswrapper[5016]: I1011 07:43:01.823735 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-459lg"] Oct 11 07:43:01 crc kubenswrapper[5016]: W1011 07:43:01.832249 5016 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ceaf34e_81b3_457f_8f03_d807f795392b.slice/crio-c88284c22763495563a64d5bd037b3e7014dea83f5d7e63d1951566ea47bfa02 WatchSource:0}: Error finding container c88284c22763495563a64d5bd037b3e7014dea83f5d7e63d1951566ea47bfa02: Status 404 returned error can't find the container with id c88284c22763495563a64d5bd037b3e7014dea83f5d7e63d1951566ea47bfa02 Oct 11 07:43:01 crc kubenswrapper[5016]: I1011 07:43:01.928979 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-459lg" event={"ID":"9ceaf34e-81b3-457f-8f03-d807f795392b","Type":"ContainerStarted","Data":"c88284c22763495563a64d5bd037b3e7014dea83f5d7e63d1951566ea47bfa02"} Oct 11 07:43:02 crc kubenswrapper[5016]: I1011 07:43:02.116032 5016 patch_prober.go:28] interesting pod/console-f9d7485db-vmvvh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.34:8443/health\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Oct 11 07:43:02 crc kubenswrapper[5016]: I1011 07:43:02.116121 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-vmvvh" podUID="eb6630cb-0062-4461-bf51-c45f7e4e7478" containerName="console" probeResult="failure" output="Get \"https://10.217.0.34:8443/health\": dial tcp 10.217.0.34:8443: connect: connection refused" Oct 11 07:43:02 crc kubenswrapper[5016]: I1011 07:43:02.419828 5016 patch_prober.go:28] interesting pod/router-default-5444994796-mn4hd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 11 07:43:02 crc kubenswrapper[5016]: [-]has-synced failed: reason withheld Oct 11 07:43:02 crc kubenswrapper[5016]: [+]process-running ok Oct 11 07:43:02 crc kubenswrapper[5016]: healthz check failed Oct 11 07:43:02 crc kubenswrapper[5016]: I1011 07:43:02.419895 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mn4hd" podUID="76f2e3c8-c16d-4a3e-85d9-25cc30605ea0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 11 07:43:02 crc kubenswrapper[5016]: I1011 07:43:02.960489 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-459lg" event={"ID":"9ceaf34e-81b3-457f-8f03-d807f795392b","Type":"ContainerStarted","Data":"3e94aa73e668a66512f415575c5610b396df141e11504f7ab820e10032d31ff4"} Oct 11 07:43:03 crc kubenswrapper[5016]: I1011 07:43:03.419179 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-mn4hd" Oct 11 07:43:03 crc kubenswrapper[5016]: I1011 07:43:03.421774 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-mn4hd" Oct 11 07:43:07 crc kubenswrapper[5016]: I1011 07:43:07.122641 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 07:43:07 crc kubenswrapper[5016]: I1011 07:43:07.123077 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 07:43:09 crc kubenswrapper[5016]: I1011 07:43:09.614445 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:43:12 crc kubenswrapper[5016]: I1011 07:43:12.193589 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-vmvvh" Oct 11 07:43:12 crc kubenswrapper[5016]: I1011 07:43:12.199918 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-vmvvh" Oct 11 07:43:21 crc kubenswrapper[5016]: I1011 07:43:21.964081 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-z44tx" Oct 11 07:43:23 crc kubenswrapper[5016]: I1011 07:43:23.417696 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 11 07:43:25 crc kubenswrapper[5016]: E1011 07:43:25.386000 5016 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Oct 11 07:43:25 crc kubenswrapper[5016]: E1011 07:43:25.386195 5016 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hmdwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6fmcj_openshift-marketplace(add7f50e-e0bb-45cb-b76e-c3eec203832b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 11 07:43:25 crc kubenswrapper[5016]: E1011 07:43:25.387559 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-6fmcj" podUID="add7f50e-e0bb-45cb-b76e-c3eec203832b" Oct 11 07:43:26 crc kubenswrapper[5016]: E1011 07:43:26.121942 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6fmcj" podUID="add7f50e-e0bb-45cb-b76e-c3eec203832b" Oct 11 07:43:26 crc kubenswrapper[5016]: E1011 07:43:26.562409 5016 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Oct 11 07:43:26 crc kubenswrapper[5016]: E1011 07:43:26.562577 5016 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n7ns9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-hwwjd_openshift-marketplace(06ae2c13-8dcb-4b69-af1b-668fbb548730): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 11 07:43:26 crc kubenswrapper[5016]: E1011 07:43:26.563825 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-hwwjd" podUID="06ae2c13-8dcb-4b69-af1b-668fbb548730" Oct 11 07:43:29 crc kubenswrapper[5016]: E1011 07:43:29.444628 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-hwwjd" podUID="06ae2c13-8dcb-4b69-af1b-668fbb548730" Oct 11 07:43:29 crc kubenswrapper[5016]: E1011 07:43:29.517568 5016 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Oct 11 07:43:29 crc kubenswrapper[5016]: E1011 07:43:29.517714 5016 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7g94n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-2swk6_openshift-marketplace(d3c23e60-9dde-4c84-859f-60fb9fa03683): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 11 07:43:29 crc kubenswrapper[5016]: E1011 07:43:29.519001 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-2swk6" podUID="d3c23e60-9dde-4c84-859f-60fb9fa03683" Oct 11 07:43:29 crc kubenswrapper[5016]: E1011 07:43:29.583798 5016 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Oct 11 07:43:29 crc kubenswrapper[5016]: E1011 07:43:29.584522 5016 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-psfxg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-z7v6x_openshift-marketplace(2e4fa77c-0420-4669-b5df-2601e4ca6404): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 11 07:43:29 crc kubenswrapper[5016]: E1011 07:43:29.585724 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-z7v6x" podUID="2e4fa77c-0420-4669-b5df-2601e4ca6404" Oct 11 07:43:29 crc kubenswrapper[5016]: E1011 07:43:29.588191 5016 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Oct 11 07:43:29 crc kubenswrapper[5016]: E1011 07:43:29.588317 5016 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8vfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-4ddw2_openshift-marketplace(d4daa5a9-22b7-4859-a375-cb4dec19a7af): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 11 07:43:29 crc kubenswrapper[5016]: E1011 07:43:29.589473 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-4ddw2" podUID="d4daa5a9-22b7-4859-a375-cb4dec19a7af" Oct 11 07:43:30 crc kubenswrapper[5016]: I1011 07:43:30.111285 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-459lg" event={"ID":"9ceaf34e-81b3-457f-8f03-d807f795392b","Type":"ContainerStarted","Data":"bb12df4fd59a89e1e14203a47e1416faf97f78be8eea59c6621d99acd6df6804"} Oct 11 07:43:30 crc kubenswrapper[5016]: I1011 07:43:30.113429 5016 generic.go:334] "Generic (PLEG): container finished" podID="5e94d943-d0bf-4ffc-9109-3d821982dbc6" containerID="4eb02f61a19a18d3c664ba3fe1c79081f9a45582d9519632521bbf1a762e3f9e" exitCode=0 Oct 11 07:43:30 crc kubenswrapper[5016]: I1011 07:43:30.113531 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fx6p5" event={"ID":"5e94d943-d0bf-4ffc-9109-3d821982dbc6","Type":"ContainerDied","Data":"4eb02f61a19a18d3c664ba3fe1c79081f9a45582d9519632521bbf1a762e3f9e"} Oct 11 07:43:30 crc kubenswrapper[5016]: I1011 07:43:30.117456 5016 generic.go:334] "Generic (PLEG): container finished" podID="29ad589f-847e-44b2-9c6c-720c6ca1312d" containerID="c94d318a872d193e68e4f3a5ff0084cb52add58b8201b7c4fff5a2bf4af76653" exitCode=0 Oct 11 07:43:30 crc kubenswrapper[5016]: I1011 07:43:30.117527 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn9bl" event={"ID":"29ad589f-847e-44b2-9c6c-720c6ca1312d","Type":"ContainerDied","Data":"c94d318a872d193e68e4f3a5ff0084cb52add58b8201b7c4fff5a2bf4af76653"} Oct 11 07:43:30 crc kubenswrapper[5016]: I1011 07:43:30.120092 5016 generic.go:334] "Generic (PLEG): container 
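finished" podID="ef441d82-59b8-4316-8950-b2aea1636de4" containerID="de1cf2e29799e5e975e46694c756d48115edf370e5eb6b94be2345efff328aea" exitCode=0

Note: the marketplace catalog pods are stuck in the image-pull path. The extract-content init container's pull of its index image was canceled mid-copy ("context canceled"), the pod worker records ErrImagePull, and subsequent sync attempts are throttled as ImagePullBackOff ("Back-off pulling image ..."), whose delay grows with repeated failures up to a cap before the pull is retried. The interleaved generic.go:334 "Generic (PLEG): container finished ... exitCode=0" entries come from the Pod Lifecycle Event Generator relist, which turns observed container-state changes into the ContainerStarted/ContainerDied events the sync loop consumes. A rough counter for pull failures per pod from a dump like this (illustrative only; assumes one journal record per line):

    import re
    from collections import Counter

    POD = re.compile(r' pod="([^"]+)"')

    def pull_failures(lines):
        """Count ErrImagePull / ImagePullBackOff sync errors per pod."""
        counts = Counter()
        for line in lines:
            if "Error syncing pod" not in line:
                continue
            kind = next((k for k in ("ErrImagePull", "ImagePullBackOff") if k in line), None)
            m = POD.search(line)
            if kind and m:
                counts[m.group(1), kind] += 1
        return counts
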
finished" podID="ef441d82-59b8-4316-8950-b2aea1636de4" containerID="de1cf2e29799e5e975e46694c756d48115edf370e5eb6b94be2345efff328aea" exitCode=0 Oct 11 07:43:30 crc kubenswrapper[5016]: I1011 07:43:30.121115 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gv6cx" event={"ID":"ef441d82-59b8-4316-8950-b2aea1636de4","Type":"ContainerDied","Data":"de1cf2e29799e5e975e46694c756d48115edf370e5eb6b94be2345efff328aea"} Oct 11 07:43:30 crc kubenswrapper[5016]: E1011 07:43:30.121728 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-z7v6x" podUID="2e4fa77c-0420-4669-b5df-2601e4ca6404" Oct 11 07:43:30 crc kubenswrapper[5016]: E1011 07:43:30.122038 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2swk6" podUID="d3c23e60-9dde-4c84-859f-60fb9fa03683" Oct 11 07:43:30 crc kubenswrapper[5016]: E1011 07:43:30.122387 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-4ddw2" podUID="d4daa5a9-22b7-4859-a375-cb4dec19a7af" Oct 11 07:43:30 crc kubenswrapper[5016]: I1011 07:43:30.126817 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-459lg" podStartSLOduration=176.12680214 podStartE2EDuration="2m56.12680214s" podCreationTimestamp="2025-10-11 07:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:43:30.125266214 +0000 UTC m=+198.025722190" watchObservedRunningTime="2025-10-11 07:43:30.12680214 +0000 UTC m=+198.027258086" Oct 11 07:43:31 crc kubenswrapper[5016]: I1011 07:43:31.128039 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fx6p5" event={"ID":"5e94d943-d0bf-4ffc-9109-3d821982dbc6","Type":"ContainerStarted","Data":"bd0aa14020a5688337184e9d3fecca5e877c7359c3591433e8f252549200cb41"} Oct 11 07:43:31 crc kubenswrapper[5016]: I1011 07:43:31.131840 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn9bl" event={"ID":"29ad589f-847e-44b2-9c6c-720c6ca1312d","Type":"ContainerStarted","Data":"a368f60d9de19154b1bcc121704979fb94aefa1a9686e8b560603698f014992d"} Oct 11 07:43:31 crc kubenswrapper[5016]: I1011 07:43:31.146834 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gv6cx" event={"ID":"ef441d82-59b8-4316-8950-b2aea1636de4","Type":"ContainerStarted","Data":"9a19e96d91d68176b2db800fcc8d66b9f5ad55fb7c707c775bd7949a11f4d97d"} Oct 11 07:43:31 crc kubenswrapper[5016]: I1011 07:43:31.150332 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fx6p5" podStartSLOduration=2.187556423 podStartE2EDuration="44.150304496s" podCreationTimestamp="2025-10-11 07:42:47 +0000 UTC" firstStartedPulling="2025-10-11 07:42:48.616413151 +0000 
UTC m=+156.516869097" lastFinishedPulling="2025-10-11 07:43:30.579161224 +0000 UTC m=+198.479617170" observedRunningTime="2025-10-11 07:43:31.148637487 +0000 UTC m=+199.049093433" watchObservedRunningTime="2025-10-11 07:43:31.150304496 +0000 UTC m=+199.050760442" Oct 11 07:43:31 crc kubenswrapper[5016]: I1011 07:43:31.166552 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gv6cx" podStartSLOduration=2.145375551 podStartE2EDuration="44.166534036s" podCreationTimestamp="2025-10-11 07:42:47 +0000 UTC" firstStartedPulling="2025-10-11 07:42:48.639445101 +0000 UTC m=+156.539901047" lastFinishedPulling="2025-10-11 07:43:30.660603596 +0000 UTC m=+198.561059532" observedRunningTime="2025-10-11 07:43:31.163495516 +0000 UTC m=+199.063951462" watchObservedRunningTime="2025-10-11 07:43:31.166534036 +0000 UTC m=+199.066989982" Oct 11 07:43:31 crc kubenswrapper[5016]: I1011 07:43:31.179054 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jn9bl" podStartSLOduration=3.155293799 podStartE2EDuration="45.179036226s" podCreationTimestamp="2025-10-11 07:42:46 +0000 UTC" firstStartedPulling="2025-10-11 07:42:48.628781669 +0000 UTC m=+156.529237615" lastFinishedPulling="2025-10-11 07:43:30.652524096 +0000 UTC m=+198.552980042" observedRunningTime="2025-10-11 07:43:31.17845352 +0000 UTC m=+199.078909466" watchObservedRunningTime="2025-10-11 07:43:31.179036226 +0000 UTC m=+199.079492172" Oct 11 07:43:37 crc kubenswrapper[5016]: I1011 07:43:37.122745 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 07:43:37 crc kubenswrapper[5016]: I1011 07:43:37.123614 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 07:43:37 crc kubenswrapper[5016]: I1011 07:43:37.123739 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 07:43:37 crc kubenswrapper[5016]: I1011 07:43:37.125058 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a"} pod="openshift-machine-config-operator/machine-config-daemon-49bvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 11 07:43:37 crc kubenswrapper[5016]: I1011 07:43:37.125302 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" containerID="cri-o://a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a" gracePeriod=600 Oct 11 07:43:37 crc kubenswrapper[5016]: I1011 07:43:37.143971 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jn9bl" Oct 11 07:43:37 crc kubenswrapper[5016]: I1011 
Oct 11 07:43:37 crc kubenswrapper[5016]: I1011 07:43:37.437826 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jn9bl" Oct 11 07:43:37 crc kubenswrapper[5016]: I1011 07:43:37.489011 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jn9bl" Oct 11 07:43:37 crc kubenswrapper[5016]: I1011 07:43:37.560239 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gv6cx" Oct 11 07:43:37 crc kubenswrapper[5016]: I1011 07:43:37.560309 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gv6cx" Oct 11 07:43:37 crc kubenswrapper[5016]: I1011 07:43:37.606382 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gv6cx" Oct 11 07:43:37 crc kubenswrapper[5016]: I1011 07:43:37.757249 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fx6p5" Oct 11 07:43:37 crc kubenswrapper[5016]: I1011 07:43:37.757301 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fx6p5" Oct 11 07:43:37 crc kubenswrapper[5016]: I1011 07:43:37.794547 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fx6p5" Oct 11 07:43:38 crc kubenswrapper[5016]: I1011 07:43:38.177423 5016 generic.go:334] "Generic (PLEG): container finished" podID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerID="a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a" exitCode=0 Oct 11 07:43:38 crc kubenswrapper[5016]: I1011 07:43:38.178464 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerDied","Data":"a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a"} Oct 11 07:43:38 crc kubenswrapper[5016]: I1011 07:43:38.178515 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"461462f8d01f467988ce15691a5cd28af5322080f1c3158032b8c6e1ea64bfd3"} Oct 11 07:43:38 crc kubenswrapper[5016]: I1011 07:43:38.237042 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fx6p5" Oct 11 07:43:38 crc kubenswrapper[5016]: I1011 07:43:38.237838 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gv6cx" Oct 11 07:43:39 crc kubenswrapper[5016]: I1011 07:43:39.469694 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gv6cx"] Oct 11 07:43:40 crc kubenswrapper[5016]: I1011 07:43:40.071026 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fx6p5"] Oct 11 07:43:40 crc kubenswrapper[5016]: I1011 07:43:40.191525 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gv6cx"
podUID="ef441d82-59b8-4316-8950-b2aea1636de4" containerName="registry-server" containerID="cri-o://9a19e96d91d68176b2db800fcc8d66b9f5ad55fb7c707c775bd7949a11f4d97d" gracePeriod=2 Oct 11 07:43:41 crc kubenswrapper[5016]: I1011 07:43:41.196563 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fx6p5" podUID="5e94d943-d0bf-4ffc-9109-3d821982dbc6" containerName="registry-server" containerID="cri-o://bd0aa14020a5688337184e9d3fecca5e877c7359c3591433e8f252549200cb41" gracePeriod=2 Oct 11 07:43:43 crc kubenswrapper[5016]: I1011 07:43:43.221157 5016 generic.go:334] "Generic (PLEG): container finished" podID="ef441d82-59b8-4316-8950-b2aea1636de4" containerID="9a19e96d91d68176b2db800fcc8d66b9f5ad55fb7c707c775bd7949a11f4d97d" exitCode=0 Oct 11 07:43:43 crc kubenswrapper[5016]: I1011 07:43:43.221240 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gv6cx" event={"ID":"ef441d82-59b8-4316-8950-b2aea1636de4","Type":"ContainerDied","Data":"9a19e96d91d68176b2db800fcc8d66b9f5ad55fb7c707c775bd7949a11f4d97d"} Oct 11 07:43:43 crc kubenswrapper[5016]: I1011 07:43:43.224166 5016 generic.go:334] "Generic (PLEG): container finished" podID="5e94d943-d0bf-4ffc-9109-3d821982dbc6" containerID="bd0aa14020a5688337184e9d3fecca5e877c7359c3591433e8f252549200cb41" exitCode=0 Oct 11 07:43:43 crc kubenswrapper[5016]: I1011 07:43:43.224204 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fx6p5" event={"ID":"5e94d943-d0bf-4ffc-9109-3d821982dbc6","Type":"ContainerDied","Data":"bd0aa14020a5688337184e9d3fecca5e877c7359c3591433e8f252549200cb41"} Oct 11 07:43:43 crc kubenswrapper[5016]: I1011 07:43:43.320498 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fx6p5" Oct 11 07:43:43 crc kubenswrapper[5016]: I1011 07:43:43.337377 5016 util.go:48] "No ready sandbox for pod can be found. 
Oct 11 07:43:43 crc kubenswrapper[5016]: I1011 07:43:43.337377 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gv6cx"
Oct 11 07:43:43 crc kubenswrapper[5016]: I1011 07:43:43.435569 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef441d82-59b8-4316-8950-b2aea1636de4-utilities\") pod \"ef441d82-59b8-4316-8950-b2aea1636de4\" (UID: \"ef441d82-59b8-4316-8950-b2aea1636de4\") "
Oct 11 07:43:43 crc kubenswrapper[5016]: I1011 07:43:43.435863 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef441d82-59b8-4316-8950-b2aea1636de4-catalog-content\") pod \"ef441d82-59b8-4316-8950-b2aea1636de4\" (UID: \"ef441d82-59b8-4316-8950-b2aea1636de4\") "
Oct 11 07:43:43 crc kubenswrapper[5016]: I1011 07:43:43.436132 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnr8x\" (UniqueName: \"kubernetes.io/projected/5e94d943-d0bf-4ffc-9109-3d821982dbc6-kube-api-access-rnr8x\") pod \"5e94d943-d0bf-4ffc-9109-3d821982dbc6\" (UID: \"5e94d943-d0bf-4ffc-9109-3d821982dbc6\") "
Oct 11 07:43:43 crc kubenswrapper[5016]: I1011 07:43:43.436550 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e94d943-d0bf-4ffc-9109-3d821982dbc6-utilities\") pod \"5e94d943-d0bf-4ffc-9109-3d821982dbc6\" (UID: \"5e94d943-d0bf-4ffc-9109-3d821982dbc6\") "
Oct 11 07:43:43 crc kubenswrapper[5016]: I1011 07:43:43.436688 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e94d943-d0bf-4ffc-9109-3d821982dbc6-catalog-content\") pod \"5e94d943-d0bf-4ffc-9109-3d821982dbc6\" (UID: \"5e94d943-d0bf-4ffc-9109-3d821982dbc6\") "
Oct 11 07:43:43 crc kubenswrapper[5016]: I1011 07:43:43.436771 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjr5b\" (UniqueName: \"kubernetes.io/projected/ef441d82-59b8-4316-8950-b2aea1636de4-kube-api-access-fjr5b\") pod \"ef441d82-59b8-4316-8950-b2aea1636de4\" (UID: \"ef441d82-59b8-4316-8950-b2aea1636de4\") "
Oct 11 07:43:43 crc kubenswrapper[5016]: I1011 07:43:43.437430 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef441d82-59b8-4316-8950-b2aea1636de4-utilities" (OuterVolumeSpecName: "utilities") pod "ef441d82-59b8-4316-8950-b2aea1636de4" (UID: "ef441d82-59b8-4316-8950-b2aea1636de4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 07:43:43 crc kubenswrapper[5016]: I1011 07:43:43.437682 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e94d943-d0bf-4ffc-9109-3d821982dbc6-utilities" (OuterVolumeSpecName: "utilities") pod "5e94d943-d0bf-4ffc-9109-3d821982dbc6" (UID: "5e94d943-d0bf-4ffc-9109-3d821982dbc6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 07:43:43 crc kubenswrapper[5016]: I1011 07:43:43.441643 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e94d943-d0bf-4ffc-9109-3d821982dbc6-kube-api-access-rnr8x" (OuterVolumeSpecName: "kube-api-access-rnr8x") pod "5e94d943-d0bf-4ffc-9109-3d821982dbc6" (UID: "5e94d943-d0bf-4ffc-9109-3d821982dbc6"). InnerVolumeSpecName "kube-api-access-rnr8x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 07:43:43 crc kubenswrapper[5016]: I1011 07:43:43.453045 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef441d82-59b8-4316-8950-b2aea1636de4-kube-api-access-fjr5b" (OuterVolumeSpecName: "kube-api-access-fjr5b") pod "ef441d82-59b8-4316-8950-b2aea1636de4" (UID: "ef441d82-59b8-4316-8950-b2aea1636de4"). InnerVolumeSpecName "kube-api-access-fjr5b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 07:43:43 crc kubenswrapper[5016]: I1011 07:43:43.537874 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef441d82-59b8-4316-8950-b2aea1636de4-utilities\") on node \"crc\" DevicePath \"\""
Oct 11 07:43:43 crc kubenswrapper[5016]: I1011 07:43:43.538187 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnr8x\" (UniqueName: \"kubernetes.io/projected/5e94d943-d0bf-4ffc-9109-3d821982dbc6-kube-api-access-rnr8x\") on node \"crc\" DevicePath \"\""
Oct 11 07:43:43 crc kubenswrapper[5016]: I1011 07:43:43.538198 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e94d943-d0bf-4ffc-9109-3d821982dbc6-utilities\") on node \"crc\" DevicePath \"\""
Oct 11 07:43:43 crc kubenswrapper[5016]: I1011 07:43:43.538208 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjr5b\" (UniqueName: \"kubernetes.io/projected/ef441d82-59b8-4316-8950-b2aea1636de4-kube-api-access-fjr5b\") on node \"crc\" DevicePath \"\""
Oct 11 07:43:43 crc kubenswrapper[5016]: I1011 07:43:43.822563 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef441d82-59b8-4316-8950-b2aea1636de4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef441d82-59b8-4316-8950-b2aea1636de4" (UID: "ef441d82-59b8-4316-8950-b2aea1636de4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 07:43:43 crc kubenswrapper[5016]: I1011 07:43:43.842897 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef441d82-59b8-4316-8950-b2aea1636de4-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 11 07:43:43 crc kubenswrapper[5016]: I1011 07:43:43.983603 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e94d943-d0bf-4ffc-9109-3d821982dbc6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e94d943-d0bf-4ffc-9109-3d821982dbc6" (UID: "5e94d943-d0bf-4ffc-9109-3d821982dbc6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 07:43:44 crc kubenswrapper[5016]: I1011 07:43:44.045802 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e94d943-d0bf-4ffc-9109-3d821982dbc6-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 11 07:43:44 crc kubenswrapper[5016]: I1011 07:43:44.232254 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fx6p5" event={"ID":"5e94d943-d0bf-4ffc-9109-3d821982dbc6","Type":"ContainerDied","Data":"ed65416b9102c207879a5e37dd07d4f76ffad2ffeac1ee6adde1f9c64ae8fa0b"}
Oct 11 07:43:44 crc kubenswrapper[5016]: I1011 07:43:44.232307 5016 scope.go:117] "RemoveContainer" containerID="bd0aa14020a5688337184e9d3fecca5e877c7359c3591433e8f252549200cb41"
Oct 11 07:43:44 crc kubenswrapper[5016]: I1011 07:43:44.233395 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fx6p5"
Oct 11 07:43:44 crc kubenswrapper[5016]: I1011 07:43:44.235032 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gv6cx" event={"ID":"ef441d82-59b8-4316-8950-b2aea1636de4","Type":"ContainerDied","Data":"be2ae14c67e7692abf4c920ca3b82ee4d389de1d12826e5fb638dc9359c9189a"}
Oct 11 07:43:44 crc kubenswrapper[5016]: I1011 07:43:44.235136 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gv6cx"
Oct 11 07:43:44 crc kubenswrapper[5016]: I1011 07:43:44.247188 5016 scope.go:117] "RemoveContainer" containerID="4eb02f61a19a18d3c664ba3fe1c79081f9a45582d9519632521bbf1a762e3f9e"
Oct 11 07:43:44 crc kubenswrapper[5016]: I1011 07:43:44.273280 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fx6p5"]
Oct 11 07:43:44 crc kubenswrapper[5016]: I1011 07:43:44.280562 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fx6p5"]
Oct 11 07:43:44 crc kubenswrapper[5016]: I1011 07:43:44.287007 5016 scope.go:117] "RemoveContainer" containerID="985a8ca07dbf560d5e8dc027352a1521d59e58a3d34c945fcbf1eb4f9bfd99b2"
Oct 11 07:43:44 crc kubenswrapper[5016]: I1011 07:43:44.291432 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gv6cx"]
Oct 11 07:43:44 crc kubenswrapper[5016]: I1011 07:43:44.295951 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gv6cx"]
Oct 11 07:43:44 crc kubenswrapper[5016]: I1011 07:43:44.310218 5016 scope.go:117] "RemoveContainer" containerID="9a19e96d91d68176b2db800fcc8d66b9f5ad55fb7c707c775bd7949a11f4d97d"
Oct 11 07:43:44 crc kubenswrapper[5016]: I1011 07:43:44.330230 5016 scope.go:117] "RemoveContainer" containerID="de1cf2e29799e5e975e46694c756d48115edf370e5eb6b94be2345efff328aea"
Oct 11 07:43:44 crc kubenswrapper[5016]: I1011 07:43:44.348444 5016 scope.go:117] "RemoveContainer" containerID="c92a06253242868d41b85662f57f883b3e71c11f0d9b76da47789fbf2b134bb4"
Oct 11 07:43:45 crc kubenswrapper[5016]: I1011 07:43:45.146236 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e94d943-d0bf-4ffc-9109-3d821982dbc6" path="/var/lib/kubelet/pods/5e94d943-d0bf-4ffc-9109-3d821982dbc6/volumes"
podUID="ef441d82-59b8-4316-8950-b2aea1636de4" path="/var/lib/kubelet/pods/ef441d82-59b8-4316-8950-b2aea1636de4/volumes" Oct 11 07:43:45 crc kubenswrapper[5016]: I1011 07:43:45.243049 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ddw2" event={"ID":"d4daa5a9-22b7-4859-a375-cb4dec19a7af","Type":"ContainerStarted","Data":"85a1e81ace2db3d9db0add4f433050c109b203d139e2592d5a4202f7d2d66724"} Oct 11 07:43:45 crc kubenswrapper[5016]: I1011 07:43:45.246816 5016 generic.go:334] "Generic (PLEG): container finished" podID="06ae2c13-8dcb-4b69-af1b-668fbb548730" containerID="bff8bb7ee319fec1b5020b89e55a4565922381ded2bf5a6354d49c74b4f63606" exitCode=0 Oct 11 07:43:45 crc kubenswrapper[5016]: I1011 07:43:45.246890 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwwjd" event={"ID":"06ae2c13-8dcb-4b69-af1b-668fbb548730","Type":"ContainerDied","Data":"bff8bb7ee319fec1b5020b89e55a4565922381ded2bf5a6354d49c74b4f63606"} Oct 11 07:43:45 crc kubenswrapper[5016]: I1011 07:43:45.248581 5016 generic.go:334] "Generic (PLEG): container finished" podID="add7f50e-e0bb-45cb-b76e-c3eec203832b" containerID="1af097086c359a5810eac277987d68bdf599bed04b25fd05c3ba5bb6249d1e65" exitCode=0 Oct 11 07:43:45 crc kubenswrapper[5016]: I1011 07:43:45.248638 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fmcj" event={"ID":"add7f50e-e0bb-45cb-b76e-c3eec203832b","Type":"ContainerDied","Data":"1af097086c359a5810eac277987d68bdf599bed04b25fd05c3ba5bb6249d1e65"} Oct 11 07:43:45 crc kubenswrapper[5016]: I1011 07:43:45.252010 5016 generic.go:334] "Generic (PLEG): container finished" podID="d3c23e60-9dde-4c84-859f-60fb9fa03683" containerID="1d8788bfc16d640a4438d74660fa6ad878d9d2764800aa3deb11afede711ee50" exitCode=0 Oct 11 07:43:45 crc kubenswrapper[5016]: I1011 07:43:45.252138 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2swk6" event={"ID":"d3c23e60-9dde-4c84-859f-60fb9fa03683","Type":"ContainerDied","Data":"1d8788bfc16d640a4438d74660fa6ad878d9d2764800aa3deb11afede711ee50"} Oct 11 07:43:46 crc kubenswrapper[5016]: I1011 07:43:46.258944 5016 generic.go:334] "Generic (PLEG): container finished" podID="d4daa5a9-22b7-4859-a375-cb4dec19a7af" containerID="85a1e81ace2db3d9db0add4f433050c109b203d139e2592d5a4202f7d2d66724" exitCode=0 Oct 11 07:43:46 crc kubenswrapper[5016]: I1011 07:43:46.259027 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ddw2" event={"ID":"d4daa5a9-22b7-4859-a375-cb4dec19a7af","Type":"ContainerDied","Data":"85a1e81ace2db3d9db0add4f433050c109b203d139e2592d5a4202f7d2d66724"} Oct 11 07:43:46 crc kubenswrapper[5016]: I1011 07:43:46.264618 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwwjd" event={"ID":"06ae2c13-8dcb-4b69-af1b-668fbb548730","Type":"ContainerStarted","Data":"092b9810914645dd43b1bd828c827e005adee84d04169692a076129b9ce3da59"} Oct 11 07:43:46 crc kubenswrapper[5016]: I1011 07:43:46.267235 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fmcj" event={"ID":"add7f50e-e0bb-45cb-b76e-c3eec203832b","Type":"ContainerStarted","Data":"019e1e74d232bfd50b75c0b43ed7f4ad9e7fe299d99e30896addf8d0b9b0854e"} Oct 11 07:43:46 crc kubenswrapper[5016]: I1011 07:43:46.268908 5016 generic.go:334] "Generic (PLEG): container finished" podID="2e4fa77c-0420-4669-b5df-2601e4ca6404" 
containerID="f42a74cfcf30d7c46b8f0f00a65458ccea90b34a1ccd5dfab991bd10bb8df565" exitCode=0 Oct 11 07:43:46 crc kubenswrapper[5016]: I1011 07:43:46.269059 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z7v6x" event={"ID":"2e4fa77c-0420-4669-b5df-2601e4ca6404","Type":"ContainerDied","Data":"f42a74cfcf30d7c46b8f0f00a65458ccea90b34a1ccd5dfab991bd10bb8df565"} Oct 11 07:43:46 crc kubenswrapper[5016]: I1011 07:43:46.272393 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2swk6" event={"ID":"d3c23e60-9dde-4c84-859f-60fb9fa03683","Type":"ContainerStarted","Data":"7674b2c00eacbd52953aa84fc6e421052a38efe416f8c7d5539546391b77e41c"} Oct 11 07:43:46 crc kubenswrapper[5016]: I1011 07:43:46.317758 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hwwjd" podStartSLOduration=2.436620437 podStartE2EDuration="56.317744163s" podCreationTimestamp="2025-10-11 07:42:50 +0000 UTC" firstStartedPulling="2025-10-11 07:42:51.741508594 +0000 UTC m=+159.641964540" lastFinishedPulling="2025-10-11 07:43:45.62263232 +0000 UTC m=+213.523088266" observedRunningTime="2025-10-11 07:43:46.314322962 +0000 UTC m=+214.214778918" watchObservedRunningTime="2025-10-11 07:43:46.317744163 +0000 UTC m=+214.218200109" Oct 11 07:43:46 crc kubenswrapper[5016]: I1011 07:43:46.335951 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6fmcj" podStartSLOduration=3.235637097 podStartE2EDuration="1m0.335934301s" podCreationTimestamp="2025-10-11 07:42:46 +0000 UTC" firstStartedPulling="2025-10-11 07:42:48.636868508 +0000 UTC m=+156.537324454" lastFinishedPulling="2025-10-11 07:43:45.737165712 +0000 UTC m=+213.637621658" observedRunningTime="2025-10-11 07:43:46.330534852 +0000 UTC m=+214.230990798" watchObservedRunningTime="2025-10-11 07:43:46.335934301 +0000 UTC m=+214.236390247" Oct 11 07:43:46 crc kubenswrapper[5016]: I1011 07:43:46.352832 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2swk6" podStartSLOduration=3.457615264 podStartE2EDuration="58.352814041s" podCreationTimestamp="2025-10-11 07:42:48 +0000 UTC" firstStartedPulling="2025-10-11 07:42:50.772556109 +0000 UTC m=+158.673012055" lastFinishedPulling="2025-10-11 07:43:45.667754886 +0000 UTC m=+213.568210832" observedRunningTime="2025-10-11 07:43:46.350490603 +0000 UTC m=+214.250946549" watchObservedRunningTime="2025-10-11 07:43:46.352814041 +0000 UTC m=+214.253269987" Oct 11 07:43:47 crc kubenswrapper[5016]: I1011 07:43:47.264447 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6fmcj" Oct 11 07:43:47 crc kubenswrapper[5016]: I1011 07:43:47.264964 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6fmcj" Oct 11 07:43:47 crc kubenswrapper[5016]: I1011 07:43:47.301802 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z7v6x" event={"ID":"2e4fa77c-0420-4669-b5df-2601e4ca6404","Type":"ContainerStarted","Data":"05b3e7e8e6d263c6232ebf297162b89ce525b7c6318456e2377d791b08d90a11"} Oct 11 07:43:47 crc kubenswrapper[5016]: I1011 07:43:47.310470 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ddw2" 
event={"ID":"d4daa5a9-22b7-4859-a375-cb4dec19a7af","Type":"ContainerStarted","Data":"bb8f362ddab8702c78697e20a85231133687cf8cd858a2ca1076291e32836956"} Oct 11 07:43:47 crc kubenswrapper[5016]: I1011 07:43:47.329412 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z7v6x" podStartSLOduration=2.381961414 podStartE2EDuration="58.329394989s" podCreationTimestamp="2025-10-11 07:42:49 +0000 UTC" firstStartedPulling="2025-10-11 07:42:50.773232759 +0000 UTC m=+158.673688705" lastFinishedPulling="2025-10-11 07:43:46.720666334 +0000 UTC m=+214.621122280" observedRunningTime="2025-10-11 07:43:47.329021778 +0000 UTC m=+215.229477724" watchObservedRunningTime="2025-10-11 07:43:47.329394989 +0000 UTC m=+215.229850935" Oct 11 07:43:47 crc kubenswrapper[5016]: I1011 07:43:47.345344 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4ddw2" podStartSLOduration=3.326863638 podStartE2EDuration="58.3453285s" podCreationTimestamp="2025-10-11 07:42:49 +0000 UTC" firstStartedPulling="2025-10-11 07:42:51.739987671 +0000 UTC m=+159.640443617" lastFinishedPulling="2025-10-11 07:43:46.758452533 +0000 UTC m=+214.658908479" observedRunningTime="2025-10-11 07:43:47.342465415 +0000 UTC m=+215.242921361" watchObservedRunningTime="2025-10-11 07:43:47.3453285 +0000 UTC m=+215.245784436" Oct 11 07:43:48 crc kubenswrapper[5016]: I1011 07:43:48.313526 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-6fmcj" podUID="add7f50e-e0bb-45cb-b76e-c3eec203832b" containerName="registry-server" probeResult="failure" output=< Oct 11 07:43:48 crc kubenswrapper[5016]: timeout: failed to connect service ":50051" within 1s Oct 11 07:43:48 crc kubenswrapper[5016]: > Oct 11 07:43:49 crc kubenswrapper[5016]: I1011 07:43:49.256472 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2swk6" Oct 11 07:43:49 crc kubenswrapper[5016]: I1011 07:43:49.256786 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2swk6" Oct 11 07:43:49 crc kubenswrapper[5016]: I1011 07:43:49.298251 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2swk6" Oct 11 07:43:49 crc kubenswrapper[5016]: I1011 07:43:49.691777 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z7v6x" Oct 11 07:43:49 crc kubenswrapper[5016]: I1011 07:43:49.692692 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z7v6x" Oct 11 07:43:49 crc kubenswrapper[5016]: I1011 07:43:49.733260 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z7v6x" Oct 11 07:43:50 crc kubenswrapper[5016]: I1011 07:43:50.272617 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4ddw2" Oct 11 07:43:50 crc kubenswrapper[5016]: I1011 07:43:50.272955 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4ddw2" Oct 11 07:43:50 crc kubenswrapper[5016]: I1011 07:43:50.736349 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hwwjd" Oct 11 07:43:50 crc 
Oct 11 07:43:50 crc kubenswrapper[5016]: I1011 07:43:50.736393 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hwwjd"
Oct 11 07:43:50 crc kubenswrapper[5016]: I1011 07:43:50.779301 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hwwjd"
Oct 11 07:43:51 crc kubenswrapper[5016]: I1011 07:43:51.317711 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4ddw2" podUID="d4daa5a9-22b7-4859-a375-cb4dec19a7af" containerName="registry-server" probeResult="failure" output=<
Oct 11 07:43:51 crc kubenswrapper[5016]: timeout: failed to connect service ":50051" within 1s
Oct 11 07:43:51 crc kubenswrapper[5016]: >
Oct 11 07:43:51 crc kubenswrapper[5016]: I1011 07:43:51.383424 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hwwjd"
Oct 11 07:43:52 crc kubenswrapper[5016]: I1011 07:43:52.099187 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6mhg9"]
Oct 11 07:43:52 crc kubenswrapper[5016]: I1011 07:43:52.267410 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hwwjd"]
Oct 11 07:43:53 crc kubenswrapper[5016]: I1011 07:43:53.354107 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hwwjd" podUID="06ae2c13-8dcb-4b69-af1b-668fbb548730" containerName="registry-server" containerID="cri-o://092b9810914645dd43b1bd828c827e005adee84d04169692a076129b9ce3da59" gracePeriod=2
Oct 11 07:43:53 crc kubenswrapper[5016]: I1011 07:43:53.788722 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hwwjd"
Oct 11 07:43:53 crc kubenswrapper[5016]: I1011 07:43:53.868157 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7ns9\" (UniqueName: \"kubernetes.io/projected/06ae2c13-8dcb-4b69-af1b-668fbb548730-kube-api-access-n7ns9\") pod \"06ae2c13-8dcb-4b69-af1b-668fbb548730\" (UID: \"06ae2c13-8dcb-4b69-af1b-668fbb548730\") "
Oct 11 07:43:53 crc kubenswrapper[5016]: I1011 07:43:53.868209 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06ae2c13-8dcb-4b69-af1b-668fbb548730-utilities\") pod \"06ae2c13-8dcb-4b69-af1b-668fbb548730\" (UID: \"06ae2c13-8dcb-4b69-af1b-668fbb548730\") "
Oct 11 07:43:53 crc kubenswrapper[5016]: I1011 07:43:53.868266 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06ae2c13-8dcb-4b69-af1b-668fbb548730-catalog-content\") pod \"06ae2c13-8dcb-4b69-af1b-668fbb548730\" (UID: \"06ae2c13-8dcb-4b69-af1b-668fbb548730\") "
Oct 11 07:43:53 crc kubenswrapper[5016]: I1011 07:43:53.869059 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06ae2c13-8dcb-4b69-af1b-668fbb548730-utilities" (OuterVolumeSpecName: "utilities") pod "06ae2c13-8dcb-4b69-af1b-668fbb548730" (UID: "06ae2c13-8dcb-4b69-af1b-668fbb548730"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 07:43:53 crc kubenswrapper[5016]: I1011 07:43:53.874269 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06ae2c13-8dcb-4b69-af1b-668fbb548730-kube-api-access-n7ns9" (OuterVolumeSpecName: "kube-api-access-n7ns9") pod "06ae2c13-8dcb-4b69-af1b-668fbb548730" (UID: "06ae2c13-8dcb-4b69-af1b-668fbb548730"). InnerVolumeSpecName "kube-api-access-n7ns9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 07:43:53 crc kubenswrapper[5016]: I1011 07:43:53.968092 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06ae2c13-8dcb-4b69-af1b-668fbb548730-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06ae2c13-8dcb-4b69-af1b-668fbb548730" (UID: "06ae2c13-8dcb-4b69-af1b-668fbb548730"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 07:43:53 crc kubenswrapper[5016]: I1011 07:43:53.969424 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7ns9\" (UniqueName: \"kubernetes.io/projected/06ae2c13-8dcb-4b69-af1b-668fbb548730-kube-api-access-n7ns9\") on node \"crc\" DevicePath \"\""
Oct 11 07:43:53 crc kubenswrapper[5016]: I1011 07:43:53.969455 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06ae2c13-8dcb-4b69-af1b-668fbb548730-utilities\") on node \"crc\" DevicePath \"\""
Oct 11 07:43:53 crc kubenswrapper[5016]: I1011 07:43:53.969466 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06ae2c13-8dcb-4b69-af1b-668fbb548730-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 11 07:43:54 crc kubenswrapper[5016]: I1011 07:43:54.360975 5016 generic.go:334] "Generic (PLEG): container finished" podID="06ae2c13-8dcb-4b69-af1b-668fbb548730" containerID="092b9810914645dd43b1bd828c827e005adee84d04169692a076129b9ce3da59" exitCode=0
Oct 11 07:43:54 crc kubenswrapper[5016]: I1011 07:43:54.361487 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwwjd" event={"ID":"06ae2c13-8dcb-4b69-af1b-668fbb548730","Type":"ContainerDied","Data":"092b9810914645dd43b1bd828c827e005adee84d04169692a076129b9ce3da59"}
Oct 11 07:43:54 crc kubenswrapper[5016]: I1011 07:43:54.361587 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwwjd" event={"ID":"06ae2c13-8dcb-4b69-af1b-668fbb548730","Type":"ContainerDied","Data":"6c90b3d67e2f9a1e0e9bb522dbe55a19bd28f550b5a02db12bca953424121ac4"}
Oct 11 07:43:54 crc kubenswrapper[5016]: I1011 07:43:54.361689 5016 scope.go:117] "RemoveContainer" containerID="092b9810914645dd43b1bd828c827e005adee84d04169692a076129b9ce3da59"
Oct 11 07:43:54 crc kubenswrapper[5016]: I1011 07:43:54.361842 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hwwjd"
Oct 11 07:43:54 crc kubenswrapper[5016]: I1011 07:43:54.385158 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hwwjd"]
Oct 11 07:43:54 crc kubenswrapper[5016]: I1011 07:43:54.391289 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hwwjd"]
Oct 11 07:43:54 crc kubenswrapper[5016]: I1011 07:43:54.394472 5016 scope.go:117] "RemoveContainer" containerID="bff8bb7ee319fec1b5020b89e55a4565922381ded2bf5a6354d49c74b4f63606"
Oct 11 07:43:54 crc kubenswrapper[5016]: I1011 07:43:54.408516 5016 scope.go:117] "RemoveContainer" containerID="08a37896fd418108b2eaa11999dd4b3618cab7c264a063476428f492bc8b24b8"
Oct 11 07:43:54 crc kubenswrapper[5016]: I1011 07:43:54.422784 5016 scope.go:117] "RemoveContainer" containerID="092b9810914645dd43b1bd828c827e005adee84d04169692a076129b9ce3da59"
Oct 11 07:43:54 crc kubenswrapper[5016]: E1011 07:43:54.423196 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"092b9810914645dd43b1bd828c827e005adee84d04169692a076129b9ce3da59\": container with ID starting with 092b9810914645dd43b1bd828c827e005adee84d04169692a076129b9ce3da59 not found: ID does not exist" containerID="092b9810914645dd43b1bd828c827e005adee84d04169692a076129b9ce3da59"
Oct 11 07:43:54 crc kubenswrapper[5016]: I1011 07:43:54.423234 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"092b9810914645dd43b1bd828c827e005adee84d04169692a076129b9ce3da59"} err="failed to get container status \"092b9810914645dd43b1bd828c827e005adee84d04169692a076129b9ce3da59\": rpc error: code = NotFound desc = could not find container \"092b9810914645dd43b1bd828c827e005adee84d04169692a076129b9ce3da59\": container with ID starting with 092b9810914645dd43b1bd828c827e005adee84d04169692a076129b9ce3da59 not found: ID does not exist"
Oct 11 07:43:54 crc kubenswrapper[5016]: I1011 07:43:54.423264 5016 scope.go:117] "RemoveContainer" containerID="bff8bb7ee319fec1b5020b89e55a4565922381ded2bf5a6354d49c74b4f63606"
Oct 11 07:43:54 crc kubenswrapper[5016]: E1011 07:43:54.423767 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bff8bb7ee319fec1b5020b89e55a4565922381ded2bf5a6354d49c74b4f63606\": container with ID starting with bff8bb7ee319fec1b5020b89e55a4565922381ded2bf5a6354d49c74b4f63606 not found: ID does not exist" containerID="bff8bb7ee319fec1b5020b89e55a4565922381ded2bf5a6354d49c74b4f63606"
Oct 11 07:43:54 crc kubenswrapper[5016]: I1011 07:43:54.423892 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bff8bb7ee319fec1b5020b89e55a4565922381ded2bf5a6354d49c74b4f63606"} err="failed to get container status \"bff8bb7ee319fec1b5020b89e55a4565922381ded2bf5a6354d49c74b4f63606\": rpc error: code = NotFound desc = could not find container \"bff8bb7ee319fec1b5020b89e55a4565922381ded2bf5a6354d49c74b4f63606\": container with ID starting with bff8bb7ee319fec1b5020b89e55a4565922381ded2bf5a6354d49c74b4f63606 not found: ID does not exist"
Oct 11 07:43:54 crc kubenswrapper[5016]: I1011 07:43:54.423972 5016 scope.go:117] "RemoveContainer" containerID="08a37896fd418108b2eaa11999dd4b3618cab7c264a063476428f492bc8b24b8"
Oct 11 07:43:54 crc kubenswrapper[5016]: E1011 07:43:54.424297 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08a37896fd418108b2eaa11999dd4b3618cab7c264a063476428f492bc8b24b8\": container with ID starting with 08a37896fd418108b2eaa11999dd4b3618cab7c264a063476428f492bc8b24b8 not found: ID does not exist" containerID="08a37896fd418108b2eaa11999dd4b3618cab7c264a063476428f492bc8b24b8"
Oct 11 07:43:54 crc kubenswrapper[5016]: I1011 07:43:54.424317 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08a37896fd418108b2eaa11999dd4b3618cab7c264a063476428f492bc8b24b8"} err="failed to get container status \"08a37896fd418108b2eaa11999dd4b3618cab7c264a063476428f492bc8b24b8\": rpc error: code = NotFound desc = could not find container \"08a37896fd418108b2eaa11999dd4b3618cab7c264a063476428f492bc8b24b8\": container with ID starting with 08a37896fd418108b2eaa11999dd4b3618cab7c264a063476428f492bc8b24b8 not found: ID does not exist"
Oct 11 07:43:55 crc kubenswrapper[5016]: I1011 07:43:55.140448 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06ae2c13-8dcb-4b69-af1b-668fbb548730" path="/var/lib/kubelet/pods/06ae2c13-8dcb-4b69-af1b-668fbb548730/volumes"
Oct 11 07:43:57 crc kubenswrapper[5016]: I1011 07:43:57.312425 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6fmcj"
Oct 11 07:43:57 crc kubenswrapper[5016]: I1011 07:43:57.348684 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6fmcj"
Oct 11 07:43:59 crc kubenswrapper[5016]: I1011 07:43:59.293397 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2swk6"
Oct 11 07:43:59 crc kubenswrapper[5016]: I1011 07:43:59.742432 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z7v6x"
Oct 11 07:43:59 crc kubenswrapper[5016]: I1011 07:43:59.869972 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z7v6x"]
Oct 11 07:44:00 crc kubenswrapper[5016]: I1011 07:44:00.333943 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4ddw2"
Oct 11 07:44:00 crc kubenswrapper[5016]: I1011 07:44:00.372543 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4ddw2"
Oct 11 07:44:00 crc kubenswrapper[5016]: I1011 07:44:00.396601 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z7v6x" podUID="2e4fa77c-0420-4669-b5df-2601e4ca6404" containerName="registry-server" containerID="cri-o://05b3e7e8e6d263c6232ebf297162b89ce525b7c6318456e2377d791b08d90a11" gracePeriod=2
Oct 11 07:44:01 crc kubenswrapper[5016]: I1011 07:44:01.303492 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z7v6x"
Oct 11 07:44:01 crc kubenswrapper[5016]: I1011 07:44:01.360829 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e4fa77c-0420-4669-b5df-2601e4ca6404-utilities\") pod \"2e4fa77c-0420-4669-b5df-2601e4ca6404\" (UID: \"2e4fa77c-0420-4669-b5df-2601e4ca6404\") "
Oct 11 07:44:01 crc kubenswrapper[5016]: I1011 07:44:01.360875 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psfxg\" (UniqueName: \"kubernetes.io/projected/2e4fa77c-0420-4669-b5df-2601e4ca6404-kube-api-access-psfxg\") pod \"2e4fa77c-0420-4669-b5df-2601e4ca6404\" (UID: \"2e4fa77c-0420-4669-b5df-2601e4ca6404\") "
Oct 11 07:44:01 crc kubenswrapper[5016]: I1011 07:44:01.360896 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e4fa77c-0420-4669-b5df-2601e4ca6404-catalog-content\") pod \"2e4fa77c-0420-4669-b5df-2601e4ca6404\" (UID: \"2e4fa77c-0420-4669-b5df-2601e4ca6404\") "
Oct 11 07:44:01 crc kubenswrapper[5016]: I1011 07:44:01.361788 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e4fa77c-0420-4669-b5df-2601e4ca6404-utilities" (OuterVolumeSpecName: "utilities") pod "2e4fa77c-0420-4669-b5df-2601e4ca6404" (UID: "2e4fa77c-0420-4669-b5df-2601e4ca6404"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 07:44:01 crc kubenswrapper[5016]: I1011 07:44:01.365977 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e4fa77c-0420-4669-b5df-2601e4ca6404-kube-api-access-psfxg" (OuterVolumeSpecName: "kube-api-access-psfxg") pod "2e4fa77c-0420-4669-b5df-2601e4ca6404" (UID: "2e4fa77c-0420-4669-b5df-2601e4ca6404"). InnerVolumeSpecName "kube-api-access-psfxg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 07:44:01 crc kubenswrapper[5016]: I1011 07:44:01.373221 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e4fa77c-0420-4669-b5df-2601e4ca6404-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e4fa77c-0420-4669-b5df-2601e4ca6404" (UID: "2e4fa77c-0420-4669-b5df-2601e4ca6404"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 07:44:01 crc kubenswrapper[5016]: I1011 07:44:01.403339 5016 generic.go:334] "Generic (PLEG): container finished" podID="2e4fa77c-0420-4669-b5df-2601e4ca6404" containerID="05b3e7e8e6d263c6232ebf297162b89ce525b7c6318456e2377d791b08d90a11" exitCode=0
Oct 11 07:44:01 crc kubenswrapper[5016]: I1011 07:44:01.403388 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z7v6x" event={"ID":"2e4fa77c-0420-4669-b5df-2601e4ca6404","Type":"ContainerDied","Data":"05b3e7e8e6d263c6232ebf297162b89ce525b7c6318456e2377d791b08d90a11"}
Oct 11 07:44:01 crc kubenswrapper[5016]: I1011 07:44:01.403426 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z7v6x"
Oct 11 07:44:01 crc kubenswrapper[5016]: I1011 07:44:01.403445 5016 scope.go:117] "RemoveContainer" containerID="05b3e7e8e6d263c6232ebf297162b89ce525b7c6318456e2377d791b08d90a11"
Oct 11 07:44:01 crc kubenswrapper[5016]: I1011 07:44:01.403414 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z7v6x" event={"ID":"2e4fa77c-0420-4669-b5df-2601e4ca6404","Type":"ContainerDied","Data":"b9d32b70509b727621f20821a2cbbd350ad951dd2f9fc378dd0063fceee00e7f"}
Oct 11 07:44:01 crc kubenswrapper[5016]: I1011 07:44:01.424777 5016 scope.go:117] "RemoveContainer" containerID="f42a74cfcf30d7c46b8f0f00a65458ccea90b34a1ccd5dfab991bd10bb8df565"
Oct 11 07:44:01 crc kubenswrapper[5016]: I1011 07:44:01.442263 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z7v6x"]
Oct 11 07:44:01 crc kubenswrapper[5016]: I1011 07:44:01.446327 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z7v6x"]
Oct 11 07:44:01 crc kubenswrapper[5016]: I1011 07:44:01.456849 5016 scope.go:117] "RemoveContainer" containerID="0fcdbb081722bbe709909bafd62051fc0fd843db054794630485f240114ec5d1"
Oct 11 07:44:01 crc kubenswrapper[5016]: I1011 07:44:01.461857 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e4fa77c-0420-4669-b5df-2601e4ca6404-utilities\") on node \"crc\" DevicePath \"\""
Oct 11 07:44:01 crc kubenswrapper[5016]: I1011 07:44:01.461930 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psfxg\" (UniqueName: \"kubernetes.io/projected/2e4fa77c-0420-4669-b5df-2601e4ca6404-kube-api-access-psfxg\") on node \"crc\" DevicePath \"\""
Oct 11 07:44:01 crc kubenswrapper[5016]: I1011 07:44:01.461942 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e4fa77c-0420-4669-b5df-2601e4ca6404-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 11 07:44:01 crc kubenswrapper[5016]: I1011 07:44:01.471458 5016 scope.go:117] "RemoveContainer" containerID="05b3e7e8e6d263c6232ebf297162b89ce525b7c6318456e2377d791b08d90a11"
Oct 11 07:44:01 crc kubenswrapper[5016]: E1011 07:44:01.471860 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05b3e7e8e6d263c6232ebf297162b89ce525b7c6318456e2377d791b08d90a11\": container with ID starting with 05b3e7e8e6d263c6232ebf297162b89ce525b7c6318456e2377d791b08d90a11 not found: ID does not exist" containerID="05b3e7e8e6d263c6232ebf297162b89ce525b7c6318456e2377d791b08d90a11"
Oct 11 07:44:01 crc kubenswrapper[5016]: I1011 07:44:01.471905 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05b3e7e8e6d263c6232ebf297162b89ce525b7c6318456e2377d791b08d90a11"} err="failed to get container status \"05b3e7e8e6d263c6232ebf297162b89ce525b7c6318456e2377d791b08d90a11\": rpc error: code = NotFound desc = could not find container \"05b3e7e8e6d263c6232ebf297162b89ce525b7c6318456e2377d791b08d90a11\": container with ID starting with 05b3e7e8e6d263c6232ebf297162b89ce525b7c6318456e2377d791b08d90a11 not found: ID does not exist"
Oct 11 07:44:01 crc kubenswrapper[5016]: I1011 07:44:01.471931 5016 scope.go:117] "RemoveContainer" containerID="f42a74cfcf30d7c46b8f0f00a65458ccea90b34a1ccd5dfab991bd10bb8df565"
Oct 11 07:44:01 crc kubenswrapper[5016]: E1011 07:44:01.472241 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f42a74cfcf30d7c46b8f0f00a65458ccea90b34a1ccd5dfab991bd10bb8df565\": container with ID starting with f42a74cfcf30d7c46b8f0f00a65458ccea90b34a1ccd5dfab991bd10bb8df565 not found: ID does not exist" containerID="f42a74cfcf30d7c46b8f0f00a65458ccea90b34a1ccd5dfab991bd10bb8df565"
Oct 11 07:44:01 crc kubenswrapper[5016]: I1011 07:44:01.472303 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f42a74cfcf30d7c46b8f0f00a65458ccea90b34a1ccd5dfab991bd10bb8df565"} err="failed to get container status \"f42a74cfcf30d7c46b8f0f00a65458ccea90b34a1ccd5dfab991bd10bb8df565\": rpc error: code = NotFound desc = could not find container \"f42a74cfcf30d7c46b8f0f00a65458ccea90b34a1ccd5dfab991bd10bb8df565\": container with ID starting with f42a74cfcf30d7c46b8f0f00a65458ccea90b34a1ccd5dfab991bd10bb8df565 not found: ID does not exist"
Oct 11 07:44:01 crc kubenswrapper[5016]: I1011 07:44:01.472345 5016 scope.go:117] "RemoveContainer" containerID="0fcdbb081722bbe709909bafd62051fc0fd843db054794630485f240114ec5d1"
Oct 11 07:44:01 crc kubenswrapper[5016]: E1011 07:44:01.472801 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fcdbb081722bbe709909bafd62051fc0fd843db054794630485f240114ec5d1\": container with ID starting with 0fcdbb081722bbe709909bafd62051fc0fd843db054794630485f240114ec5d1 not found: ID does not exist" containerID="0fcdbb081722bbe709909bafd62051fc0fd843db054794630485f240114ec5d1"
Oct 11 07:44:01 crc kubenswrapper[5016]: I1011 07:44:01.472855 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fcdbb081722bbe709909bafd62051fc0fd843db054794630485f240114ec5d1"} err="failed to get container status \"0fcdbb081722bbe709909bafd62051fc0fd843db054794630485f240114ec5d1\": rpc error: code = NotFound desc = could not find container \"0fcdbb081722bbe709909bafd62051fc0fd843db054794630485f240114ec5d1\": container with ID starting with 0fcdbb081722bbe709909bafd62051fc0fd843db054794630485f240114ec5d1 not found: ID does not exist"
Oct 11 07:44:03 crc kubenswrapper[5016]: I1011 07:44:03.143128 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e4fa77c-0420-4669-b5df-2601e4ca6404" path="/var/lib/kubelet/pods/2e4fa77c-0420-4669-b5df-2601e4ca6404/volumes"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.143344 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" podUID="0b04b8b6-3686-4217-b79b-374396ed61ec" containerName="oauth-openshift" containerID="cri-o://42e221ea5ed3a479bf210f854386a926095233950ea4e7ab7897a3f477aaeea3" gracePeriod=15
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.488967 5016 generic.go:334] "Generic (PLEG): container finished" podID="0b04b8b6-3686-4217-b79b-374396ed61ec" containerID="42e221ea5ed3a479bf210f854386a926095233950ea4e7ab7897a3f477aaeea3" exitCode=0
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.489048 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" event={"ID":"0b04b8b6-3686-4217-b79b-374396ed61ec","Type":"ContainerDied","Data":"42e221ea5ed3a479bf210f854386a926095233950ea4e7ab7897a3f477aaeea3"}
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.517789 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.547822 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7f54ff7574-jllt6"]
Oct 11 07:44:17 crc kubenswrapper[5016]: E1011 07:44:17.548046 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acba04bc-9a0a-4350-bf3a-0e404e0873d8" containerName="pruner"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.548059 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="acba04bc-9a0a-4350-bf3a-0e404e0873d8" containerName="pruner"
Oct 11 07:44:17 crc kubenswrapper[5016]: E1011 07:44:17.548071 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e94d943-d0bf-4ffc-9109-3d821982dbc6" containerName="extract-content"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.548080 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e94d943-d0bf-4ffc-9109-3d821982dbc6" containerName="extract-content"
Oct 11 07:44:17 crc kubenswrapper[5016]: E1011 07:44:17.548092 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e94d943-d0bf-4ffc-9109-3d821982dbc6" containerName="registry-server"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.548099 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e94d943-d0bf-4ffc-9109-3d821982dbc6" containerName="registry-server"
Oct 11 07:44:17 crc kubenswrapper[5016]: E1011 07:44:17.548109 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e4fa77c-0420-4669-b5df-2601e4ca6404" containerName="extract-content"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.548116 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4fa77c-0420-4669-b5df-2601e4ca6404" containerName="extract-content"
Oct 11 07:44:17 crc kubenswrapper[5016]: E1011 07:44:17.548125 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2fca8b5-8ccb-4100-8570-82b07bdae3ee" containerName="collect-profiles"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.548132 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2fca8b5-8ccb-4100-8570-82b07bdae3ee" containerName="collect-profiles"
Oct 11 07:44:17 crc kubenswrapper[5016]: E1011 07:44:17.548145 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef441d82-59b8-4316-8950-b2aea1636de4" containerName="extract-utilities"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.548154 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef441d82-59b8-4316-8950-b2aea1636de4" containerName="extract-utilities"
Oct 11 07:44:17 crc kubenswrapper[5016]: E1011 07:44:17.548163 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ae2c13-8dcb-4b69-af1b-668fbb548730" containerName="extract-content"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.548170 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ae2c13-8dcb-4b69-af1b-668fbb548730" containerName="extract-content"
Oct 11 07:44:17 crc kubenswrapper[5016]: E1011 07:44:17.548178 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e94d943-d0bf-4ffc-9109-3d821982dbc6" containerName="extract-utilities"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.548185 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e94d943-d0bf-4ffc-9109-3d821982dbc6" containerName="extract-utilities"
Oct 11 07:44:17 crc kubenswrapper[5016]: E1011 07:44:17.548196 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e4fa77c-0420-4669-b5df-2601e4ca6404" containerName="extract-utilities"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.548204 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4fa77c-0420-4669-b5df-2601e4ca6404" containerName="extract-utilities"
Oct 11 07:44:17 crc kubenswrapper[5016]: E1011 07:44:17.548214 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5d0511e-bd2e-4744-a435-ed8ef3bbd465" containerName="pruner"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.548222 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5d0511e-bd2e-4744-a435-ed8ef3bbd465" containerName="pruner"
Oct 11 07:44:17 crc kubenswrapper[5016]: E1011 07:44:17.548231 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b04b8b6-3686-4217-b79b-374396ed61ec" containerName="oauth-openshift"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.548239 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b04b8b6-3686-4217-b79b-374396ed61ec" containerName="oauth-openshift"
Oct 11 07:44:17 crc kubenswrapper[5016]: E1011 07:44:17.548247 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef441d82-59b8-4316-8950-b2aea1636de4" containerName="registry-server"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.548252 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef441d82-59b8-4316-8950-b2aea1636de4" containerName="registry-server"
Oct 11 07:44:17 crc kubenswrapper[5016]: E1011 07:44:17.548260 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ae2c13-8dcb-4b69-af1b-668fbb548730" containerName="registry-server"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.548266 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ae2c13-8dcb-4b69-af1b-668fbb548730" containerName="registry-server"
Oct 11 07:44:17 crc kubenswrapper[5016]: E1011 07:44:17.548273 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e4fa77c-0420-4669-b5df-2601e4ca6404" containerName="registry-server"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.548279 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4fa77c-0420-4669-b5df-2601e4ca6404" containerName="registry-server"
Oct 11 07:44:17 crc kubenswrapper[5016]: E1011 07:44:17.548287 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef441d82-59b8-4316-8950-b2aea1636de4" containerName="extract-content"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.548292 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef441d82-59b8-4316-8950-b2aea1636de4" containerName="extract-content"
Oct 11 07:44:17 crc kubenswrapper[5016]: E1011 07:44:17.548300 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06ae2c13-8dcb-4b69-af1b-668fbb548730" containerName="extract-utilities"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.548306 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="06ae2c13-8dcb-4b69-af1b-668fbb548730" containerName="extract-utilities"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.548412 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2fca8b5-8ccb-4100-8570-82b07bdae3ee" containerName="collect-profiles"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.548422 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="06ae2c13-8dcb-4b69-af1b-668fbb548730" containerName="registry-server"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.548431 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef441d82-59b8-4316-8950-b2aea1636de4" containerName="registry-server"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.548440 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="acba04bc-9a0a-4350-bf3a-0e404e0873d8" containerName="pruner"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.548449 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5d0511e-bd2e-4744-a435-ed8ef3bbd465" containerName="pruner"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.548454 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e94d943-d0bf-4ffc-9109-3d821982dbc6" containerName="registry-server"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.548463 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b04b8b6-3686-4217-b79b-374396ed61ec" containerName="oauth-openshift"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.548472 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e4fa77c-0420-4669-b5df-2601e4ca6404" containerName="registry-server"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.548844 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.559740 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7f54ff7574-jllt6"]
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.580683 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-user-template-login\") pod \"0b04b8b6-3686-4217-b79b-374396ed61ec\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") "
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.580822 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5jq2\" (UniqueName: \"kubernetes.io/projected/0b04b8b6-3686-4217-b79b-374396ed61ec-kube-api-access-w5jq2\") pod \"0b04b8b6-3686-4217-b79b-374396ed61ec\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") "
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.580863 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-user-template-provider-selection\") pod \"0b04b8b6-3686-4217-b79b-374396ed61ec\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") "
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.580896 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-router-certs\") pod \"0b04b8b6-3686-4217-b79b-374396ed61ec\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") "
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.580970 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-trusted-ca-bundle\") pod \"0b04b8b6-3686-4217-b79b-374396ed61ec\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") "
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.581001 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b04b8b6-3686-4217-b79b-374396ed61ec-audit-dir\") pod \"0b04b8b6-3686-4217-b79b-374396ed61ec\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") "
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.581033 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-user-template-error\") pod \"0b04b8b6-3686-4217-b79b-374396ed61ec\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") "
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.581062 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-ocp-branding-template\") pod \"0b04b8b6-3686-4217-b79b-374396ed61ec\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") "
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.581094 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-service-ca\") pod \"0b04b8b6-3686-4217-b79b-374396ed61ec\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") "
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.581129 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-session\") pod \"0b04b8b6-3686-4217-b79b-374396ed61ec\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") "
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.581166 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-serving-cert\") pod \"0b04b8b6-3686-4217-b79b-374396ed61ec\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") "
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.581213 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-user-idp-0-file-data\") pod \"0b04b8b6-3686-4217-b79b-374396ed61ec\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") "
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.581305 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-cliconfig\") pod \"0b04b8b6-3686-4217-b79b-374396ed61ec\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") "
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.581351 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0b04b8b6-3686-4217-b79b-374396ed61ec-audit-policies\") pod \"0b04b8b6-3686-4217-b79b-374396ed61ec\" (UID: \"0b04b8b6-3686-4217-b79b-374396ed61ec\") "
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.581574 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52fqh\" (UniqueName: \"kubernetes.io/projected/63ea3ac5-54e2-46bb-ae23-422b09df3c95-kube-api-access-52fqh\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.581627 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.581694 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-user-template-error\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.581731 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.581762 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.581787 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.581848 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/63ea3ac5-54e2-46bb-ae23-422b09df3c95-audit-dir\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.581889 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.581920 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.582018 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b04b8b6-3686-4217-b79b-374396ed61ec-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "0b04b8b6-3686-4217-b79b-374396ed61ec" (UID: "0b04b8b6-3686-4217-b79b-374396ed61ec"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.582029 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.582217 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.582345 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/63ea3ac5-54e2-46bb-ae23-422b09df3c95-audit-policies\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.582385 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-user-template-login\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.582450 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-system-session\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6"
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.582709 5016 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b04b8b6-3686-4217-b79b-374396ed61ec-audit-dir\") on node \"crc\" DevicePath \"\""
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.583232 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "0b04b8b6-3686-4217-b79b-374396ed61ec" (UID: "0b04b8b6-3686-4217-b79b-374396ed61ec"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.583254 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "0b04b8b6-3686-4217-b79b-374396ed61ec" (UID: "0b04b8b6-3686-4217-b79b-374396ed61ec"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.583280 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "0b04b8b6-3686-4217-b79b-374396ed61ec" (UID: "0b04b8b6-3686-4217-b79b-374396ed61ec"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.583581 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b04b8b6-3686-4217-b79b-374396ed61ec-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "0b04b8b6-3686-4217-b79b-374396ed61ec" (UID: "0b04b8b6-3686-4217-b79b-374396ed61ec"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.587325 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "0b04b8b6-3686-4217-b79b-374396ed61ec" (UID: "0b04b8b6-3686-4217-b79b-374396ed61ec"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.591060 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b04b8b6-3686-4217-b79b-374396ed61ec-kube-api-access-w5jq2" (OuterVolumeSpecName: "kube-api-access-w5jq2") pod "0b04b8b6-3686-4217-b79b-374396ed61ec" (UID: "0b04b8b6-3686-4217-b79b-374396ed61ec"). InnerVolumeSpecName "kube-api-access-w5jq2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.593130 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "0b04b8b6-3686-4217-b79b-374396ed61ec" (UID: "0b04b8b6-3686-4217-b79b-374396ed61ec"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data".
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.593547 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "0b04b8b6-3686-4217-b79b-374396ed61ec" (UID: "0b04b8b6-3686-4217-b79b-374396ed61ec"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.594145 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "0b04b8b6-3686-4217-b79b-374396ed61ec" (UID: "0b04b8b6-3686-4217-b79b-374396ed61ec"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.594378 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "0b04b8b6-3686-4217-b79b-374396ed61ec" (UID: "0b04b8b6-3686-4217-b79b-374396ed61ec"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.594622 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "0b04b8b6-3686-4217-b79b-374396ed61ec" (UID: "0b04b8b6-3686-4217-b79b-374396ed61ec"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.594814 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "0b04b8b6-3686-4217-b79b-374396ed61ec" (UID: "0b04b8b6-3686-4217-b79b-374396ed61ec"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.595054 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "0b04b8b6-3686-4217-b79b-374396ed61ec" (UID: "0b04b8b6-3686-4217-b79b-374396ed61ec"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.683795 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.683860 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.683903 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/63ea3ac5-54e2-46bb-ae23-422b09df3c95-audit-policies\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.683927 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-user-template-login\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.683966 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-system-session\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.683986 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.684002 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52fqh\" (UniqueName: \"kubernetes.io/projected/63ea3ac5-54e2-46bb-ae23-422b09df3c95-kube-api-access-52fqh\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.684018 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-user-template-error\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " 
pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.684042 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.684063 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.684080 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.684108 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/63ea3ac5-54e2-46bb-ae23-422b09df3c95-audit-dir\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.684128 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.684146 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.684184 5016 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.684196 5016 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.684208 5016 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.684219 5016 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.684228 5016 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.684237 5016 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.684249 5016 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.684259 5016 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.684268 5016 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0b04b8b6-3686-4217-b79b-374396ed61ec-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.684277 5016 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.684286 5016 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.684295 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5jq2\" (UniqueName: \"kubernetes.io/projected/0b04b8b6-3686-4217-b79b-374396ed61ec-kube-api-access-w5jq2\") on node \"crc\" DevicePath \"\"" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.684305 5016 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0b04b8b6-3686-4217-b79b-374396ed61ec-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.684694 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/63ea3ac5-54e2-46bb-ae23-422b09df3c95-audit-dir\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.684986 5016 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.685105 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.685373 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.685466 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/63ea3ac5-54e2-46bb-ae23-422b09df3c95-audit-policies\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.687749 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-system-session\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.688714 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.689035 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.689269 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.689945 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-user-template-login\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.690168 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.690174 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-user-template-error\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.690897 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/63ea3ac5-54e2-46bb-ae23-422b09df3c95-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.700217 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52fqh\" (UniqueName: \"kubernetes.io/projected/63ea3ac5-54e2-46bb-ae23-422b09df3c95-kube-api-access-52fqh\") pod \"oauth-openshift-7f54ff7574-jllt6\" (UID: \"63ea3ac5-54e2-46bb-ae23-422b09df3c95\") " pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:17 crc kubenswrapper[5016]: I1011 07:44:17.863215 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:18 crc kubenswrapper[5016]: I1011 07:44:18.154258 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7f54ff7574-jllt6"] Oct 11 07:44:18 crc kubenswrapper[5016]: I1011 07:44:18.496541 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" event={"ID":"0b04b8b6-3686-4217-b79b-374396ed61ec","Type":"ContainerDied","Data":"247ee07f4d08af680aedc3c4ce74b3b44802a7b8aca50c60142a59695f7b03c4"} Oct 11 07:44:18 crc kubenswrapper[5016]: I1011 07:44:18.496567 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6mhg9" Oct 11 07:44:18 crc kubenswrapper[5016]: I1011 07:44:18.496905 5016 scope.go:117] "RemoveContainer" containerID="42e221ea5ed3a479bf210f854386a926095233950ea4e7ab7897a3f477aaeea3" Oct 11 07:44:18 crc kubenswrapper[5016]: I1011 07:44:18.507371 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" event={"ID":"63ea3ac5-54e2-46bb-ae23-422b09df3c95","Type":"ContainerStarted","Data":"b26fc21586d07173ba74053f159cc7c7cc72fc4021c07b1079fb8b828bc13432"} Oct 11 07:44:18 crc kubenswrapper[5016]: I1011 07:44:18.507413 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" event={"ID":"63ea3ac5-54e2-46bb-ae23-422b09df3c95","Type":"ContainerStarted","Data":"785b21c16a1aa6b4d49341fc41696acf1dbe76f179147d86a98399f4a8e10ba8"} Oct 11 07:44:18 crc kubenswrapper[5016]: I1011 07:44:18.507687 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:18 crc kubenswrapper[5016]: I1011 07:44:18.508969 5016 patch_prober.go:28] interesting pod/oauth-openshift-7f54ff7574-jllt6 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.54:6443/healthz\": dial tcp 10.217.0.54:6443: connect: connection refused" start-of-body= Oct 11 07:44:18 crc kubenswrapper[5016]: I1011 07:44:18.509008 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" podUID="63ea3ac5-54e2-46bb-ae23-422b09df3c95" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.54:6443/healthz\": dial tcp 10.217.0.54:6443: connect: connection refused" Oct 11 07:44:18 crc kubenswrapper[5016]: I1011 07:44:18.546310 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" podStartSLOduration=26.54628203 podStartE2EDuration="26.54628203s" podCreationTimestamp="2025-10-11 07:43:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:44:18.541565201 +0000 UTC m=+246.442021157" watchObservedRunningTime="2025-10-11 07:44:18.54628203 +0000 UTC m=+246.446738016" Oct 11 07:44:18 crc kubenswrapper[5016]: I1011 07:44:18.553301 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6mhg9"] Oct 11 07:44:18 crc kubenswrapper[5016]: I1011 07:44:18.561103 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6mhg9"] Oct 11 07:44:19 crc kubenswrapper[5016]: I1011 07:44:19.144889 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b04b8b6-3686-4217-b79b-374396ed61ec" path="/var/lib/kubelet/pods/0b04b8b6-3686-4217-b79b-374396ed61ec/volumes" Oct 11 07:44:19 crc kubenswrapper[5016]: I1011 07:44:19.520875 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7f54ff7574-jllt6" Oct 11 07:44:29 crc kubenswrapper[5016]: I1011 07:44:29.496640 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6fmcj"] Oct 11 07:44:29 crc kubenswrapper[5016]: I1011 07:44:29.497492 5016 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/certified-operators-6fmcj" podUID="add7f50e-e0bb-45cb-b76e-c3eec203832b" containerName="registry-server" containerID="cri-o://019e1e74d232bfd50b75c0b43ed7f4ad9e7fe299d99e30896addf8d0b9b0854e" gracePeriod=30 Oct 11 07:44:29 crc kubenswrapper[5016]: I1011 07:44:29.503639 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jn9bl"] Oct 11 07:44:29 crc kubenswrapper[5016]: I1011 07:44:29.503897 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jn9bl" podUID="29ad589f-847e-44b2-9c6c-720c6ca1312d" containerName="registry-server" containerID="cri-o://a368f60d9de19154b1bcc121704979fb94aefa1a9686e8b560603698f014992d" gracePeriod=30 Oct 11 07:44:29 crc kubenswrapper[5016]: I1011 07:44:29.527227 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5bblf"] Oct 11 07:44:29 crc kubenswrapper[5016]: I1011 07:44:29.527913 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-5bblf" podUID="12dfb419-e03a-48b3-b448-225f83bd8de3" containerName="marketplace-operator" containerID="cri-o://03752e5912ab56cac185da3955a38991080cc2e9f80aa4eced07a6bfa3ce2a03" gracePeriod=30 Oct 11 07:44:29 crc kubenswrapper[5016]: I1011 07:44:29.535152 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2swk6"] Oct 11 07:44:29 crc kubenswrapper[5016]: I1011 07:44:29.535379 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2swk6" podUID="d3c23e60-9dde-4c84-859f-60fb9fa03683" containerName="registry-server" containerID="cri-o://7674b2c00eacbd52953aa84fc6e421052a38efe416f8c7d5539546391b77e41c" gracePeriod=30 Oct 11 07:44:29 crc kubenswrapper[5016]: I1011 07:44:29.544013 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2d7px"] Oct 11 07:44:29 crc kubenswrapper[5016]: I1011 07:44:29.545093 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2d7px" Oct 11 07:44:29 crc kubenswrapper[5016]: I1011 07:44:29.549950 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4ddw2"] Oct 11 07:44:29 crc kubenswrapper[5016]: I1011 07:44:29.550192 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4ddw2" podUID="d4daa5a9-22b7-4859-a375-cb4dec19a7af" containerName="registry-server" containerID="cri-o://bb8f362ddab8702c78697e20a85231133687cf8cd858a2ca1076291e32836956" gracePeriod=30 Oct 11 07:44:29 crc kubenswrapper[5016]: I1011 07:44:29.555967 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2d7px"] Oct 11 07:44:29 crc kubenswrapper[5016]: I1011 07:44:29.648313 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlj66\" (UniqueName: \"kubernetes.io/projected/511b8fec-a727-401a-bfe9-8201786f9bea-kube-api-access-dlj66\") pod \"marketplace-operator-79b997595-2d7px\" (UID: \"511b8fec-a727-401a-bfe9-8201786f9bea\") " pod="openshift-marketplace/marketplace-operator-79b997595-2d7px" Oct 11 07:44:29 crc kubenswrapper[5016]: I1011 07:44:29.648446 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/511b8fec-a727-401a-bfe9-8201786f9bea-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2d7px\" (UID: \"511b8fec-a727-401a-bfe9-8201786f9bea\") " pod="openshift-marketplace/marketplace-operator-79b997595-2d7px" Oct 11 07:44:29 crc kubenswrapper[5016]: I1011 07:44:29.648483 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/511b8fec-a727-401a-bfe9-8201786f9bea-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2d7px\" (UID: \"511b8fec-a727-401a-bfe9-8201786f9bea\") " pod="openshift-marketplace/marketplace-operator-79b997595-2d7px" Oct 11 07:44:29 crc kubenswrapper[5016]: I1011 07:44:29.749857 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/511b8fec-a727-401a-bfe9-8201786f9bea-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2d7px\" (UID: \"511b8fec-a727-401a-bfe9-8201786f9bea\") " pod="openshift-marketplace/marketplace-operator-79b997595-2d7px" Oct 11 07:44:29 crc kubenswrapper[5016]: I1011 07:44:29.749920 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/511b8fec-a727-401a-bfe9-8201786f9bea-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2d7px\" (UID: \"511b8fec-a727-401a-bfe9-8201786f9bea\") " pod="openshift-marketplace/marketplace-operator-79b997595-2d7px" Oct 11 07:44:29 crc kubenswrapper[5016]: I1011 07:44:29.749963 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlj66\" (UniqueName: \"kubernetes.io/projected/511b8fec-a727-401a-bfe9-8201786f9bea-kube-api-access-dlj66\") pod \"marketplace-operator-79b997595-2d7px\" (UID: \"511b8fec-a727-401a-bfe9-8201786f9bea\") " pod="openshift-marketplace/marketplace-operator-79b997595-2d7px" Oct 11 07:44:29 crc kubenswrapper[5016]: I1011 07:44:29.751348 5016 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/511b8fec-a727-401a-bfe9-8201786f9bea-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2d7px\" (UID: \"511b8fec-a727-401a-bfe9-8201786f9bea\") " pod="openshift-marketplace/marketplace-operator-79b997595-2d7px" Oct 11 07:44:29 crc kubenswrapper[5016]: I1011 07:44:29.755242 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/511b8fec-a727-401a-bfe9-8201786f9bea-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2d7px\" (UID: \"511b8fec-a727-401a-bfe9-8201786f9bea\") " pod="openshift-marketplace/marketplace-operator-79b997595-2d7px" Oct 11 07:44:29 crc kubenswrapper[5016]: I1011 07:44:29.764434 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlj66\" (UniqueName: \"kubernetes.io/projected/511b8fec-a727-401a-bfe9-8201786f9bea-kube-api-access-dlj66\") pod \"marketplace-operator-79b997595-2d7px\" (UID: \"511b8fec-a727-401a-bfe9-8201786f9bea\") " pod="openshift-marketplace/marketplace-operator-79b997595-2d7px" Oct 11 07:44:29 crc kubenswrapper[5016]: I1011 07:44:29.969369 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2d7px" Oct 11 07:44:29 crc kubenswrapper[5016]: I1011 07:44:29.980009 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6fmcj" Oct 11 07:44:29 crc kubenswrapper[5016]: I1011 07:44:29.987136 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5bblf" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.002744 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2swk6" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.029064 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4ddw2" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.044169 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jn9bl" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.053912 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/12dfb419-e03a-48b3-b448-225f83bd8de3-marketplace-operator-metrics\") pod \"12dfb419-e03a-48b3-b448-225f83bd8de3\" (UID: \"12dfb419-e03a-48b3-b448-225f83bd8de3\") " Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.053961 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12dfb419-e03a-48b3-b448-225f83bd8de3-marketplace-trusted-ca\") pod \"12dfb419-e03a-48b3-b448-225f83bd8de3\" (UID: \"12dfb419-e03a-48b3-b448-225f83bd8de3\") " Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.053993 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/add7f50e-e0bb-45cb-b76e-c3eec203832b-utilities\") pod \"add7f50e-e0bb-45cb-b76e-c3eec203832b\" (UID: \"add7f50e-e0bb-45cb-b76e-c3eec203832b\") " Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.054012 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3c23e60-9dde-4c84-859f-60fb9fa03683-utilities\") pod \"d3c23e60-9dde-4c84-859f-60fb9fa03683\" (UID: \"d3c23e60-9dde-4c84-859f-60fb9fa03683\") " Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.054047 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrghg\" (UniqueName: \"kubernetes.io/projected/12dfb419-e03a-48b3-b448-225f83bd8de3-kube-api-access-wrghg\") pod \"12dfb419-e03a-48b3-b448-225f83bd8de3\" (UID: \"12dfb419-e03a-48b3-b448-225f83bd8de3\") " Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.054066 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g94n\" (UniqueName: \"kubernetes.io/projected/d3c23e60-9dde-4c84-859f-60fb9fa03683-kube-api-access-7g94n\") pod \"d3c23e60-9dde-4c84-859f-60fb9fa03683\" (UID: \"d3c23e60-9dde-4c84-859f-60fb9fa03683\") " Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.054081 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3c23e60-9dde-4c84-859f-60fb9fa03683-catalog-content\") pod \"d3c23e60-9dde-4c84-859f-60fb9fa03683\" (UID: \"d3c23e60-9dde-4c84-859f-60fb9fa03683\") " Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.054102 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/add7f50e-e0bb-45cb-b76e-c3eec203832b-catalog-content\") pod \"add7f50e-e0bb-45cb-b76e-c3eec203832b\" (UID: \"add7f50e-e0bb-45cb-b76e-c3eec203832b\") " Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.054121 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmdwz\" (UniqueName: \"kubernetes.io/projected/add7f50e-e0bb-45cb-b76e-c3eec203832b-kube-api-access-hmdwz\") pod \"add7f50e-e0bb-45cb-b76e-c3eec203832b\" (UID: \"add7f50e-e0bb-45cb-b76e-c3eec203832b\") " Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.058969 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/d3c23e60-9dde-4c84-859f-60fb9fa03683-kube-api-access-7g94n" (OuterVolumeSpecName: "kube-api-access-7g94n") pod "d3c23e60-9dde-4c84-859f-60fb9fa03683" (UID: "d3c23e60-9dde-4c84-859f-60fb9fa03683"). InnerVolumeSpecName "kube-api-access-7g94n". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.059405 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12dfb419-e03a-48b3-b448-225f83bd8de3-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "12dfb419-e03a-48b3-b448-225f83bd8de3" (UID: "12dfb419-e03a-48b3-b448-225f83bd8de3"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.060745 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12dfb419-e03a-48b3-b448-225f83bd8de3-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "12dfb419-e03a-48b3-b448-225f83bd8de3" (UID: "12dfb419-e03a-48b3-b448-225f83bd8de3"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.058917 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/add7f50e-e0bb-45cb-b76e-c3eec203832b-utilities" (OuterVolumeSpecName: "utilities") pod "add7f50e-e0bb-45cb-b76e-c3eec203832b" (UID: "add7f50e-e0bb-45cb-b76e-c3eec203832b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.061338 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3c23e60-9dde-4c84-859f-60fb9fa03683-utilities" (OuterVolumeSpecName: "utilities") pod "d3c23e60-9dde-4c84-859f-60fb9fa03683" (UID: "d3c23e60-9dde-4c84-859f-60fb9fa03683"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.062465 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12dfb419-e03a-48b3-b448-225f83bd8de3-kube-api-access-wrghg" (OuterVolumeSpecName: "kube-api-access-wrghg") pod "12dfb419-e03a-48b3-b448-225f83bd8de3" (UID: "12dfb419-e03a-48b3-b448-225f83bd8de3"). InnerVolumeSpecName "kube-api-access-wrghg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.081341 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/add7f50e-e0bb-45cb-b76e-c3eec203832b-kube-api-access-hmdwz" (OuterVolumeSpecName: "kube-api-access-hmdwz") pod "add7f50e-e0bb-45cb-b76e-c3eec203832b" (UID: "add7f50e-e0bb-45cb-b76e-c3eec203832b"). InnerVolumeSpecName "kube-api-access-hmdwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.086164 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3c23e60-9dde-4c84-859f-60fb9fa03683-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3c23e60-9dde-4c84-859f-60fb9fa03683" (UID: "d3c23e60-9dde-4c84-859f-60fb9fa03683"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.148153 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/add7f50e-e0bb-45cb-b76e-c3eec203832b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "add7f50e-e0bb-45cb-b76e-c3eec203832b" (UID: "add7f50e-e0bb-45cb-b76e-c3eec203832b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.155194 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29ad589f-847e-44b2-9c6c-720c6ca1312d-utilities\") pod \"29ad589f-847e-44b2-9c6c-720c6ca1312d\" (UID: \"29ad589f-847e-44b2-9c6c-720c6ca1312d\") " Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.155250 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqkjf\" (UniqueName: \"kubernetes.io/projected/29ad589f-847e-44b2-9c6c-720c6ca1312d-kube-api-access-jqkjf\") pod \"29ad589f-847e-44b2-9c6c-720c6ca1312d\" (UID: \"29ad589f-847e-44b2-9c6c-720c6ca1312d\") " Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.155278 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4daa5a9-22b7-4859-a375-cb4dec19a7af-utilities\") pod \"d4daa5a9-22b7-4859-a375-cb4dec19a7af\" (UID: \"d4daa5a9-22b7-4859-a375-cb4dec19a7af\") " Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.155314 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4daa5a9-22b7-4859-a375-cb4dec19a7af-catalog-content\") pod \"d4daa5a9-22b7-4859-a375-cb4dec19a7af\" (UID: \"d4daa5a9-22b7-4859-a375-cb4dec19a7af\") " Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.155354 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29ad589f-847e-44b2-9c6c-720c6ca1312d-catalog-content\") pod \"29ad589f-847e-44b2-9c6c-720c6ca1312d\" (UID: \"29ad589f-847e-44b2-9c6c-720c6ca1312d\") " Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.155381 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8vfq\" (UniqueName: \"kubernetes.io/projected/d4daa5a9-22b7-4859-a375-cb4dec19a7af-kube-api-access-z8vfq\") pod \"d4daa5a9-22b7-4859-a375-cb4dec19a7af\" (UID: \"d4daa5a9-22b7-4859-a375-cb4dec19a7af\") " Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.155556 5016 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/12dfb419-e03a-48b3-b448-225f83bd8de3-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.155571 5016 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/12dfb419-e03a-48b3-b448-225f83bd8de3-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.155580 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/add7f50e-e0bb-45cb-b76e-c3eec203832b-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.155588 5016 
reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3c23e60-9dde-4c84-859f-60fb9fa03683-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.155596 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrghg\" (UniqueName: \"kubernetes.io/projected/12dfb419-e03a-48b3-b448-225f83bd8de3-kube-api-access-wrghg\") on node \"crc\" DevicePath \"\"" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.155604 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g94n\" (UniqueName: \"kubernetes.io/projected/d3c23e60-9dde-4c84-859f-60fb9fa03683-kube-api-access-7g94n\") on node \"crc\" DevicePath \"\"" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.155612 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3c23e60-9dde-4c84-859f-60fb9fa03683-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.155619 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/add7f50e-e0bb-45cb-b76e-c3eec203832b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.155629 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmdwz\" (UniqueName: \"kubernetes.io/projected/add7f50e-e0bb-45cb-b76e-c3eec203832b-kube-api-access-hmdwz\") on node \"crc\" DevicePath \"\"" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.156580 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4daa5a9-22b7-4859-a375-cb4dec19a7af-utilities" (OuterVolumeSpecName: "utilities") pod "d4daa5a9-22b7-4859-a375-cb4dec19a7af" (UID: "d4daa5a9-22b7-4859-a375-cb4dec19a7af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.156808 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29ad589f-847e-44b2-9c6c-720c6ca1312d-utilities" (OuterVolumeSpecName: "utilities") pod "29ad589f-847e-44b2-9c6c-720c6ca1312d" (UID: "29ad589f-847e-44b2-9c6c-720c6ca1312d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.159528 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4daa5a9-22b7-4859-a375-cb4dec19a7af-kube-api-access-z8vfq" (OuterVolumeSpecName: "kube-api-access-z8vfq") pod "d4daa5a9-22b7-4859-a375-cb4dec19a7af" (UID: "d4daa5a9-22b7-4859-a375-cb4dec19a7af"). InnerVolumeSpecName "kube-api-access-z8vfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.160166 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29ad589f-847e-44b2-9c6c-720c6ca1312d-kube-api-access-jqkjf" (OuterVolumeSpecName: "kube-api-access-jqkjf") pod "29ad589f-847e-44b2-9c6c-720c6ca1312d" (UID: "29ad589f-847e-44b2-9c6c-720c6ca1312d"). InnerVolumeSpecName "kube-api-access-jqkjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.211306 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29ad589f-847e-44b2-9c6c-720c6ca1312d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29ad589f-847e-44b2-9c6c-720c6ca1312d" (UID: "29ad589f-847e-44b2-9c6c-720c6ca1312d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.246366 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4daa5a9-22b7-4859-a375-cb4dec19a7af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4daa5a9-22b7-4859-a375-cb4dec19a7af" (UID: "d4daa5a9-22b7-4859-a375-cb4dec19a7af"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.257073 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4daa5a9-22b7-4859-a375-cb4dec19a7af-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.257102 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29ad589f-847e-44b2-9c6c-720c6ca1312d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.257112 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8vfq\" (UniqueName: \"kubernetes.io/projected/d4daa5a9-22b7-4859-a375-cb4dec19a7af-kube-api-access-z8vfq\") on node \"crc\" DevicePath \"\"" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.257123 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29ad589f-847e-44b2-9c6c-720c6ca1312d-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.257132 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqkjf\" (UniqueName: \"kubernetes.io/projected/29ad589f-847e-44b2-9c6c-720c6ca1312d-kube-api-access-jqkjf\") on node \"crc\" DevicePath \"\"" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.257140 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4daa5a9-22b7-4859-a375-cb4dec19a7af-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.409510 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2d7px"] Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.574907 5016 generic.go:334] "Generic (PLEG): container finished" podID="d3c23e60-9dde-4c84-859f-60fb9fa03683" containerID="7674b2c00eacbd52953aa84fc6e421052a38efe416f8c7d5539546391b77e41c" exitCode=0 Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.574984 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2swk6" event={"ID":"d3c23e60-9dde-4c84-859f-60fb9fa03683","Type":"ContainerDied","Data":"7674b2c00eacbd52953aa84fc6e421052a38efe416f8c7d5539546391b77e41c"} Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.574929 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2swk6" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.575102 5016 scope.go:117] "RemoveContainer" containerID="7674b2c00eacbd52953aa84fc6e421052a38efe416f8c7d5539546391b77e41c" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.575081 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2swk6" event={"ID":"d3c23e60-9dde-4c84-859f-60fb9fa03683","Type":"ContainerDied","Data":"1a2b94579971de93a5f579e7c3150cdb9036874597118cbc3f1342a58a1e3710"} Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.578129 5016 generic.go:334] "Generic (PLEG): container finished" podID="d4daa5a9-22b7-4859-a375-cb4dec19a7af" containerID="bb8f362ddab8702c78697e20a85231133687cf8cd858a2ca1076291e32836956" exitCode=0 Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.578199 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ddw2" event={"ID":"d4daa5a9-22b7-4859-a375-cb4dec19a7af","Type":"ContainerDied","Data":"bb8f362ddab8702c78697e20a85231133687cf8cd858a2ca1076291e32836956"} Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.578225 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4ddw2" event={"ID":"d4daa5a9-22b7-4859-a375-cb4dec19a7af","Type":"ContainerDied","Data":"f00fb3d2e432608f2349435dfb6fabc3141d271ff765c59a9ba2c8a16fadf515"} Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.578290 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4ddw2" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.580289 5016 generic.go:334] "Generic (PLEG): container finished" podID="29ad589f-847e-44b2-9c6c-720c6ca1312d" containerID="a368f60d9de19154b1bcc121704979fb94aefa1a9686e8b560603698f014992d" exitCode=0 Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.580330 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jn9bl" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.580415 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn9bl" event={"ID":"29ad589f-847e-44b2-9c6c-720c6ca1312d","Type":"ContainerDied","Data":"a368f60d9de19154b1bcc121704979fb94aefa1a9686e8b560603698f014992d"} Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.580462 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jn9bl" event={"ID":"29ad589f-847e-44b2-9c6c-720c6ca1312d","Type":"ContainerDied","Data":"9e58e042bb5295867eee9d707c198518fdcd0b27ce048ef800a05e293fe81e1d"} Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.586574 5016 generic.go:334] "Generic (PLEG): container finished" podID="add7f50e-e0bb-45cb-b76e-c3eec203832b" containerID="019e1e74d232bfd50b75c0b43ed7f4ad9e7fe299d99e30896addf8d0b9b0854e" exitCode=0 Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.586694 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fmcj" event={"ID":"add7f50e-e0bb-45cb-b76e-c3eec203832b","Type":"ContainerDied","Data":"019e1e74d232bfd50b75c0b43ed7f4ad9e7fe299d99e30896addf8d0b9b0854e"} Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.586713 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6fmcj" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.586739 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6fmcj" event={"ID":"add7f50e-e0bb-45cb-b76e-c3eec203832b","Type":"ContainerDied","Data":"e53e2ae8bcbb811dc75cc75850d64a7e8528a177f4aaa0a23b5bb7def4f23086"} Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.588068 5016 generic.go:334] "Generic (PLEG): container finished" podID="12dfb419-e03a-48b3-b448-225f83bd8de3" containerID="03752e5912ab56cac185da3955a38991080cc2e9f80aa4eced07a6bfa3ce2a03" exitCode=0 Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.588137 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5bblf" event={"ID":"12dfb419-e03a-48b3-b448-225f83bd8de3","Type":"ContainerDied","Data":"03752e5912ab56cac185da3955a38991080cc2e9f80aa4eced07a6bfa3ce2a03"} Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.588162 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5bblf" event={"ID":"12dfb419-e03a-48b3-b448-225f83bd8de3","Type":"ContainerDied","Data":"42a8dea694e8589b0d5a930d61dec7ec4a6b5e4807c2c4a31dc67cf026e58054"} Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.588208 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5bblf" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.591829 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2d7px" event={"ID":"511b8fec-a727-401a-bfe9-8201786f9bea","Type":"ContainerStarted","Data":"e927e877712b56cb9790fe8ffbe6701c51bfd1471d889c31e338df0557fd64c7"} Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.591901 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2d7px" event={"ID":"511b8fec-a727-401a-bfe9-8201786f9bea","Type":"ContainerStarted","Data":"cd299a28d3165e5f1a24f3d25f321e922bdd645c68000ba3ae8b1b65c0a848e3"} Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.594526 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2d7px" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.598188 5016 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-2d7px container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection refused" start-of-body= Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.598256 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2d7px" podUID="511b8fec-a727-401a-bfe9-8201786f9bea" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection refused" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.609039 5016 scope.go:117] "RemoveContainer" containerID="1d8788bfc16d640a4438d74660fa6ad878d9d2764800aa3deb11afede711ee50" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.619588 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-2d7px" 
Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.629527 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jn9bl"]
Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.633218 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jn9bl"]
Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.660682 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4ddw2"]
Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.665309 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4ddw2"]
Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.672561 5016 scope.go:117] "RemoveContainer" containerID="f79d0a91befc0ed25e1a2aab94ec7f3f932cbccc4c3c1ac7248bc4e67a650975"
Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.700041 5016 scope.go:117] "RemoveContainer" containerID="7674b2c00eacbd52953aa84fc6e421052a38efe416f8c7d5539546391b77e41c"
Oct 11 07:44:30 crc kubenswrapper[5016]: E1011 07:44:30.700502 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7674b2c00eacbd52953aa84fc6e421052a38efe416f8c7d5539546391b77e41c\": container with ID starting with 7674b2c00eacbd52953aa84fc6e421052a38efe416f8c7d5539546391b77e41c not found: ID does not exist" containerID="7674b2c00eacbd52953aa84fc6e421052a38efe416f8c7d5539546391b77e41c"
Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.701513 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7674b2c00eacbd52953aa84fc6e421052a38efe416f8c7d5539546391b77e41c"} err="failed to get container status \"7674b2c00eacbd52953aa84fc6e421052a38efe416f8c7d5539546391b77e41c\": rpc error: code = NotFound desc = could not find container \"7674b2c00eacbd52953aa84fc6e421052a38efe416f8c7d5539546391b77e41c\": container with ID starting with 7674b2c00eacbd52953aa84fc6e421052a38efe416f8c7d5539546391b77e41c not found: ID does not exist"
Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.701558 5016 scope.go:117] "RemoveContainer" containerID="1d8788bfc16d640a4438d74660fa6ad878d9d2764800aa3deb11afede711ee50"
Oct 11 07:44:30 crc kubenswrapper[5016]: E1011 07:44:30.703599 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d8788bfc16d640a4438d74660fa6ad878d9d2764800aa3deb11afede711ee50\": container with ID starting with 1d8788bfc16d640a4438d74660fa6ad878d9d2764800aa3deb11afede711ee50 not found: ID does not exist" containerID="1d8788bfc16d640a4438d74660fa6ad878d9d2764800aa3deb11afede711ee50"
Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.703702 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d8788bfc16d640a4438d74660fa6ad878d9d2764800aa3deb11afede711ee50"} err="failed to get container status \"1d8788bfc16d640a4438d74660fa6ad878d9d2764800aa3deb11afede711ee50\": rpc error: code = NotFound desc = could not find container \"1d8788bfc16d640a4438d74660fa6ad878d9d2764800aa3deb11afede711ee50\": container with ID starting with 1d8788bfc16d640a4438d74660fa6ad878d9d2764800aa3deb11afede711ee50 not found: ID does not exist"
Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.703747 5016 scope.go:117] "RemoveContainer" containerID="f79d0a91befc0ed25e1a2aab94ec7f3f932cbccc4c3c1ac7248bc4e67a650975"
Oct 11 07:44:30 crc kubenswrapper[5016]: E1011 07:44:30.705700 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f79d0a91befc0ed25e1a2aab94ec7f3f932cbccc4c3c1ac7248bc4e67a650975\": container with ID starting with f79d0a91befc0ed25e1a2aab94ec7f3f932cbccc4c3c1ac7248bc4e67a650975 not found: ID does not exist" containerID="f79d0a91befc0ed25e1a2aab94ec7f3f932cbccc4c3c1ac7248bc4e67a650975"
Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.705756 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f79d0a91befc0ed25e1a2aab94ec7f3f932cbccc4c3c1ac7248bc4e67a650975"} err="failed to get container status \"f79d0a91befc0ed25e1a2aab94ec7f3f932cbccc4c3c1ac7248bc4e67a650975\": rpc error: code = NotFound desc = could not find container \"f79d0a91befc0ed25e1a2aab94ec7f3f932cbccc4c3c1ac7248bc4e67a650975\": container with ID starting with f79d0a91befc0ed25e1a2aab94ec7f3f932cbccc4c3c1ac7248bc4e67a650975 not found: ID does not exist"
Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.705799 5016 scope.go:117] "RemoveContainer" containerID="bb8f362ddab8702c78697e20a85231133687cf8cd858a2ca1076291e32836956"
Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.706335 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2swk6"]
Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.715246 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2swk6"]
Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.720888 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6fmcj"]
Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.724322 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6fmcj"]
Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.730159 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5bblf"]
Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.732685 5016 scope.go:117] "RemoveContainer" containerID="85a1e81ace2db3d9db0add4f433050c109b203d139e2592d5a4202f7d2d66724"
Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.733048 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5bblf"]
Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.746967 5016 scope.go:117] "RemoveContainer" containerID="3946314fa8be3adfe2e0e49a68bbe8d4f50db7c05cdce8af13355145e7947be9"
Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.760878 5016 scope.go:117] "RemoveContainer" containerID="bb8f362ddab8702c78697e20a85231133687cf8cd858a2ca1076291e32836956"
Oct 11 07:44:30 crc kubenswrapper[5016]: E1011 07:44:30.761225 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb8f362ddab8702c78697e20a85231133687cf8cd858a2ca1076291e32836956\": container with ID starting with bb8f362ddab8702c78697e20a85231133687cf8cd858a2ca1076291e32836956 not found: ID does not exist" containerID="bb8f362ddab8702c78697e20a85231133687cf8cd858a2ca1076291e32836956"
Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.761256 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb8f362ddab8702c78697e20a85231133687cf8cd858a2ca1076291e32836956"} err="failed to get container status \"bb8f362ddab8702c78697e20a85231133687cf8cd858a2ca1076291e32836956\": rpc error: code = NotFound desc = could not find container \"bb8f362ddab8702c78697e20a85231133687cf8cd858a2ca1076291e32836956\": container with ID starting with bb8f362ddab8702c78697e20a85231133687cf8cd858a2ca1076291e32836956 not found: ID does not exist"
Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.761279 5016 scope.go:117] "RemoveContainer" containerID="85a1e81ace2db3d9db0add4f433050c109b203d139e2592d5a4202f7d2d66724"
Oct 11 07:44:30 crc kubenswrapper[5016]: E1011 07:44:30.761590 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85a1e81ace2db3d9db0add4f433050c109b203d139e2592d5a4202f7d2d66724\": container with ID starting with 85a1e81ace2db3d9db0add4f433050c109b203d139e2592d5a4202f7d2d66724 not found: ID does not exist" containerID="85a1e81ace2db3d9db0add4f433050c109b203d139e2592d5a4202f7d2d66724"
Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.761612 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85a1e81ace2db3d9db0add4f433050c109b203d139e2592d5a4202f7d2d66724"} err="failed to get container status \"85a1e81ace2db3d9db0add4f433050c109b203d139e2592d5a4202f7d2d66724\": rpc error: code = NotFound desc = could not find container \"85a1e81ace2db3d9db0add4f433050c109b203d139e2592d5a4202f7d2d66724\": container with ID starting with 85a1e81ace2db3d9db0add4f433050c109b203d139e2592d5a4202f7d2d66724 not found: ID does not exist"
Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.761628 5016 scope.go:117] "RemoveContainer" containerID="3946314fa8be3adfe2e0e49a68bbe8d4f50db7c05cdce8af13355145e7947be9"
Oct 11 07:44:30 crc kubenswrapper[5016]: E1011 07:44:30.761945 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3946314fa8be3adfe2e0e49a68bbe8d4f50db7c05cdce8af13355145e7947be9\": container with ID starting with 3946314fa8be3adfe2e0e49a68bbe8d4f50db7c05cdce8af13355145e7947be9 not found: ID does not exist" containerID="3946314fa8be3adfe2e0e49a68bbe8d4f50db7c05cdce8af13355145e7947be9"
Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.762002 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3946314fa8be3adfe2e0e49a68bbe8d4f50db7c05cdce8af13355145e7947be9"} err="failed to get container status \"3946314fa8be3adfe2e0e49a68bbe8d4f50db7c05cdce8af13355145e7947be9\": rpc error: code = NotFound desc = could not find container \"3946314fa8be3adfe2e0e49a68bbe8d4f50db7c05cdce8af13355145e7947be9\": container with ID starting with 3946314fa8be3adfe2e0e49a68bbe8d4f50db7c05cdce8af13355145e7947be9 not found: ID does not exist"
Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.762045 5016 scope.go:117] "RemoveContainer" containerID="a368f60d9de19154b1bcc121704979fb94aefa1a9686e8b560603698f014992d"
Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.777831 5016 scope.go:117] "RemoveContainer" containerID="c94d318a872d193e68e4f3a5ff0084cb52add58b8201b7c4fff5a2bf4af76653"
containerID="c94d318a872d193e68e4f3a5ff0084cb52add58b8201b7c4fff5a2bf4af76653" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.794065 5016 scope.go:117] "RemoveContainer" containerID="1f468741f7b0a0dd4d36d08e0983f95b8f6cde1a5ad3b231546bf1e6a42b57e6" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.811329 5016 scope.go:117] "RemoveContainer" containerID="a368f60d9de19154b1bcc121704979fb94aefa1a9686e8b560603698f014992d" Oct 11 07:44:30 crc kubenswrapper[5016]: E1011 07:44:30.813941 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a368f60d9de19154b1bcc121704979fb94aefa1a9686e8b560603698f014992d\": container with ID starting with a368f60d9de19154b1bcc121704979fb94aefa1a9686e8b560603698f014992d not found: ID does not exist" containerID="a368f60d9de19154b1bcc121704979fb94aefa1a9686e8b560603698f014992d" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.813992 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a368f60d9de19154b1bcc121704979fb94aefa1a9686e8b560603698f014992d"} err="failed to get container status \"a368f60d9de19154b1bcc121704979fb94aefa1a9686e8b560603698f014992d\": rpc error: code = NotFound desc = could not find container \"a368f60d9de19154b1bcc121704979fb94aefa1a9686e8b560603698f014992d\": container with ID starting with a368f60d9de19154b1bcc121704979fb94aefa1a9686e8b560603698f014992d not found: ID does not exist" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.814029 5016 scope.go:117] "RemoveContainer" containerID="c94d318a872d193e68e4f3a5ff0084cb52add58b8201b7c4fff5a2bf4af76653" Oct 11 07:44:30 crc kubenswrapper[5016]: E1011 07:44:30.814454 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c94d318a872d193e68e4f3a5ff0084cb52add58b8201b7c4fff5a2bf4af76653\": container with ID starting with c94d318a872d193e68e4f3a5ff0084cb52add58b8201b7c4fff5a2bf4af76653 not found: ID does not exist" containerID="c94d318a872d193e68e4f3a5ff0084cb52add58b8201b7c4fff5a2bf4af76653" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.814488 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c94d318a872d193e68e4f3a5ff0084cb52add58b8201b7c4fff5a2bf4af76653"} err="failed to get container status \"c94d318a872d193e68e4f3a5ff0084cb52add58b8201b7c4fff5a2bf4af76653\": rpc error: code = NotFound desc = could not find container \"c94d318a872d193e68e4f3a5ff0084cb52add58b8201b7c4fff5a2bf4af76653\": container with ID starting with c94d318a872d193e68e4f3a5ff0084cb52add58b8201b7c4fff5a2bf4af76653 not found: ID does not exist" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.814509 5016 scope.go:117] "RemoveContainer" containerID="1f468741f7b0a0dd4d36d08e0983f95b8f6cde1a5ad3b231546bf1e6a42b57e6" Oct 11 07:44:30 crc kubenswrapper[5016]: E1011 07:44:30.815055 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f468741f7b0a0dd4d36d08e0983f95b8f6cde1a5ad3b231546bf1e6a42b57e6\": container with ID starting with 1f468741f7b0a0dd4d36d08e0983f95b8f6cde1a5ad3b231546bf1e6a42b57e6 not found: ID does not exist" containerID="1f468741f7b0a0dd4d36d08e0983f95b8f6cde1a5ad3b231546bf1e6a42b57e6" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.815094 5016 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1f468741f7b0a0dd4d36d08e0983f95b8f6cde1a5ad3b231546bf1e6a42b57e6"} err="failed to get container status \"1f468741f7b0a0dd4d36d08e0983f95b8f6cde1a5ad3b231546bf1e6a42b57e6\": rpc error: code = NotFound desc = could not find container \"1f468741f7b0a0dd4d36d08e0983f95b8f6cde1a5ad3b231546bf1e6a42b57e6\": container with ID starting with 1f468741f7b0a0dd4d36d08e0983f95b8f6cde1a5ad3b231546bf1e6a42b57e6 not found: ID does not exist" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.815122 5016 scope.go:117] "RemoveContainer" containerID="019e1e74d232bfd50b75c0b43ed7f4ad9e7fe299d99e30896addf8d0b9b0854e" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.841241 5016 scope.go:117] "RemoveContainer" containerID="1af097086c359a5810eac277987d68bdf599bed04b25fd05c3ba5bb6249d1e65" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.856910 5016 scope.go:117] "RemoveContainer" containerID="af847191aad4271319b6dc824960ed6aefdd72abb0d3e0d473e2b5c8cca57755" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.872061 5016 scope.go:117] "RemoveContainer" containerID="019e1e74d232bfd50b75c0b43ed7f4ad9e7fe299d99e30896addf8d0b9b0854e" Oct 11 07:44:30 crc kubenswrapper[5016]: E1011 07:44:30.872497 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"019e1e74d232bfd50b75c0b43ed7f4ad9e7fe299d99e30896addf8d0b9b0854e\": container with ID starting with 019e1e74d232bfd50b75c0b43ed7f4ad9e7fe299d99e30896addf8d0b9b0854e not found: ID does not exist" containerID="019e1e74d232bfd50b75c0b43ed7f4ad9e7fe299d99e30896addf8d0b9b0854e" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.872524 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"019e1e74d232bfd50b75c0b43ed7f4ad9e7fe299d99e30896addf8d0b9b0854e"} err="failed to get container status \"019e1e74d232bfd50b75c0b43ed7f4ad9e7fe299d99e30896addf8d0b9b0854e\": rpc error: code = NotFound desc = could not find container \"019e1e74d232bfd50b75c0b43ed7f4ad9e7fe299d99e30896addf8d0b9b0854e\": container with ID starting with 019e1e74d232bfd50b75c0b43ed7f4ad9e7fe299d99e30896addf8d0b9b0854e not found: ID does not exist" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.872550 5016 scope.go:117] "RemoveContainer" containerID="1af097086c359a5810eac277987d68bdf599bed04b25fd05c3ba5bb6249d1e65" Oct 11 07:44:30 crc kubenswrapper[5016]: E1011 07:44:30.873087 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1af097086c359a5810eac277987d68bdf599bed04b25fd05c3ba5bb6249d1e65\": container with ID starting with 1af097086c359a5810eac277987d68bdf599bed04b25fd05c3ba5bb6249d1e65 not found: ID does not exist" containerID="1af097086c359a5810eac277987d68bdf599bed04b25fd05c3ba5bb6249d1e65" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.873119 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1af097086c359a5810eac277987d68bdf599bed04b25fd05c3ba5bb6249d1e65"} err="failed to get container status \"1af097086c359a5810eac277987d68bdf599bed04b25fd05c3ba5bb6249d1e65\": rpc error: code = NotFound desc = could not find container \"1af097086c359a5810eac277987d68bdf599bed04b25fd05c3ba5bb6249d1e65\": container with ID starting with 1af097086c359a5810eac277987d68bdf599bed04b25fd05c3ba5bb6249d1e65 not found: ID does not exist" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.873133 5016 
scope.go:117] "RemoveContainer" containerID="af847191aad4271319b6dc824960ed6aefdd72abb0d3e0d473e2b5c8cca57755" Oct 11 07:44:30 crc kubenswrapper[5016]: E1011 07:44:30.873418 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af847191aad4271319b6dc824960ed6aefdd72abb0d3e0d473e2b5c8cca57755\": container with ID starting with af847191aad4271319b6dc824960ed6aefdd72abb0d3e0d473e2b5c8cca57755 not found: ID does not exist" containerID="af847191aad4271319b6dc824960ed6aefdd72abb0d3e0d473e2b5c8cca57755" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.873459 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af847191aad4271319b6dc824960ed6aefdd72abb0d3e0d473e2b5c8cca57755"} err="failed to get container status \"af847191aad4271319b6dc824960ed6aefdd72abb0d3e0d473e2b5c8cca57755\": rpc error: code = NotFound desc = could not find container \"af847191aad4271319b6dc824960ed6aefdd72abb0d3e0d473e2b5c8cca57755\": container with ID starting with af847191aad4271319b6dc824960ed6aefdd72abb0d3e0d473e2b5c8cca57755 not found: ID does not exist" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.873478 5016 scope.go:117] "RemoveContainer" containerID="03752e5912ab56cac185da3955a38991080cc2e9f80aa4eced07a6bfa3ce2a03" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.899634 5016 scope.go:117] "RemoveContainer" containerID="03752e5912ab56cac185da3955a38991080cc2e9f80aa4eced07a6bfa3ce2a03" Oct 11 07:44:30 crc kubenswrapper[5016]: E1011 07:44:30.900081 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03752e5912ab56cac185da3955a38991080cc2e9f80aa4eced07a6bfa3ce2a03\": container with ID starting with 03752e5912ab56cac185da3955a38991080cc2e9f80aa4eced07a6bfa3ce2a03 not found: ID does not exist" containerID="03752e5912ab56cac185da3955a38991080cc2e9f80aa4eced07a6bfa3ce2a03" Oct 11 07:44:30 crc kubenswrapper[5016]: I1011 07:44:30.900111 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03752e5912ab56cac185da3955a38991080cc2e9f80aa4eced07a6bfa3ce2a03"} err="failed to get container status \"03752e5912ab56cac185da3955a38991080cc2e9f80aa4eced07a6bfa3ce2a03\": rpc error: code = NotFound desc = could not find container \"03752e5912ab56cac185da3955a38991080cc2e9f80aa4eced07a6bfa3ce2a03\": container with ID starting with 03752e5912ab56cac185da3955a38991080cc2e9f80aa4eced07a6bfa3ce2a03 not found: ID does not exist" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.140041 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12dfb419-e03a-48b3-b448-225f83bd8de3" path="/var/lib/kubelet/pods/12dfb419-e03a-48b3-b448-225f83bd8de3/volumes" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.140510 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29ad589f-847e-44b2-9c6c-720c6ca1312d" path="/var/lib/kubelet/pods/29ad589f-847e-44b2-9c6c-720c6ca1312d/volumes" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.141107 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="add7f50e-e0bb-45cb-b76e-c3eec203832b" path="/var/lib/kubelet/pods/add7f50e-e0bb-45cb-b76e-c3eec203832b/volumes" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.142134 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3c23e60-9dde-4c84-859f-60fb9fa03683" 
path="/var/lib/kubelet/pods/d3c23e60-9dde-4c84-859f-60fb9fa03683/volumes" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.142967 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4daa5a9-22b7-4859-a375-cb4dec19a7af" path="/var/lib/kubelet/pods/d4daa5a9-22b7-4859-a375-cb4dec19a7af/volumes" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.308769 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b6ccc"] Oct 11 07:44:31 crc kubenswrapper[5016]: E1011 07:44:31.308980 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3c23e60-9dde-4c84-859f-60fb9fa03683" containerName="registry-server" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.308995 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3c23e60-9dde-4c84-859f-60fb9fa03683" containerName="registry-server" Oct 11 07:44:31 crc kubenswrapper[5016]: E1011 07:44:31.309008 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4daa5a9-22b7-4859-a375-cb4dec19a7af" containerName="extract-utilities" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.309017 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4daa5a9-22b7-4859-a375-cb4dec19a7af" containerName="extract-utilities" Oct 11 07:44:31 crc kubenswrapper[5016]: E1011 07:44:31.309027 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29ad589f-847e-44b2-9c6c-720c6ca1312d" containerName="extract-utilities" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.309034 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ad589f-847e-44b2-9c6c-720c6ca1312d" containerName="extract-utilities" Oct 11 07:44:31 crc kubenswrapper[5016]: E1011 07:44:31.309056 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4daa5a9-22b7-4859-a375-cb4dec19a7af" containerName="registry-server" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.309064 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4daa5a9-22b7-4859-a375-cb4dec19a7af" containerName="registry-server" Oct 11 07:44:31 crc kubenswrapper[5016]: E1011 07:44:31.309079 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12dfb419-e03a-48b3-b448-225f83bd8de3" containerName="marketplace-operator" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.309087 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="12dfb419-e03a-48b3-b448-225f83bd8de3" containerName="marketplace-operator" Oct 11 07:44:31 crc kubenswrapper[5016]: E1011 07:44:31.309097 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3c23e60-9dde-4c84-859f-60fb9fa03683" containerName="extract-content" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.309104 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3c23e60-9dde-4c84-859f-60fb9fa03683" containerName="extract-content" Oct 11 07:44:31 crc kubenswrapper[5016]: E1011 07:44:31.309115 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29ad589f-847e-44b2-9c6c-720c6ca1312d" containerName="extract-content" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.309122 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ad589f-847e-44b2-9c6c-720c6ca1312d" containerName="extract-content" Oct 11 07:44:31 crc kubenswrapper[5016]: E1011 07:44:31.309132 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3c23e60-9dde-4c84-859f-60fb9fa03683" containerName="extract-utilities" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 
07:44:31.309139 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3c23e60-9dde-4c84-859f-60fb9fa03683" containerName="extract-utilities" Oct 11 07:44:31 crc kubenswrapper[5016]: E1011 07:44:31.309151 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="add7f50e-e0bb-45cb-b76e-c3eec203832b" containerName="extract-utilities" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.309158 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="add7f50e-e0bb-45cb-b76e-c3eec203832b" containerName="extract-utilities" Oct 11 07:44:31 crc kubenswrapper[5016]: E1011 07:44:31.309169 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="add7f50e-e0bb-45cb-b76e-c3eec203832b" containerName="extract-content" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.309177 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="add7f50e-e0bb-45cb-b76e-c3eec203832b" containerName="extract-content" Oct 11 07:44:31 crc kubenswrapper[5016]: E1011 07:44:31.309187 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="add7f50e-e0bb-45cb-b76e-c3eec203832b" containerName="registry-server" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.309195 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="add7f50e-e0bb-45cb-b76e-c3eec203832b" containerName="registry-server" Oct 11 07:44:31 crc kubenswrapper[5016]: E1011 07:44:31.309206 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4daa5a9-22b7-4859-a375-cb4dec19a7af" containerName="extract-content" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.309213 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4daa5a9-22b7-4859-a375-cb4dec19a7af" containerName="extract-content" Oct 11 07:44:31 crc kubenswrapper[5016]: E1011 07:44:31.309224 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29ad589f-847e-44b2-9c6c-720c6ca1312d" containerName="registry-server" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.309232 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ad589f-847e-44b2-9c6c-720c6ca1312d" containerName="registry-server" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.309333 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="12dfb419-e03a-48b3-b448-225f83bd8de3" containerName="marketplace-operator" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.309346 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3c23e60-9dde-4c84-859f-60fb9fa03683" containerName="registry-server" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.309358 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="add7f50e-e0bb-45cb-b76e-c3eec203832b" containerName="registry-server" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.309372 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4daa5a9-22b7-4859-a375-cb4dec19a7af" containerName="registry-server" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.309382 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="29ad589f-847e-44b2-9c6c-720c6ca1312d" containerName="registry-server" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.310260 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b6ccc" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.312087 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.319456 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b6ccc"] Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.396352 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6fzz\" (UniqueName: \"kubernetes.io/projected/bbca8383-9f95-40bc-be54-9954ad04c402-kube-api-access-p6fzz\") pod \"certified-operators-b6ccc\" (UID: \"bbca8383-9f95-40bc-be54-9954ad04c402\") " pod="openshift-marketplace/certified-operators-b6ccc" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.396437 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbca8383-9f95-40bc-be54-9954ad04c402-catalog-content\") pod \"certified-operators-b6ccc\" (UID: \"bbca8383-9f95-40bc-be54-9954ad04c402\") " pod="openshift-marketplace/certified-operators-b6ccc" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.396476 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbca8383-9f95-40bc-be54-9954ad04c402-utilities\") pod \"certified-operators-b6ccc\" (UID: \"bbca8383-9f95-40bc-be54-9954ad04c402\") " pod="openshift-marketplace/certified-operators-b6ccc" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.497321 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbca8383-9f95-40bc-be54-9954ad04c402-catalog-content\") pod \"certified-operators-b6ccc\" (UID: \"bbca8383-9f95-40bc-be54-9954ad04c402\") " pod="openshift-marketplace/certified-operators-b6ccc" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.497378 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbca8383-9f95-40bc-be54-9954ad04c402-utilities\") pod \"certified-operators-b6ccc\" (UID: \"bbca8383-9f95-40bc-be54-9954ad04c402\") " pod="openshift-marketplace/certified-operators-b6ccc" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.497433 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6fzz\" (UniqueName: \"kubernetes.io/projected/bbca8383-9f95-40bc-be54-9954ad04c402-kube-api-access-p6fzz\") pod \"certified-operators-b6ccc\" (UID: \"bbca8383-9f95-40bc-be54-9954ad04c402\") " pod="openshift-marketplace/certified-operators-b6ccc" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.498054 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbca8383-9f95-40bc-be54-9954ad04c402-catalog-content\") pod \"certified-operators-b6ccc\" (UID: \"bbca8383-9f95-40bc-be54-9954ad04c402\") " pod="openshift-marketplace/certified-operators-b6ccc" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.498135 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbca8383-9f95-40bc-be54-9954ad04c402-utilities\") pod \"certified-operators-b6ccc\" (UID: 
\"bbca8383-9f95-40bc-be54-9954ad04c402\") " pod="openshift-marketplace/certified-operators-b6ccc" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.514474 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6fzz\" (UniqueName: \"kubernetes.io/projected/bbca8383-9f95-40bc-be54-9954ad04c402-kube-api-access-p6fzz\") pod \"certified-operators-b6ccc\" (UID: \"bbca8383-9f95-40bc-be54-9954ad04c402\") " pod="openshift-marketplace/certified-operators-b6ccc" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.602585 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-2d7px" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.629079 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6ccc" Oct 11 07:44:31 crc kubenswrapper[5016]: I1011 07:44:31.805859 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b6ccc"] Oct 11 07:44:32 crc kubenswrapper[5016]: I1011 07:44:32.308001 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9kxx6"] Oct 11 07:44:32 crc kubenswrapper[5016]: I1011 07:44:32.309457 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9kxx6" Oct 11 07:44:32 crc kubenswrapper[5016]: I1011 07:44:32.311471 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 11 07:44:32 crc kubenswrapper[5016]: I1011 07:44:32.316905 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9kxx6"] Oct 11 07:44:32 crc kubenswrapper[5016]: I1011 07:44:32.408011 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97d69861-d719-49f6-92de-8cc49752f215-utilities\") pod \"redhat-marketplace-9kxx6\" (UID: \"97d69861-d719-49f6-92de-8cc49752f215\") " pod="openshift-marketplace/redhat-marketplace-9kxx6" Oct 11 07:44:32 crc kubenswrapper[5016]: I1011 07:44:32.408057 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97d69861-d719-49f6-92de-8cc49752f215-catalog-content\") pod \"redhat-marketplace-9kxx6\" (UID: \"97d69861-d719-49f6-92de-8cc49752f215\") " pod="openshift-marketplace/redhat-marketplace-9kxx6" Oct 11 07:44:32 crc kubenswrapper[5016]: I1011 07:44:32.408107 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqhd4\" (UniqueName: \"kubernetes.io/projected/97d69861-d719-49f6-92de-8cc49752f215-kube-api-access-pqhd4\") pod \"redhat-marketplace-9kxx6\" (UID: \"97d69861-d719-49f6-92de-8cc49752f215\") " pod="openshift-marketplace/redhat-marketplace-9kxx6" Oct 11 07:44:32 crc kubenswrapper[5016]: I1011 07:44:32.509085 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97d69861-d719-49f6-92de-8cc49752f215-utilities\") pod \"redhat-marketplace-9kxx6\" (UID: \"97d69861-d719-49f6-92de-8cc49752f215\") " pod="openshift-marketplace/redhat-marketplace-9kxx6" Oct 11 07:44:32 crc kubenswrapper[5016]: I1011 07:44:32.509147 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97d69861-d719-49f6-92de-8cc49752f215-catalog-content\") pod \"redhat-marketplace-9kxx6\" (UID: \"97d69861-d719-49f6-92de-8cc49752f215\") " pod="openshift-marketplace/redhat-marketplace-9kxx6" Oct 11 07:44:32 crc kubenswrapper[5016]: I1011 07:44:32.509238 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqhd4\" (UniqueName: \"kubernetes.io/projected/97d69861-d719-49f6-92de-8cc49752f215-kube-api-access-pqhd4\") pod \"redhat-marketplace-9kxx6\" (UID: \"97d69861-d719-49f6-92de-8cc49752f215\") " pod="openshift-marketplace/redhat-marketplace-9kxx6" Oct 11 07:44:32 crc kubenswrapper[5016]: I1011 07:44:32.509806 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97d69861-d719-49f6-92de-8cc49752f215-catalog-content\") pod \"redhat-marketplace-9kxx6\" (UID: \"97d69861-d719-49f6-92de-8cc49752f215\") " pod="openshift-marketplace/redhat-marketplace-9kxx6" Oct 11 07:44:32 crc kubenswrapper[5016]: I1011 07:44:32.510101 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97d69861-d719-49f6-92de-8cc49752f215-utilities\") pod \"redhat-marketplace-9kxx6\" (UID: \"97d69861-d719-49f6-92de-8cc49752f215\") " pod="openshift-marketplace/redhat-marketplace-9kxx6" Oct 11 07:44:32 crc kubenswrapper[5016]: I1011 07:44:32.529051 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqhd4\" (UniqueName: \"kubernetes.io/projected/97d69861-d719-49f6-92de-8cc49752f215-kube-api-access-pqhd4\") pod \"redhat-marketplace-9kxx6\" (UID: \"97d69861-d719-49f6-92de-8cc49752f215\") " pod="openshift-marketplace/redhat-marketplace-9kxx6" Oct 11 07:44:32 crc kubenswrapper[5016]: I1011 07:44:32.606898 5016 generic.go:334] "Generic (PLEG): container finished" podID="bbca8383-9f95-40bc-be54-9954ad04c402" containerID="928740b51c2c6b730656ba31ec8d3b614015336224a92efb6514ec744d1f7ac9" exitCode=0 Oct 11 07:44:32 crc kubenswrapper[5016]: I1011 07:44:32.607010 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6ccc" event={"ID":"bbca8383-9f95-40bc-be54-9954ad04c402","Type":"ContainerDied","Data":"928740b51c2c6b730656ba31ec8d3b614015336224a92efb6514ec744d1f7ac9"} Oct 11 07:44:32 crc kubenswrapper[5016]: I1011 07:44:32.607039 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6ccc" event={"ID":"bbca8383-9f95-40bc-be54-9954ad04c402","Type":"ContainerStarted","Data":"f44f25f2ee0d79c5b0189b6fde5203ac079fb58dcf2d02b0c8492c3593243394"} Oct 11 07:44:32 crc kubenswrapper[5016]: I1011 07:44:32.635290 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9kxx6" Oct 11 07:44:33 crc kubenswrapper[5016]: I1011 07:44:33.042264 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9kxx6"] Oct 11 07:44:33 crc kubenswrapper[5016]: I1011 07:44:33.613761 5016 generic.go:334] "Generic (PLEG): container finished" podID="97d69861-d719-49f6-92de-8cc49752f215" containerID="25a58cd30f81371f7c8fc3c299f8fa8740ffe019fcf2874b2146c630f49b7e52" exitCode=0 Oct 11 07:44:33 crc kubenswrapper[5016]: I1011 07:44:33.613884 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9kxx6" event={"ID":"97d69861-d719-49f6-92de-8cc49752f215","Type":"ContainerDied","Data":"25a58cd30f81371f7c8fc3c299f8fa8740ffe019fcf2874b2146c630f49b7e52"} Oct 11 07:44:33 crc kubenswrapper[5016]: I1011 07:44:33.614200 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9kxx6" event={"ID":"97d69861-d719-49f6-92de-8cc49752f215","Type":"ContainerStarted","Data":"183dd8569d660566bfa19813b2a10e20b1a4b3616698191514ec138065fd5df1"} Oct 11 07:44:33 crc kubenswrapper[5016]: I1011 07:44:33.620573 5016 generic.go:334] "Generic (PLEG): container finished" podID="bbca8383-9f95-40bc-be54-9954ad04c402" containerID="85bf99a992c999b7466ab591679f3b75049a1224d807bd85c6fb64d9f84e6559" exitCode=0 Oct 11 07:44:33 crc kubenswrapper[5016]: I1011 07:44:33.620606 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6ccc" event={"ID":"bbca8383-9f95-40bc-be54-9954ad04c402","Type":"ContainerDied","Data":"85bf99a992c999b7466ab591679f3b75049a1224d807bd85c6fb64d9f84e6559"} Oct 11 07:44:33 crc kubenswrapper[5016]: I1011 07:44:33.716340 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x7c7l"] Oct 11 07:44:33 crc kubenswrapper[5016]: I1011 07:44:33.721127 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x7c7l" Oct 11 07:44:33 crc kubenswrapper[5016]: I1011 07:44:33.723865 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 11 07:44:33 crc kubenswrapper[5016]: I1011 07:44:33.726729 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x7c7l"] Oct 11 07:44:33 crc kubenswrapper[5016]: I1011 07:44:33.824004 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd1f27a8-8756-4d42-9894-3e7fa9107b44-catalog-content\") pod \"redhat-operators-x7c7l\" (UID: \"cd1f27a8-8756-4d42-9894-3e7fa9107b44\") " pod="openshift-marketplace/redhat-operators-x7c7l" Oct 11 07:44:33 crc kubenswrapper[5016]: I1011 07:44:33.824073 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xxq5\" (UniqueName: \"kubernetes.io/projected/cd1f27a8-8756-4d42-9894-3e7fa9107b44-kube-api-access-4xxq5\") pod \"redhat-operators-x7c7l\" (UID: \"cd1f27a8-8756-4d42-9894-3e7fa9107b44\") " pod="openshift-marketplace/redhat-operators-x7c7l" Oct 11 07:44:33 crc kubenswrapper[5016]: I1011 07:44:33.824167 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd1f27a8-8756-4d42-9894-3e7fa9107b44-utilities\") pod \"redhat-operators-x7c7l\" (UID: \"cd1f27a8-8756-4d42-9894-3e7fa9107b44\") " pod="openshift-marketplace/redhat-operators-x7c7l" Oct 11 07:44:33 crc kubenswrapper[5016]: I1011 07:44:33.925358 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd1f27a8-8756-4d42-9894-3e7fa9107b44-utilities\") pod \"redhat-operators-x7c7l\" (UID: \"cd1f27a8-8756-4d42-9894-3e7fa9107b44\") " pod="openshift-marketplace/redhat-operators-x7c7l" Oct 11 07:44:33 crc kubenswrapper[5016]: I1011 07:44:33.925919 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd1f27a8-8756-4d42-9894-3e7fa9107b44-catalog-content\") pod \"redhat-operators-x7c7l\" (UID: \"cd1f27a8-8756-4d42-9894-3e7fa9107b44\") " pod="openshift-marketplace/redhat-operators-x7c7l" Oct 11 07:44:33 crc kubenswrapper[5016]: I1011 07:44:33.926009 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xxq5\" (UniqueName: \"kubernetes.io/projected/cd1f27a8-8756-4d42-9894-3e7fa9107b44-kube-api-access-4xxq5\") pod \"redhat-operators-x7c7l\" (UID: \"cd1f27a8-8756-4d42-9894-3e7fa9107b44\") " pod="openshift-marketplace/redhat-operators-x7c7l" Oct 11 07:44:33 crc kubenswrapper[5016]: I1011 07:44:33.926405 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd1f27a8-8756-4d42-9894-3e7fa9107b44-utilities\") pod \"redhat-operators-x7c7l\" (UID: \"cd1f27a8-8756-4d42-9894-3e7fa9107b44\") " pod="openshift-marketplace/redhat-operators-x7c7l" Oct 11 07:44:33 crc kubenswrapper[5016]: I1011 07:44:33.926516 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd1f27a8-8756-4d42-9894-3e7fa9107b44-catalog-content\") pod \"redhat-operators-x7c7l\" (UID: \"cd1f27a8-8756-4d42-9894-3e7fa9107b44\") " 
pod="openshift-marketplace/redhat-operators-x7c7l" Oct 11 07:44:33 crc kubenswrapper[5016]: I1011 07:44:33.948500 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xxq5\" (UniqueName: \"kubernetes.io/projected/cd1f27a8-8756-4d42-9894-3e7fa9107b44-kube-api-access-4xxq5\") pod \"redhat-operators-x7c7l\" (UID: \"cd1f27a8-8756-4d42-9894-3e7fa9107b44\") " pod="openshift-marketplace/redhat-operators-x7c7l" Oct 11 07:44:34 crc kubenswrapper[5016]: I1011 07:44:34.036680 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x7c7l" Oct 11 07:44:34 crc kubenswrapper[5016]: I1011 07:44:34.427048 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x7c7l"] Oct 11 07:44:34 crc kubenswrapper[5016]: W1011 07:44:34.437419 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd1f27a8_8756_4d42_9894_3e7fa9107b44.slice/crio-e45656aa358178faa8085500f1feb3bf9ebffeac88f45564b0d293bd7e54f5ae WatchSource:0}: Error finding container e45656aa358178faa8085500f1feb3bf9ebffeac88f45564b0d293bd7e54f5ae: Status 404 returned error can't find the container with id e45656aa358178faa8085500f1feb3bf9ebffeac88f45564b0d293bd7e54f5ae Oct 11 07:44:34 crc kubenswrapper[5016]: I1011 07:44:34.632540 5016 generic.go:334] "Generic (PLEG): container finished" podID="cd1f27a8-8756-4d42-9894-3e7fa9107b44" containerID="63b3bde706c6ea4b407b349163e183f552b59a931031bd174f0dd29d68f87e0a" exitCode=0 Oct 11 07:44:34 crc kubenswrapper[5016]: I1011 07:44:34.632713 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7c7l" event={"ID":"cd1f27a8-8756-4d42-9894-3e7fa9107b44","Type":"ContainerDied","Data":"63b3bde706c6ea4b407b349163e183f552b59a931031bd174f0dd29d68f87e0a"} Oct 11 07:44:34 crc kubenswrapper[5016]: I1011 07:44:34.632935 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7c7l" event={"ID":"cd1f27a8-8756-4d42-9894-3e7fa9107b44","Type":"ContainerStarted","Data":"e45656aa358178faa8085500f1feb3bf9ebffeac88f45564b0d293bd7e54f5ae"} Oct 11 07:44:34 crc kubenswrapper[5016]: I1011 07:44:34.637102 5016 generic.go:334] "Generic (PLEG): container finished" podID="97d69861-d719-49f6-92de-8cc49752f215" containerID="3fed74d749a89e392229063ebbfc9c94809f824fff89fdb7d0d2a6efcd415061" exitCode=0 Oct 11 07:44:34 crc kubenswrapper[5016]: I1011 07:44:34.637154 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9kxx6" event={"ID":"97d69861-d719-49f6-92de-8cc49752f215","Type":"ContainerDied","Data":"3fed74d749a89e392229063ebbfc9c94809f824fff89fdb7d0d2a6efcd415061"} Oct 11 07:44:34 crc kubenswrapper[5016]: I1011 07:44:34.641691 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6ccc" event={"ID":"bbca8383-9f95-40bc-be54-9954ad04c402","Type":"ContainerStarted","Data":"05e7461ba2848e0c1828db334e135549012c45eb541036826110e7b81062384b"} Oct 11 07:44:34 crc kubenswrapper[5016]: I1011 07:44:34.717030 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b6ccc" podStartSLOduration=2.118399026 podStartE2EDuration="3.717010633s" podCreationTimestamp="2025-10-11 07:44:31 +0000 UTC" firstStartedPulling="2025-10-11 07:44:32.60911812 +0000 UTC m=+260.509574106" 
lastFinishedPulling="2025-10-11 07:44:34.207729767 +0000 UTC m=+262.108185713" observedRunningTime="2025-10-11 07:44:34.690442218 +0000 UTC m=+262.590898174" watchObservedRunningTime="2025-10-11 07:44:34.717010633 +0000 UTC m=+262.617466579" Oct 11 07:44:34 crc kubenswrapper[5016]: I1011 07:44:34.721788 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rtz55"] Oct 11 07:44:34 crc kubenswrapper[5016]: I1011 07:44:34.722952 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rtz55" Oct 11 07:44:34 crc kubenswrapper[5016]: I1011 07:44:34.726276 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 11 07:44:34 crc kubenswrapper[5016]: I1011 07:44:34.729494 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rtz55"] Oct 11 07:44:34 crc kubenswrapper[5016]: I1011 07:44:34.843744 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wf69\" (UniqueName: \"kubernetes.io/projected/51f92ef1-71fb-40ad-a7d8-7a2c10420d14-kube-api-access-5wf69\") pod \"community-operators-rtz55\" (UID: \"51f92ef1-71fb-40ad-a7d8-7a2c10420d14\") " pod="openshift-marketplace/community-operators-rtz55" Oct 11 07:44:34 crc kubenswrapper[5016]: I1011 07:44:34.843989 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51f92ef1-71fb-40ad-a7d8-7a2c10420d14-utilities\") pod \"community-operators-rtz55\" (UID: \"51f92ef1-71fb-40ad-a7d8-7a2c10420d14\") " pod="openshift-marketplace/community-operators-rtz55" Oct 11 07:44:34 crc kubenswrapper[5016]: I1011 07:44:34.844094 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51f92ef1-71fb-40ad-a7d8-7a2c10420d14-catalog-content\") pod \"community-operators-rtz55\" (UID: \"51f92ef1-71fb-40ad-a7d8-7a2c10420d14\") " pod="openshift-marketplace/community-operators-rtz55" Oct 11 07:44:34 crc kubenswrapper[5016]: I1011 07:44:34.945306 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51f92ef1-71fb-40ad-a7d8-7a2c10420d14-catalog-content\") pod \"community-operators-rtz55\" (UID: \"51f92ef1-71fb-40ad-a7d8-7a2c10420d14\") " pod="openshift-marketplace/community-operators-rtz55" Oct 11 07:44:34 crc kubenswrapper[5016]: I1011 07:44:34.945382 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wf69\" (UniqueName: \"kubernetes.io/projected/51f92ef1-71fb-40ad-a7d8-7a2c10420d14-kube-api-access-5wf69\") pod \"community-operators-rtz55\" (UID: \"51f92ef1-71fb-40ad-a7d8-7a2c10420d14\") " pod="openshift-marketplace/community-operators-rtz55" Oct 11 07:44:34 crc kubenswrapper[5016]: I1011 07:44:34.945435 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51f92ef1-71fb-40ad-a7d8-7a2c10420d14-utilities\") pod \"community-operators-rtz55\" (UID: \"51f92ef1-71fb-40ad-a7d8-7a2c10420d14\") " pod="openshift-marketplace/community-operators-rtz55" Oct 11 07:44:34 crc kubenswrapper[5016]: I1011 07:44:34.945910 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51f92ef1-71fb-40ad-a7d8-7a2c10420d14-catalog-content\") pod \"community-operators-rtz55\" (UID: \"51f92ef1-71fb-40ad-a7d8-7a2c10420d14\") " pod="openshift-marketplace/community-operators-rtz55" Oct 11 07:44:34 crc kubenswrapper[5016]: I1011 07:44:34.945964 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51f92ef1-71fb-40ad-a7d8-7a2c10420d14-utilities\") pod \"community-operators-rtz55\" (UID: \"51f92ef1-71fb-40ad-a7d8-7a2c10420d14\") " pod="openshift-marketplace/community-operators-rtz55" Oct 11 07:44:34 crc kubenswrapper[5016]: I1011 07:44:34.962917 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wf69\" (UniqueName: \"kubernetes.io/projected/51f92ef1-71fb-40ad-a7d8-7a2c10420d14-kube-api-access-5wf69\") pod \"community-operators-rtz55\" (UID: \"51f92ef1-71fb-40ad-a7d8-7a2c10420d14\") " pod="openshift-marketplace/community-operators-rtz55" Oct 11 07:44:35 crc kubenswrapper[5016]: I1011 07:44:35.042813 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rtz55" Oct 11 07:44:35 crc kubenswrapper[5016]: I1011 07:44:35.421315 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rtz55"] Oct 11 07:44:35 crc kubenswrapper[5016]: W1011 07:44:35.431538 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51f92ef1_71fb_40ad_a7d8_7a2c10420d14.slice/crio-0b88fbe0c5d4b068ece1c921ed33c2377b432db70c26904def969bc1d6b10d57 WatchSource:0}: Error finding container 0b88fbe0c5d4b068ece1c921ed33c2377b432db70c26904def969bc1d6b10d57: Status 404 returned error can't find the container with id 0b88fbe0c5d4b068ece1c921ed33c2377b432db70c26904def969bc1d6b10d57 Oct 11 07:44:35 crc kubenswrapper[5016]: I1011 07:44:35.647954 5016 generic.go:334] "Generic (PLEG): container finished" podID="51f92ef1-71fb-40ad-a7d8-7a2c10420d14" containerID="855ee45edcd3538d4792e8adb85e4c868c82ca0f4c94045c02bde92bbcfffb9a" exitCode=0 Oct 11 07:44:35 crc kubenswrapper[5016]: I1011 07:44:35.648016 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtz55" event={"ID":"51f92ef1-71fb-40ad-a7d8-7a2c10420d14","Type":"ContainerDied","Data":"855ee45edcd3538d4792e8adb85e4c868c82ca0f4c94045c02bde92bbcfffb9a"} Oct 11 07:44:35 crc kubenswrapper[5016]: I1011 07:44:35.648041 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtz55" event={"ID":"51f92ef1-71fb-40ad-a7d8-7a2c10420d14","Type":"ContainerStarted","Data":"0b88fbe0c5d4b068ece1c921ed33c2377b432db70c26904def969bc1d6b10d57"} Oct 11 07:44:35 crc kubenswrapper[5016]: I1011 07:44:35.651330 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9kxx6" event={"ID":"97d69861-d719-49f6-92de-8cc49752f215","Type":"ContainerStarted","Data":"7a740a490feeacd142ebf6922de817ef83317864c590e8bea6aab78dbba54325"} Oct 11 07:44:35 crc kubenswrapper[5016]: I1011 07:44:35.685717 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9kxx6" podStartSLOduration=2.229536283 podStartE2EDuration="3.685696956s" podCreationTimestamp="2025-10-11 07:44:32 +0000 UTC" firstStartedPulling="2025-10-11 07:44:33.615669745 +0000 UTC m=+261.516125711" 
lastFinishedPulling="2025-10-11 07:44:35.071830448 +0000 UTC m=+262.972286384" observedRunningTime="2025-10-11 07:44:35.684143531 +0000 UTC m=+263.584599477" watchObservedRunningTime="2025-10-11 07:44:35.685696956 +0000 UTC m=+263.586152902" Oct 11 07:44:36 crc kubenswrapper[5016]: I1011 07:44:36.659400 5016 generic.go:334] "Generic (PLEG): container finished" podID="cd1f27a8-8756-4d42-9894-3e7fa9107b44" containerID="ee69654b2fcbc9cea1c937140aa37d5fbd55c7a0f94d4a54e7fd6204bd4e0396" exitCode=0 Oct 11 07:44:36 crc kubenswrapper[5016]: I1011 07:44:36.659496 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7c7l" event={"ID":"cd1f27a8-8756-4d42-9894-3e7fa9107b44","Type":"ContainerDied","Data":"ee69654b2fcbc9cea1c937140aa37d5fbd55c7a0f94d4a54e7fd6204bd4e0396"} Oct 11 07:44:36 crc kubenswrapper[5016]: I1011 07:44:36.661788 5016 generic.go:334] "Generic (PLEG): container finished" podID="51f92ef1-71fb-40ad-a7d8-7a2c10420d14" containerID="6aacae88c38d8a6cdd2a25dd0e1de012ae046299195372e2a3d02ceea3cb4e12" exitCode=0 Oct 11 07:44:36 crc kubenswrapper[5016]: I1011 07:44:36.661871 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtz55" event={"ID":"51f92ef1-71fb-40ad-a7d8-7a2c10420d14","Type":"ContainerDied","Data":"6aacae88c38d8a6cdd2a25dd0e1de012ae046299195372e2a3d02ceea3cb4e12"} Oct 11 07:44:38 crc kubenswrapper[5016]: I1011 07:44:38.677103 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7c7l" event={"ID":"cd1f27a8-8756-4d42-9894-3e7fa9107b44","Type":"ContainerStarted","Data":"acb0dc23b8ab8cb7a74e151bdda4829e012687b64840b65d81bb59f884700674"} Oct 11 07:44:38 crc kubenswrapper[5016]: I1011 07:44:38.681598 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rtz55" event={"ID":"51f92ef1-71fb-40ad-a7d8-7a2c10420d14","Type":"ContainerStarted","Data":"92984128446fa026fff0a81198bb55dfca175efd88c25238138d596d6375fd50"} Oct 11 07:44:38 crc kubenswrapper[5016]: I1011 07:44:38.695251 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x7c7l" podStartSLOduration=3.196235866 podStartE2EDuration="5.695229717s" podCreationTimestamp="2025-10-11 07:44:33 +0000 UTC" firstStartedPulling="2025-10-11 07:44:34.634497897 +0000 UTC m=+262.534953843" lastFinishedPulling="2025-10-11 07:44:37.133491758 +0000 UTC m=+265.033947694" observedRunningTime="2025-10-11 07:44:38.69321422 +0000 UTC m=+266.593670186" watchObservedRunningTime="2025-10-11 07:44:38.695229717 +0000 UTC m=+266.595685673" Oct 11 07:44:38 crc kubenswrapper[5016]: I1011 07:44:38.712116 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rtz55" podStartSLOduration=3.28379727 podStartE2EDuration="4.712097417s" podCreationTimestamp="2025-10-11 07:44:34 +0000 UTC" firstStartedPulling="2025-10-11 07:44:35.649697953 +0000 UTC m=+263.550153899" lastFinishedPulling="2025-10-11 07:44:37.0779981 +0000 UTC m=+264.978454046" observedRunningTime="2025-10-11 07:44:38.710685396 +0000 UTC m=+266.611141352" watchObservedRunningTime="2025-10-11 07:44:38.712097417 +0000 UTC m=+266.612553363" Oct 11 07:44:41 crc kubenswrapper[5016]: I1011 07:44:41.630192 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b6ccc" Oct 11 07:44:41 crc kubenswrapper[5016]: I1011 07:44:41.630448 5016 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b6ccc" Oct 11 07:44:41 crc kubenswrapper[5016]: I1011 07:44:41.671563 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b6ccc" Oct 11 07:44:41 crc kubenswrapper[5016]: I1011 07:44:41.732131 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b6ccc" Oct 11 07:44:42 crc kubenswrapper[5016]: I1011 07:44:42.636323 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9kxx6" Oct 11 07:44:42 crc kubenswrapper[5016]: I1011 07:44:42.636680 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9kxx6" Oct 11 07:44:42 crc kubenswrapper[5016]: I1011 07:44:42.695584 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9kxx6" Oct 11 07:44:42 crc kubenswrapper[5016]: I1011 07:44:42.745712 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9kxx6" Oct 11 07:44:44 crc kubenswrapper[5016]: I1011 07:44:44.037037 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x7c7l" Oct 11 07:44:44 crc kubenswrapper[5016]: I1011 07:44:44.037116 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x7c7l" Oct 11 07:44:44 crc kubenswrapper[5016]: I1011 07:44:44.076325 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x7c7l" Oct 11 07:44:44 crc kubenswrapper[5016]: I1011 07:44:44.750715 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x7c7l" Oct 11 07:44:45 crc kubenswrapper[5016]: I1011 07:44:45.043984 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rtz55" Oct 11 07:44:45 crc kubenswrapper[5016]: I1011 07:44:45.044040 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rtz55" Oct 11 07:44:45 crc kubenswrapper[5016]: I1011 07:44:45.105547 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rtz55" Oct 11 07:44:45 crc kubenswrapper[5016]: I1011 07:44:45.754399 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rtz55" Oct 11 07:45:00 crc kubenswrapper[5016]: I1011 07:45:00.146740 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336145-7wvsc"] Oct 11 07:45:00 crc kubenswrapper[5016]: I1011 07:45:00.149064 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336145-7wvsc" Oct 11 07:45:00 crc kubenswrapper[5016]: I1011 07:45:00.152346 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 11 07:45:00 crc kubenswrapper[5016]: I1011 07:45:00.155693 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 11 07:45:00 crc kubenswrapper[5016]: I1011 07:45:00.155992 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336145-7wvsc"] Oct 11 07:45:00 crc kubenswrapper[5016]: I1011 07:45:00.281488 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e89eee5-1535-47d4-bd90-c25541ec3e21-config-volume\") pod \"collect-profiles-29336145-7wvsc\" (UID: \"9e89eee5-1535-47d4-bd90-c25541ec3e21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336145-7wvsc" Oct 11 07:45:00 crc kubenswrapper[5016]: I1011 07:45:00.281564 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e89eee5-1535-47d4-bd90-c25541ec3e21-secret-volume\") pod \"collect-profiles-29336145-7wvsc\" (UID: \"9e89eee5-1535-47d4-bd90-c25541ec3e21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336145-7wvsc" Oct 11 07:45:00 crc kubenswrapper[5016]: I1011 07:45:00.281622 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbwfm\" (UniqueName: \"kubernetes.io/projected/9e89eee5-1535-47d4-bd90-c25541ec3e21-kube-api-access-sbwfm\") pod \"collect-profiles-29336145-7wvsc\" (UID: \"9e89eee5-1535-47d4-bd90-c25541ec3e21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336145-7wvsc" Oct 11 07:45:00 crc kubenswrapper[5016]: I1011 07:45:00.383118 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e89eee5-1535-47d4-bd90-c25541ec3e21-config-volume\") pod \"collect-profiles-29336145-7wvsc\" (UID: \"9e89eee5-1535-47d4-bd90-c25541ec3e21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336145-7wvsc" Oct 11 07:45:00 crc kubenswrapper[5016]: I1011 07:45:00.383199 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e89eee5-1535-47d4-bd90-c25541ec3e21-secret-volume\") pod \"collect-profiles-29336145-7wvsc\" (UID: \"9e89eee5-1535-47d4-bd90-c25541ec3e21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336145-7wvsc" Oct 11 07:45:00 crc kubenswrapper[5016]: I1011 07:45:00.383231 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbwfm\" (UniqueName: \"kubernetes.io/projected/9e89eee5-1535-47d4-bd90-c25541ec3e21-kube-api-access-sbwfm\") pod \"collect-profiles-29336145-7wvsc\" (UID: \"9e89eee5-1535-47d4-bd90-c25541ec3e21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336145-7wvsc" Oct 11 07:45:00 crc kubenswrapper[5016]: I1011 07:45:00.384040 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e89eee5-1535-47d4-bd90-c25541ec3e21-config-volume\") pod 
\"collect-profiles-29336145-7wvsc\" (UID: \"9e89eee5-1535-47d4-bd90-c25541ec3e21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336145-7wvsc" Oct 11 07:45:00 crc kubenswrapper[5016]: I1011 07:45:00.388734 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e89eee5-1535-47d4-bd90-c25541ec3e21-secret-volume\") pod \"collect-profiles-29336145-7wvsc\" (UID: \"9e89eee5-1535-47d4-bd90-c25541ec3e21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336145-7wvsc" Oct 11 07:45:00 crc kubenswrapper[5016]: I1011 07:45:00.398009 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbwfm\" (UniqueName: \"kubernetes.io/projected/9e89eee5-1535-47d4-bd90-c25541ec3e21-kube-api-access-sbwfm\") pod \"collect-profiles-29336145-7wvsc\" (UID: \"9e89eee5-1535-47d4-bd90-c25541ec3e21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336145-7wvsc" Oct 11 07:45:00 crc kubenswrapper[5016]: I1011 07:45:00.482445 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336145-7wvsc" Oct 11 07:45:00 crc kubenswrapper[5016]: I1011 07:45:00.873301 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336145-7wvsc"] Oct 11 07:45:01 crc kubenswrapper[5016]: I1011 07:45:01.814908 5016 generic.go:334] "Generic (PLEG): container finished" podID="9e89eee5-1535-47d4-bd90-c25541ec3e21" containerID="41367f6de4591891fbad4112a9fb0a1cc57dfd25e0604cc6703447b16f28d65b" exitCode=0 Oct 11 07:45:01 crc kubenswrapper[5016]: I1011 07:45:01.815000 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336145-7wvsc" event={"ID":"9e89eee5-1535-47d4-bd90-c25541ec3e21","Type":"ContainerDied","Data":"41367f6de4591891fbad4112a9fb0a1cc57dfd25e0604cc6703447b16f28d65b"} Oct 11 07:45:01 crc kubenswrapper[5016]: I1011 07:45:01.815373 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336145-7wvsc" event={"ID":"9e89eee5-1535-47d4-bd90-c25541ec3e21","Type":"ContainerStarted","Data":"507ca03e7400eff6246bec1599155c22494c99039b9a4689117273b6678299dc"} Oct 11 07:45:03 crc kubenswrapper[5016]: I1011 07:45:03.065268 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336145-7wvsc" Oct 11 07:45:03 crc kubenswrapper[5016]: I1011 07:45:03.218678 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e89eee5-1535-47d4-bd90-c25541ec3e21-config-volume\") pod \"9e89eee5-1535-47d4-bd90-c25541ec3e21\" (UID: \"9e89eee5-1535-47d4-bd90-c25541ec3e21\") " Oct 11 07:45:03 crc kubenswrapper[5016]: I1011 07:45:03.219090 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e89eee5-1535-47d4-bd90-c25541ec3e21-secret-volume\") pod \"9e89eee5-1535-47d4-bd90-c25541ec3e21\" (UID: \"9e89eee5-1535-47d4-bd90-c25541ec3e21\") " Oct 11 07:45:03 crc kubenswrapper[5016]: I1011 07:45:03.219148 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbwfm\" (UniqueName: \"kubernetes.io/projected/9e89eee5-1535-47d4-bd90-c25541ec3e21-kube-api-access-sbwfm\") pod \"9e89eee5-1535-47d4-bd90-c25541ec3e21\" (UID: \"9e89eee5-1535-47d4-bd90-c25541ec3e21\") " Oct 11 07:45:03 crc kubenswrapper[5016]: I1011 07:45:03.219549 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e89eee5-1535-47d4-bd90-c25541ec3e21-config-volume" (OuterVolumeSpecName: "config-volume") pod "9e89eee5-1535-47d4-bd90-c25541ec3e21" (UID: "9e89eee5-1535-47d4-bd90-c25541ec3e21"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:45:03 crc kubenswrapper[5016]: I1011 07:45:03.227246 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e89eee5-1535-47d4-bd90-c25541ec3e21-kube-api-access-sbwfm" (OuterVolumeSpecName: "kube-api-access-sbwfm") pod "9e89eee5-1535-47d4-bd90-c25541ec3e21" (UID: "9e89eee5-1535-47d4-bd90-c25541ec3e21"). InnerVolumeSpecName "kube-api-access-sbwfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:45:03 crc kubenswrapper[5016]: I1011 07:45:03.228255 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e89eee5-1535-47d4-bd90-c25541ec3e21-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9e89eee5-1535-47d4-bd90-c25541ec3e21" (UID: "9e89eee5-1535-47d4-bd90-c25541ec3e21"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:45:03 crc kubenswrapper[5016]: I1011 07:45:03.321083 5016 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e89eee5-1535-47d4-bd90-c25541ec3e21-config-volume\") on node \"crc\" DevicePath \"\"" Oct 11 07:45:03 crc kubenswrapper[5016]: I1011 07:45:03.321133 5016 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9e89eee5-1535-47d4-bd90-c25541ec3e21-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 11 07:45:03 crc kubenswrapper[5016]: I1011 07:45:03.321153 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbwfm\" (UniqueName: \"kubernetes.io/projected/9e89eee5-1535-47d4-bd90-c25541ec3e21-kube-api-access-sbwfm\") on node \"crc\" DevicePath \"\"" Oct 11 07:45:03 crc kubenswrapper[5016]: I1011 07:45:03.832709 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336145-7wvsc" event={"ID":"9e89eee5-1535-47d4-bd90-c25541ec3e21","Type":"ContainerDied","Data":"507ca03e7400eff6246bec1599155c22494c99039b9a4689117273b6678299dc"} Oct 11 07:45:03 crc kubenswrapper[5016]: I1011 07:45:03.832777 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="507ca03e7400eff6246bec1599155c22494c99039b9a4689117273b6678299dc" Oct 11 07:45:03 crc kubenswrapper[5016]: I1011 07:45:03.832794 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336145-7wvsc" Oct 11 07:45:37 crc kubenswrapper[5016]: I1011 07:45:37.122860 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 07:45:37 crc kubenswrapper[5016]: I1011 07:45:37.123631 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 07:46:07 crc kubenswrapper[5016]: I1011 07:46:07.122504 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 07:46:07 crc kubenswrapper[5016]: I1011 07:46:07.124021 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 07:46:37 crc kubenswrapper[5016]: I1011 07:46:37.122859 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 07:46:37 crc kubenswrapper[5016]: I1011 07:46:37.123545 
5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 07:46:37 crc kubenswrapper[5016]: I1011 07:46:37.123608 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 07:46:37 crc kubenswrapper[5016]: I1011 07:46:37.124332 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"461462f8d01f467988ce15691a5cd28af5322080f1c3158032b8c6e1ea64bfd3"} pod="openshift-machine-config-operator/machine-config-daemon-49bvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 11 07:46:37 crc kubenswrapper[5016]: I1011 07:46:37.124418 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" containerID="cri-o://461462f8d01f467988ce15691a5cd28af5322080f1c3158032b8c6e1ea64bfd3" gracePeriod=600 Oct 11 07:46:37 crc kubenswrapper[5016]: I1011 07:46:37.402399 5016 generic.go:334] "Generic (PLEG): container finished" podID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerID="461462f8d01f467988ce15691a5cd28af5322080f1c3158032b8c6e1ea64bfd3" exitCode=0 Oct 11 07:46:37 crc kubenswrapper[5016]: I1011 07:46:37.403279 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerDied","Data":"461462f8d01f467988ce15691a5cd28af5322080f1c3158032b8c6e1ea64bfd3"} Oct 11 07:46:37 crc kubenswrapper[5016]: I1011 07:46:37.403457 5016 scope.go:117] "RemoveContainer" containerID="a12bc03a974fff34817a9f53cb0094da052be55f9b2391fe0f62ec98df12aa6a" Oct 11 07:46:38 crc kubenswrapper[5016]: I1011 07:46:38.409428 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"0bb6e95efc1267f312ef77f8e915572b9364b3b7288f25fafcc7853b98141761"} Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.178614 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4ppr9"] Oct 11 07:47:19 crc kubenswrapper[5016]: E1011 07:47:19.179343 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e89eee5-1535-47d4-bd90-c25541ec3e21" containerName="collect-profiles" Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.179417 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e89eee5-1535-47d4-bd90-c25541ec3e21" containerName="collect-profiles" Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.179509 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e89eee5-1535-47d4-bd90-c25541ec3e21" containerName="collect-profiles" Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.179918 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.195578 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4ppr9"] Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.285135 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-4ppr9\" (UID: \"b18db153-ff84-4ff6-8852-3fba2f64ec4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.285228 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b18db153-ff84-4ff6-8852-3fba2f64ec4c-trusted-ca\") pod \"image-registry-66df7c8f76-4ppr9\" (UID: \"b18db153-ff84-4ff6-8852-3fba2f64ec4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.285245 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b18db153-ff84-4ff6-8852-3fba2f64ec4c-registry-tls\") pod \"image-registry-66df7c8f76-4ppr9\" (UID: \"b18db153-ff84-4ff6-8852-3fba2f64ec4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.285264 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b18db153-ff84-4ff6-8852-3fba2f64ec4c-registry-certificates\") pod \"image-registry-66df7c8f76-4ppr9\" (UID: \"b18db153-ff84-4ff6-8852-3fba2f64ec4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.285320 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b18db153-ff84-4ff6-8852-3fba2f64ec4c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4ppr9\" (UID: \"b18db153-ff84-4ff6-8852-3fba2f64ec4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.285344 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6ddt\" (UniqueName: \"kubernetes.io/projected/b18db153-ff84-4ff6-8852-3fba2f64ec4c-kube-api-access-l6ddt\") pod \"image-registry-66df7c8f76-4ppr9\" (UID: \"b18db153-ff84-4ff6-8852-3fba2f64ec4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.285362 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b18db153-ff84-4ff6-8852-3fba2f64ec4c-bound-sa-token\") pod \"image-registry-66df7c8f76-4ppr9\" (UID: \"b18db153-ff84-4ff6-8852-3fba2f64ec4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.285380 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/b18db153-ff84-4ff6-8852-3fba2f64ec4c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4ppr9\" (UID: \"b18db153-ff84-4ff6-8852-3fba2f64ec4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.305460 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-4ppr9\" (UID: \"b18db153-ff84-4ff6-8852-3fba2f64ec4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.386963 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b18db153-ff84-4ff6-8852-3fba2f64ec4c-trusted-ca\") pod \"image-registry-66df7c8f76-4ppr9\" (UID: \"b18db153-ff84-4ff6-8852-3fba2f64ec4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.387008 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b18db153-ff84-4ff6-8852-3fba2f64ec4c-registry-tls\") pod \"image-registry-66df7c8f76-4ppr9\" (UID: \"b18db153-ff84-4ff6-8852-3fba2f64ec4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.387027 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b18db153-ff84-4ff6-8852-3fba2f64ec4c-registry-certificates\") pod \"image-registry-66df7c8f76-4ppr9\" (UID: \"b18db153-ff84-4ff6-8852-3fba2f64ec4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.387060 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b18db153-ff84-4ff6-8852-3fba2f64ec4c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4ppr9\" (UID: \"b18db153-ff84-4ff6-8852-3fba2f64ec4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.387077 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6ddt\" (UniqueName: \"kubernetes.io/projected/b18db153-ff84-4ff6-8852-3fba2f64ec4c-kube-api-access-l6ddt\") pod \"image-registry-66df7c8f76-4ppr9\" (UID: \"b18db153-ff84-4ff6-8852-3fba2f64ec4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.387097 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b18db153-ff84-4ff6-8852-3fba2f64ec4c-bound-sa-token\") pod \"image-registry-66df7c8f76-4ppr9\" (UID: \"b18db153-ff84-4ff6-8852-3fba2f64ec4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.387113 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b18db153-ff84-4ff6-8852-3fba2f64ec4c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4ppr9\" (UID: \"b18db153-ff84-4ff6-8852-3fba2f64ec4c\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.387707 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b18db153-ff84-4ff6-8852-3fba2f64ec4c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4ppr9\" (UID: \"b18db153-ff84-4ff6-8852-3fba2f64ec4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.388524 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b18db153-ff84-4ff6-8852-3fba2f64ec4c-trusted-ca\") pod \"image-registry-66df7c8f76-4ppr9\" (UID: \"b18db153-ff84-4ff6-8852-3fba2f64ec4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.388920 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b18db153-ff84-4ff6-8852-3fba2f64ec4c-registry-certificates\") pod \"image-registry-66df7c8f76-4ppr9\" (UID: \"b18db153-ff84-4ff6-8852-3fba2f64ec4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.392539 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b18db153-ff84-4ff6-8852-3fba2f64ec4c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4ppr9\" (UID: \"b18db153-ff84-4ff6-8852-3fba2f64ec4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.394082 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b18db153-ff84-4ff6-8852-3fba2f64ec4c-registry-tls\") pod \"image-registry-66df7c8f76-4ppr9\" (UID: \"b18db153-ff84-4ff6-8852-3fba2f64ec4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.409166 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b18db153-ff84-4ff6-8852-3fba2f64ec4c-bound-sa-token\") pod \"image-registry-66df7c8f76-4ppr9\" (UID: \"b18db153-ff84-4ff6-8852-3fba2f64ec4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.409561 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6ddt\" (UniqueName: \"kubernetes.io/projected/b18db153-ff84-4ff6-8852-3fba2f64ec4c-kube-api-access-l6ddt\") pod \"image-registry-66df7c8f76-4ppr9\" (UID: \"b18db153-ff84-4ff6-8852-3fba2f64ec4c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.494132 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.686143 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4ppr9"] Oct 11 07:47:19 crc kubenswrapper[5016]: I1011 07:47:19.707636 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" event={"ID":"b18db153-ff84-4ff6-8852-3fba2f64ec4c","Type":"ContainerStarted","Data":"473472751d280e1c27afce481efe2daf0995433352b2a94791eba0d450bfb830"} Oct 11 07:47:20 crc kubenswrapper[5016]: I1011 07:47:20.715699 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" event={"ID":"b18db153-ff84-4ff6-8852-3fba2f64ec4c","Type":"ContainerStarted","Data":"8fe751810db4d365a9a3f910f5f80398d95b347947793069427d6a8cb45168d3"} Oct 11 07:47:20 crc kubenswrapper[5016]: I1011 07:47:20.715882 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" Oct 11 07:47:20 crc kubenswrapper[5016]: I1011 07:47:20.735361 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" podStartSLOduration=1.735335227 podStartE2EDuration="1.735335227s" podCreationTimestamp="2025-10-11 07:47:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:47:20.731462746 +0000 UTC m=+428.631918722" watchObservedRunningTime="2025-10-11 07:47:20.735335227 +0000 UTC m=+428.635791213" Oct 11 07:47:39 crc kubenswrapper[5016]: I1011 07:47:39.505925 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-4ppr9" Oct 11 07:47:39 crc kubenswrapper[5016]: I1011 07:47:39.582607 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sq9kp"] Oct 11 07:48:04 crc kubenswrapper[5016]: I1011 07:48:04.620281 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" podUID="b6fae0ac-5622-48ee-9a1a-3997ee8b57aa" containerName="registry" containerID="cri-o://8f4e5f560a23bf4064822719bdc868a129a16fc9e7a5504bebfd59362b9ee387" gracePeriod=30 Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.014267 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.023429 5016 generic.go:334] "Generic (PLEG): container finished" podID="b6fae0ac-5622-48ee-9a1a-3997ee8b57aa" containerID="8f4e5f560a23bf4064822719bdc868a129a16fc9e7a5504bebfd59362b9ee387" exitCode=0 Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.023491 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" event={"ID":"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa","Type":"ContainerDied","Data":"8f4e5f560a23bf4064822719bdc868a129a16fc9e7a5504bebfd59362b9ee387"} Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.023498 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.023539 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sq9kp" event={"ID":"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa","Type":"ContainerDied","Data":"80e789724b857539db35566ea6f10ce947b54ae13c6a4c5a912368a63f9fdac5"} Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.023561 5016 scope.go:117] "RemoveContainer" containerID="8f4e5f560a23bf4064822719bdc868a129a16fc9e7a5504bebfd59362b9ee387" Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.050688 5016 scope.go:117] "RemoveContainer" containerID="8f4e5f560a23bf4064822719bdc868a129a16fc9e7a5504bebfd59362b9ee387" Oct 11 07:48:05 crc kubenswrapper[5016]: E1011 07:48:05.051097 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f4e5f560a23bf4064822719bdc868a129a16fc9e7a5504bebfd59362b9ee387\": container with ID starting with 8f4e5f560a23bf4064822719bdc868a129a16fc9e7a5504bebfd59362b9ee387 not found: ID does not exist" containerID="8f4e5f560a23bf4064822719bdc868a129a16fc9e7a5504bebfd59362b9ee387" Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.051142 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f4e5f560a23bf4064822719bdc868a129a16fc9e7a5504bebfd59362b9ee387"} err="failed to get container status \"8f4e5f560a23bf4064822719bdc868a129a16fc9e7a5504bebfd59362b9ee387\": rpc error: code = NotFound desc = could not find container \"8f4e5f560a23bf4064822719bdc868a129a16fc9e7a5504bebfd59362b9ee387\": container with ID starting with 8f4e5f560a23bf4064822719bdc868a129a16fc9e7a5504bebfd59362b9ee387 not found: ID does not exist" Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.096507 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.096553 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-registry-tls\") pod \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.096602 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-trusted-ca\") pod \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.096625 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qsrl\" (UniqueName: \"kubernetes.io/projected/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-kube-api-access-8qsrl\") pod \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.097518 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod 
"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.098159 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-bound-sa-token\") pod \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.098630 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-registry-certificates\") pod \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.099160 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.099223 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-installation-pull-secrets\") pod \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.099548 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-ca-trust-extracted\") pod \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\" (UID: \"b6fae0ac-5622-48ee-9a1a-3997ee8b57aa\") " Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.100005 5016 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.100024 5016 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.103206 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.105861 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.106060 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-kube-api-access-8qsrl" (OuterVolumeSpecName: "kube-api-access-8qsrl") pod "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa"). InnerVolumeSpecName "kube-api-access-8qsrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.106481 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.106666 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.119044 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa" (UID: "b6fae0ac-5622-48ee-9a1a-3997ee8b57aa"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.201470 5016 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.201879 5016 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.201977 5016 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.202092 5016 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.202166 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qsrl\" (UniqueName: \"kubernetes.io/projected/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa-kube-api-access-8qsrl\") on node \"crc\" DevicePath \"\"" Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.341280 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sq9kp"] Oct 11 07:48:05 crc kubenswrapper[5016]: I1011 07:48:05.344637 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sq9kp"] Oct 11 07:48:07 crc kubenswrapper[5016]: I1011 07:48:07.142242 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6fae0ac-5622-48ee-9a1a-3997ee8b57aa" path="/var/lib/kubelet/pods/b6fae0ac-5622-48ee-9a1a-3997ee8b57aa/volumes" Oct 11 07:48:37 crc kubenswrapper[5016]: I1011 07:48:37.121948 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 07:48:37 crc kubenswrapper[5016]: I1011 07:48:37.122599 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 07:49:07 crc kubenswrapper[5016]: I1011 07:49:07.122489 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 07:49:07 crc kubenswrapper[5016]: I1011 07:49:07.123864 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 
07:49:37 crc kubenswrapper[5016]: I1011 07:49:37.122225 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 07:49:37 crc kubenswrapper[5016]: I1011 07:49:37.123068 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 07:49:37 crc kubenswrapper[5016]: I1011 07:49:37.123510 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 07:49:37 crc kubenswrapper[5016]: I1011 07:49:37.124066 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0bb6e95efc1267f312ef77f8e915572b9364b3b7288f25fafcc7853b98141761"} pod="openshift-machine-config-operator/machine-config-daemon-49bvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 11 07:49:37 crc kubenswrapper[5016]: I1011 07:49:37.124125 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" containerID="cri-o://0bb6e95efc1267f312ef77f8e915572b9364b3b7288f25fafcc7853b98141761" gracePeriod=600 Oct 11 07:49:37 crc kubenswrapper[5016]: I1011 07:49:37.580379 5016 generic.go:334] "Generic (PLEG): container finished" podID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerID="0bb6e95efc1267f312ef77f8e915572b9364b3b7288f25fafcc7853b98141761" exitCode=0 Oct 11 07:49:37 crc kubenswrapper[5016]: I1011 07:49:37.580570 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerDied","Data":"0bb6e95efc1267f312ef77f8e915572b9364b3b7288f25fafcc7853b98141761"} Oct 11 07:49:37 crc kubenswrapper[5016]: I1011 07:49:37.580705 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"9e47e6adcac812a126122f7057fc0b9abd8d456e8565df449156e69a78cd7a4b"} Oct 11 07:49:37 crc kubenswrapper[5016]: I1011 07:49:37.580731 5016 scope.go:117] "RemoveContainer" containerID="461462f8d01f467988ce15691a5cd28af5322080f1c3158032b8c6e1ea64bfd3" Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.262796 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-vx8qq"] Oct 11 07:49:50 crc kubenswrapper[5016]: E1011 07:49:50.264034 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6fae0ac-5622-48ee-9a1a-3997ee8b57aa" containerName="registry" Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.264052 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6fae0ac-5622-48ee-9a1a-3997ee8b57aa" containerName="registry" Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.264168 5016 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="b6fae0ac-5622-48ee-9a1a-3997ee8b57aa" containerName="registry" Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.264643 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-vx8qq" Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.266914 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.267287 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.267632 5016 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-sbq6g" Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.272343 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-vx8qq"] Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.281528 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-5l6fp"] Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.283196 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-5l6fp" Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.285140 5016 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-szkwd" Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.285220 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-v5lh6"] Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.286082 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-v5lh6" Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.288898 5016 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-r7d2q" Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.303509 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmmgh\" (UniqueName: \"kubernetes.io/projected/56428aa7-9fed-488d-af1c-d7a634826bab-kube-api-access-hmmgh\") pod \"cert-manager-cainjector-7f985d654d-vx8qq\" (UID: \"56428aa7-9fed-488d-af1c-d7a634826bab\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-vx8qq" Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.303510 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-v5lh6"] Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.303578 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf6b8\" (UniqueName: \"kubernetes.io/projected/0fcf1ca9-5b2e-42a5-843a-d9afe721c9dc-kube-api-access-cf6b8\") pod \"cert-manager-webhook-5655c58dd6-v5lh6\" (UID: \"0fcf1ca9-5b2e-42a5-843a-d9afe721c9dc\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-v5lh6" Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.303614 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gkx6\" (UniqueName: \"kubernetes.io/projected/f28820b4-4922-4f2a-961b-11049383375a-kube-api-access-6gkx6\") pod \"cert-manager-5b446d88c5-5l6fp\" (UID: \"f28820b4-4922-4f2a-961b-11049383375a\") " pod="cert-manager/cert-manager-5b446d88c5-5l6fp" Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.319400 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-5l6fp"] Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.404854 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmmgh\" (UniqueName: \"kubernetes.io/projected/56428aa7-9fed-488d-af1c-d7a634826bab-kube-api-access-hmmgh\") pod \"cert-manager-cainjector-7f985d654d-vx8qq\" (UID: \"56428aa7-9fed-488d-af1c-d7a634826bab\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-vx8qq" Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.404933 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf6b8\" (UniqueName: \"kubernetes.io/projected/0fcf1ca9-5b2e-42a5-843a-d9afe721c9dc-kube-api-access-cf6b8\") pod \"cert-manager-webhook-5655c58dd6-v5lh6\" (UID: \"0fcf1ca9-5b2e-42a5-843a-d9afe721c9dc\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-v5lh6" Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.404975 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gkx6\" (UniqueName: \"kubernetes.io/projected/f28820b4-4922-4f2a-961b-11049383375a-kube-api-access-6gkx6\") pod \"cert-manager-5b446d88c5-5l6fp\" (UID: \"f28820b4-4922-4f2a-961b-11049383375a\") " pod="cert-manager/cert-manager-5b446d88c5-5l6fp" Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.433612 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmmgh\" (UniqueName: \"kubernetes.io/projected/56428aa7-9fed-488d-af1c-d7a634826bab-kube-api-access-hmmgh\") pod \"cert-manager-cainjector-7f985d654d-vx8qq\" (UID: \"56428aa7-9fed-488d-af1c-d7a634826bab\") " 
pod="cert-manager/cert-manager-cainjector-7f985d654d-vx8qq" Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.435547 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf6b8\" (UniqueName: \"kubernetes.io/projected/0fcf1ca9-5b2e-42a5-843a-d9afe721c9dc-kube-api-access-cf6b8\") pod \"cert-manager-webhook-5655c58dd6-v5lh6\" (UID: \"0fcf1ca9-5b2e-42a5-843a-d9afe721c9dc\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-v5lh6" Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.437825 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gkx6\" (UniqueName: \"kubernetes.io/projected/f28820b4-4922-4f2a-961b-11049383375a-kube-api-access-6gkx6\") pod \"cert-manager-5b446d88c5-5l6fp\" (UID: \"f28820b4-4922-4f2a-961b-11049383375a\") " pod="cert-manager/cert-manager-5b446d88c5-5l6fp" Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.587603 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-vx8qq" Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.599144 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-5l6fp" Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.610481 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-v5lh6" Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.853292 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-vx8qq"] Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.862698 5016 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.882342 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-5l6fp"] Oct 11 07:49:50 crc kubenswrapper[5016]: W1011 07:49:50.888146 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf28820b4_4922_4f2a_961b_11049383375a.slice/crio-23605e78f40a00a74a76bcc44d73c60c618ad34cb059735f2f138db81add5739 WatchSource:0}: Error finding container 23605e78f40a00a74a76bcc44d73c60c618ad34cb059735f2f138db81add5739: Status 404 returned error can't find the container with id 23605e78f40a00a74a76bcc44d73c60c618ad34cb059735f2f138db81add5739 Oct 11 07:49:50 crc kubenswrapper[5016]: W1011 07:49:50.927977 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fcf1ca9_5b2e_42a5_843a_d9afe721c9dc.slice/crio-49d00e4b01c6aef18a627799af023e5cfdedd7d9d29b2e06d9d7b09498cf2017 WatchSource:0}: Error finding container 49d00e4b01c6aef18a627799af023e5cfdedd7d9d29b2e06d9d7b09498cf2017: Status 404 returned error can't find the container with id 49d00e4b01c6aef18a627799af023e5cfdedd7d9d29b2e06d9d7b09498cf2017 Oct 11 07:49:50 crc kubenswrapper[5016]: I1011 07:49:50.930713 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-v5lh6"] Oct 11 07:49:51 crc kubenswrapper[5016]: I1011 07:49:51.679799 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-vx8qq" 
event={"ID":"56428aa7-9fed-488d-af1c-d7a634826bab","Type":"ContainerStarted","Data":"a3f16c9e59a9d2cf1ba2282e8e1c7e49aa55b948cfaad335921d9b32150a66b8"} Oct 11 07:49:51 crc kubenswrapper[5016]: I1011 07:49:51.681514 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-v5lh6" event={"ID":"0fcf1ca9-5b2e-42a5-843a-d9afe721c9dc","Type":"ContainerStarted","Data":"49d00e4b01c6aef18a627799af023e5cfdedd7d9d29b2e06d9d7b09498cf2017"} Oct 11 07:49:51 crc kubenswrapper[5016]: I1011 07:49:51.683065 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-5l6fp" event={"ID":"f28820b4-4922-4f2a-961b-11049383375a","Type":"ContainerStarted","Data":"23605e78f40a00a74a76bcc44d73c60c618ad34cb059735f2f138db81add5739"} Oct 11 07:49:54 crc kubenswrapper[5016]: I1011 07:49:54.696516 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-v5lh6" event={"ID":"0fcf1ca9-5b2e-42a5-843a-d9afe721c9dc","Type":"ContainerStarted","Data":"76f121df717e0bafca096aff45fe80c6385d125c35bdaae16810d7fc3b22f2d7"} Oct 11 07:49:54 crc kubenswrapper[5016]: I1011 07:49:54.697066 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-v5lh6" Oct 11 07:49:54 crc kubenswrapper[5016]: I1011 07:49:54.697802 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-vx8qq" event={"ID":"56428aa7-9fed-488d-af1c-d7a634826bab","Type":"ContainerStarted","Data":"6cf8b1fd9a918cc7f4c9daf5484341d1e18b5b894d5283d5bd1b6872fa586547"} Oct 11 07:49:54 crc kubenswrapper[5016]: I1011 07:49:54.698838 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-5l6fp" event={"ID":"f28820b4-4922-4f2a-961b-11049383375a","Type":"ContainerStarted","Data":"974f11f453e111d6a0a71dc5e169f539453913c207911057f13aa9e584e35837"} Oct 11 07:49:54 crc kubenswrapper[5016]: I1011 07:49:54.711368 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-v5lh6" podStartSLOduration=1.753130531 podStartE2EDuration="4.711355162s" podCreationTimestamp="2025-10-11 07:49:50 +0000 UTC" firstStartedPulling="2025-10-11 07:49:50.929267988 +0000 UTC m=+578.829723934" lastFinishedPulling="2025-10-11 07:49:53.887492619 +0000 UTC m=+581.787948565" observedRunningTime="2025-10-11 07:49:54.709520567 +0000 UTC m=+582.609976513" watchObservedRunningTime="2025-10-11 07:49:54.711355162 +0000 UTC m=+582.611811098" Oct 11 07:49:54 crc kubenswrapper[5016]: I1011 07:49:54.728279 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-vx8qq" podStartSLOduration=1.705409139 podStartE2EDuration="4.728258063s" podCreationTimestamp="2025-10-11 07:49:50 +0000 UTC" firstStartedPulling="2025-10-11 07:49:50.862455871 +0000 UTC m=+578.762911817" lastFinishedPulling="2025-10-11 07:49:53.885304795 +0000 UTC m=+581.785760741" observedRunningTime="2025-10-11 07:49:54.72438838 +0000 UTC m=+582.624844326" watchObservedRunningTime="2025-10-11 07:49:54.728258063 +0000 UTC m=+582.628714009" Oct 11 07:50:00 crc kubenswrapper[5016]: I1011 07:50:00.616994 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-5l6fp" podStartSLOduration=7.547775125 podStartE2EDuration="10.616970118s" podCreationTimestamp="2025-10-11 07:49:50 +0000 UTC" 
firstStartedPulling="2025-10-11 07:49:50.893731703 +0000 UTC m=+578.794187659" lastFinishedPulling="2025-10-11 07:49:53.962926696 +0000 UTC m=+581.863382652" observedRunningTime="2025-10-11 07:49:54.748080237 +0000 UTC m=+582.648536183" watchObservedRunningTime="2025-10-11 07:50:00.616970118 +0000 UTC m=+588.517426094" Oct 11 07:50:00 crc kubenswrapper[5016]: I1011 07:50:00.618186 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-79nv2"] Oct 11 07:50:00 crc kubenswrapper[5016]: I1011 07:50:00.618952 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="ovn-controller" containerID="cri-o://b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304" gracePeriod=30 Oct 11 07:50:00 crc kubenswrapper[5016]: I1011 07:50:00.618986 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="nbdb" containerID="cri-o://b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d" gracePeriod=30 Oct 11 07:50:00 crc kubenswrapper[5016]: I1011 07:50:00.619149 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="sbdb" containerID="cri-o://1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8" gracePeriod=30 Oct 11 07:50:00 crc kubenswrapper[5016]: I1011 07:50:00.619145 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764" gracePeriod=30 Oct 11 07:50:00 crc kubenswrapper[5016]: I1011 07:50:00.619196 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="northd" containerID="cri-o://e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c" gracePeriod=30 Oct 11 07:50:00 crc kubenswrapper[5016]: I1011 07:50:00.619200 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="ovn-acl-logging" containerID="cri-o://df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d" gracePeriod=30 Oct 11 07:50:00 crc kubenswrapper[5016]: I1011 07:50:00.619340 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="kube-rbac-proxy-node" containerID="cri-o://bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3" gracePeriod=30 Oct 11 07:50:00 crc kubenswrapper[5016]: I1011 07:50:00.620451 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-v5lh6" Oct 11 07:50:00 crc kubenswrapper[5016]: I1011 07:50:00.663040 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="ovnkube-controller" 
containerID="cri-o://c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a" gracePeriod=30 Oct 11 07:50:00 crc kubenswrapper[5016]: I1011 07:50:00.745538 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lbbb2_48e55d9a-f690-40ae-ba16-e91c4d9d3a95/kube-multus/2.log" Oct 11 07:50:00 crc kubenswrapper[5016]: I1011 07:50:00.747246 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lbbb2_48e55d9a-f690-40ae-ba16-e91c4d9d3a95/kube-multus/1.log" Oct 11 07:50:00 crc kubenswrapper[5016]: I1011 07:50:00.747283 5016 generic.go:334] "Generic (PLEG): container finished" podID="48e55d9a-f690-40ae-ba16-e91c4d9d3a95" containerID="f3a05795696442d45f03a1ea37c6e6ba23599cdc17efa338b7d62426d4f98771" exitCode=2 Oct 11 07:50:00 crc kubenswrapper[5016]: I1011 07:50:00.747309 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lbbb2" event={"ID":"48e55d9a-f690-40ae-ba16-e91c4d9d3a95","Type":"ContainerDied","Data":"f3a05795696442d45f03a1ea37c6e6ba23599cdc17efa338b7d62426d4f98771"} Oct 11 07:50:00 crc kubenswrapper[5016]: I1011 07:50:00.747340 5016 scope.go:117] "RemoveContainer" containerID="fdf8f1baa34989ef57e3b44aeb2d3bd578086e2a41bcee8e6cb2d6e6f689fb3e" Oct 11 07:50:00 crc kubenswrapper[5016]: I1011 07:50:00.747954 5016 scope.go:117] "RemoveContainer" containerID="f3a05795696442d45f03a1ea37c6e6ba23599cdc17efa338b7d62426d4f98771" Oct 11 07:50:00 crc kubenswrapper[5016]: E1011 07:50:00.748139 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-lbbb2_openshift-multus(48e55d9a-f690-40ae-ba16-e91c4d9d3a95)\"" pod="openshift-multus/multus-lbbb2" podUID="48e55d9a-f690-40ae-ba16-e91c4d9d3a95" Oct 11 07:50:00 crc kubenswrapper[5016]: I1011 07:50:00.998386 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-79nv2_68e9f942-5043-4fc3-9133-b608e8cd4ac0/ovnkube-controller/3.log" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.001755 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-79nv2_68e9f942-5043-4fc3-9133-b608e8cd4ac0/ovn-acl-logging/0.log" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.002743 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-79nv2_68e9f942-5043-4fc3-9133-b608e8cd4ac0/ovn-controller/0.log" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.003724 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.060157 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6tfs5"] Oct 11 07:50:01 crc kubenswrapper[5016]: E1011 07:50:01.060672 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="ovnkube-controller" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.060701 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="ovnkube-controller" Oct 11 07:50:01 crc kubenswrapper[5016]: E1011 07:50:01.060751 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="ovn-acl-logging" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.060764 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="ovn-acl-logging" Oct 11 07:50:01 crc kubenswrapper[5016]: E1011 07:50:01.060817 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="ovn-controller" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.060984 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="ovn-controller" Oct 11 07:50:01 crc kubenswrapper[5016]: E1011 07:50:01.061006 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="ovnkube-controller" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.061017 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="ovnkube-controller" Oct 11 07:50:01 crc kubenswrapper[5016]: E1011 07:50:01.061032 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="sbdb" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.061042 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="sbdb" Oct 11 07:50:01 crc kubenswrapper[5016]: E1011 07:50:01.061060 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="kube-rbac-proxy-ovn-metrics" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.061071 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="kube-rbac-proxy-ovn-metrics" Oct 11 07:50:01 crc kubenswrapper[5016]: E1011 07:50:01.061089 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="northd" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.061100 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="northd" Oct 11 07:50:01 crc kubenswrapper[5016]: E1011 07:50:01.061125 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="kube-rbac-proxy-node" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.061136 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="kube-rbac-proxy-node" Oct 11 07:50:01 crc kubenswrapper[5016]: E1011 07:50:01.061152 5016 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="nbdb" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.061163 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="nbdb" Oct 11 07:50:01 crc kubenswrapper[5016]: E1011 07:50:01.061183 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="kubecfg-setup" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.061194 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="kubecfg-setup" Oct 11 07:50:01 crc kubenswrapper[5016]: E1011 07:50:01.061207 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="ovnkube-controller" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.061218 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="ovnkube-controller" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.061471 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="ovnkube-controller" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.061491 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="kube-rbac-proxy-ovn-metrics" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.061511 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="ovn-acl-logging" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.061527 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="kube-rbac-proxy-node" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.061538 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="nbdb" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.061553 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="ovnkube-controller" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.061566 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="sbdb" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.061577 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="ovnkube-controller" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.061589 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="ovn-controller" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.061606 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="northd" Oct 11 07:50:01 crc kubenswrapper[5016]: E1011 07:50:01.061822 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="ovnkube-controller" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.061839 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="ovnkube-controller" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.062007 5016 
memory_manager.go:354] "RemoveStaleState removing state" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="ovnkube-controller" Oct 11 07:50:01 crc kubenswrapper[5016]: E1011 07:50:01.062189 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="ovnkube-controller" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.062204 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="ovnkube-controller" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.062497 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerName="ovnkube-controller" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.065043 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.151513 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-systemd-units\") pod \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.151620 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "68e9f942-5043-4fc3-9133-b608e8cd4ac0" (UID: "68e9f942-5043-4fc3-9133-b608e8cd4ac0"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.152413 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.152479 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "68e9f942-5043-4fc3-9133-b608e8cd4ac0" (UID: "68e9f942-5043-4fc3-9133-b608e8cd4ac0"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.152694 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-var-lib-openvswitch\") pod \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.152744 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-kubelet\") pod \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.152778 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-cni-bin\") pod \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.152792 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "68e9f942-5043-4fc3-9133-b608e8cd4ac0" (UID: "68e9f942-5043-4fc3-9133-b608e8cd4ac0"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.152803 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/68e9f942-5043-4fc3-9133-b608e8cd4ac0-ovnkube-config\") pod \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.152824 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "68e9f942-5043-4fc3-9133-b608e8cd4ac0" (UID: "68e9f942-5043-4fc3-9133-b608e8cd4ac0"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.152840 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-slash\") pod \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.152860 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-node-log\") pod \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.152855 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "68e9f942-5043-4fc3-9133-b608e8cd4ac0" (UID: "68e9f942-5043-4fc3-9133-b608e8cd4ac0"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.152888 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-etc-openvswitch\") pod \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.152912 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-run-openvswitch\") pod \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.152939 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/68e9f942-5043-4fc3-9133-b608e8cd4ac0-ovnkube-script-lib\") pod \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.152913 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-node-log" (OuterVolumeSpecName: "node-log") pod "68e9f942-5043-4fc3-9133-b608e8cd4ac0" (UID: "68e9f942-5043-4fc3-9133-b608e8cd4ac0"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.152954 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "68e9f942-5043-4fc3-9133-b608e8cd4ac0" (UID: "68e9f942-5043-4fc3-9133-b608e8cd4ac0"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.152959 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-log-socket\") pod \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.152938 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-slash" (OuterVolumeSpecName: "host-slash") pod "68e9f942-5043-4fc3-9133-b608e8cd4ac0" (UID: "68e9f942-5043-4fc3-9133-b608e8cd4ac0"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.152982 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-log-socket" (OuterVolumeSpecName: "log-socket") pod "68e9f942-5043-4fc3-9133-b608e8cd4ac0" (UID: "68e9f942-5043-4fc3-9133-b608e8cd4ac0"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.152980 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "68e9f942-5043-4fc3-9133-b608e8cd4ac0" (UID: "68e9f942-5043-4fc3-9133-b608e8cd4ac0"). 
InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153024 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/68e9f942-5043-4fc3-9133-b608e8cd4ac0-env-overrides\") pod \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153070 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-run-systemd\") pod \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153091 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-run-netns\") pod \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153109 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-run-ovn-kubernetes\") pod \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153147 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-cni-netd\") pod \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153165 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-run-ovn\") pod \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153174 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68e9f942-5043-4fc3-9133-b608e8cd4ac0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "68e9f942-5043-4fc3-9133-b608e8cd4ac0" (UID: "68e9f942-5043-4fc3-9133-b608e8cd4ac0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153190 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg9zg\" (UniqueName: \"kubernetes.io/projected/68e9f942-5043-4fc3-9133-b608e8cd4ac0-kube-api-access-sg9zg\") pod \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153210 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "68e9f942-5043-4fc3-9133-b608e8cd4ac0" (UID: "68e9f942-5043-4fc3-9133-b608e8cd4ac0"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153220 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/68e9f942-5043-4fc3-9133-b608e8cd4ac0-ovn-node-metrics-cert\") pod \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\" (UID: \"68e9f942-5043-4fc3-9133-b608e8cd4ac0\") " Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153238 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "68e9f942-5043-4fc3-9133-b608e8cd4ac0" (UID: "68e9f942-5043-4fc3-9133-b608e8cd4ac0"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153265 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "68e9f942-5043-4fc3-9133-b608e8cd4ac0" (UID: "68e9f942-5043-4fc3-9133-b608e8cd4ac0"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153288 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "68e9f942-5043-4fc3-9133-b608e8cd4ac0" (UID: "68e9f942-5043-4fc3-9133-b608e8cd4ac0"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153398 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-host-run-netns\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153442 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-ovn-node-metrics-cert\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153469 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbj9h\" (UniqueName: \"kubernetes.io/projected/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-kube-api-access-bbj9h\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153494 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-ovnkube-config\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153517 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-host-kubelet\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153530 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68e9f942-5043-4fc3-9133-b608e8cd4ac0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "68e9f942-5043-4fc3-9133-b608e8cd4ac0" (UID: "68e9f942-5043-4fc3-9133-b608e8cd4ac0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153549 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-env-overrides\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153573 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-host-run-ovn-kubernetes\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153612 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-host-cni-bin\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153634 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-systemd-units\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153686 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-node-log\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153718 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-etc-openvswitch\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153742 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-run-systemd\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153778 5016 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-host-cni-netd\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153833 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-run-ovn\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153870 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-ovnkube-script-lib\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153894 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-run-openvswitch\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153917 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-log-socket\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153947 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.153611 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68e9f942-5043-4fc3-9133-b608e8cd4ac0-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "68e9f942-5043-4fc3-9133-b608e8cd4ac0" (UID: "68e9f942-5043-4fc3-9133-b608e8cd4ac0"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.154918 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-var-lib-openvswitch\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.154966 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-host-slash\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.155040 5016 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-systemd-units\") on node \"crc\" DevicePath \"\"" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.155055 5016 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.155068 5016 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.155079 5016 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-kubelet\") on node \"crc\" DevicePath \"\"" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.155089 5016 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-cni-bin\") on node \"crc\" DevicePath \"\"" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.155108 5016 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/68e9f942-5043-4fc3-9133-b608e8cd4ac0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.155120 5016 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-slash\") on node \"crc\" DevicePath \"\"" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.155131 5016 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-node-log\") on node \"crc\" DevicePath \"\"" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.155142 5016 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.155152 5016 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-run-openvswitch\") on node 
\"crc\" DevicePath \"\"" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.155163 5016 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-log-socket\") on node \"crc\" DevicePath \"\"" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.155174 5016 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/68e9f942-5043-4fc3-9133-b608e8cd4ac0-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.155184 5016 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/68e9f942-5043-4fc3-9133-b608e8cd4ac0-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.155194 5016 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-run-netns\") on node \"crc\" DevicePath \"\"" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.155205 5016 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.155216 5016 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-host-cni-netd\") on node \"crc\" DevicePath \"\"" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.155228 5016 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.159006 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68e9f942-5043-4fc3-9133-b608e8cd4ac0-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "68e9f942-5043-4fc3-9133-b608e8cd4ac0" (UID: "68e9f942-5043-4fc3-9133-b608e8cd4ac0"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.160947 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68e9f942-5043-4fc3-9133-b608e8cd4ac0-kube-api-access-sg9zg" (OuterVolumeSpecName: "kube-api-access-sg9zg") pod "68e9f942-5043-4fc3-9133-b608e8cd4ac0" (UID: "68e9f942-5043-4fc3-9133-b608e8cd4ac0"). InnerVolumeSpecName "kube-api-access-sg9zg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.176871 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "68e9f942-5043-4fc3-9133-b608e8cd4ac0" (UID: "68e9f942-5043-4fc3-9133-b608e8cd4ac0"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.256592 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-ovnkube-script-lib\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.256725 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-run-openvswitch\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.256835 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-run-openvswitch\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.256892 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-log-socket\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.256948 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-log-socket\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.256967 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.257026 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.257082 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-var-lib-openvswitch\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.257110 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-var-lib-openvswitch\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.257124 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-host-slash\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.257171 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-host-run-netns\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.257210 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-ovn-node-metrics-cert\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.257249 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbj9h\" (UniqueName: \"kubernetes.io/projected/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-kube-api-access-bbj9h\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.257266 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-host-slash\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.257282 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-ovnkube-config\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.257320 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-host-kubelet\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.257366 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-env-overrides\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.257399 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-host-run-ovn-kubernetes\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc 
kubenswrapper[5016]: I1011 07:50:01.257439 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-host-cni-bin\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.257476 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-systemd-units\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.257509 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-node-log\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.257553 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-etc-openvswitch\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.257595 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-run-systemd\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.257606 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-host-run-netns\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.257629 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-host-cni-netd\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.257645 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-host-run-ovn-kubernetes\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.257709 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-run-ovn\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.257765 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" 
(UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-run-systemd\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.257926 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-host-cni-bin\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.257991 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-node-log\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.258027 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-systemd-units\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.258062 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-ovnkube-script-lib\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.258080 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-host-cni-netd\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.258097 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-etc-openvswitch\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.258125 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-run-ovn\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.258133 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-host-kubelet\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.258162 5016 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/68e9f942-5043-4fc3-9133-b608e8cd4ac0-run-systemd\") on node \"crc\" DevicePath \"\"" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.258185 5016 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-sg9zg\" (UniqueName: \"kubernetes.io/projected/68e9f942-5043-4fc3-9133-b608e8cd4ac0-kube-api-access-sg9zg\") on node \"crc\" DevicePath \"\"" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.258204 5016 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/68e9f942-5043-4fc3-9133-b608e8cd4ac0-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.258810 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-env-overrides\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.258867 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-ovnkube-config\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.263278 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-ovn-node-metrics-cert\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.277937 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbj9h\" (UniqueName: \"kubernetes.io/projected/59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc-kube-api-access-bbj9h\") pod \"ovnkube-node-6tfs5\" (UID: \"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc\") " pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.381635 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.755456 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lbbb2_48e55d9a-f690-40ae-ba16-e91c4d9d3a95/kube-multus/2.log" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.758603 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-79nv2_68e9f942-5043-4fc3-9133-b608e8cd4ac0/ovnkube-controller/3.log" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.761062 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-79nv2_68e9f942-5043-4fc3-9133-b608e8cd4ac0/ovn-acl-logging/0.log" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.761779 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-79nv2_68e9f942-5043-4fc3-9133-b608e8cd4ac0/ovn-controller/0.log" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762339 5016 generic.go:334] "Generic (PLEG): container finished" podID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerID="c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a" exitCode=0 Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762368 5016 generic.go:334] "Generic (PLEG): container finished" podID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerID="1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8" exitCode=0 Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762382 5016 generic.go:334] "Generic (PLEG): container finished" podID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerID="b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d" exitCode=0 Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762393 5016 generic.go:334] "Generic (PLEG): container finished" podID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerID="e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c" exitCode=0 Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762422 5016 generic.go:334] "Generic (PLEG): container finished" podID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerID="069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764" exitCode=0 Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762431 5016 generic.go:334] "Generic (PLEG): container finished" podID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerID="bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3" exitCode=0 Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762443 5016 generic.go:334] "Generic (PLEG): container finished" podID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerID="df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d" exitCode=143 Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762452 5016 generic.go:334] "Generic (PLEG): container finished" podID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" containerID="b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304" exitCode=143 Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762518 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" event={"ID":"68e9f942-5043-4fc3-9133-b608e8cd4ac0","Type":"ContainerDied","Data":"c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762547 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" 
event={"ID":"68e9f942-5043-4fc3-9133-b608e8cd4ac0","Type":"ContainerDied","Data":"1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762585 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" event={"ID":"68e9f942-5043-4fc3-9133-b608e8cd4ac0","Type":"ContainerDied","Data":"b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762598 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" event={"ID":"68e9f942-5043-4fc3-9133-b608e8cd4ac0","Type":"ContainerDied","Data":"e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762610 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" event={"ID":"68e9f942-5043-4fc3-9133-b608e8cd4ac0","Type":"ContainerDied","Data":"069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762623 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" event={"ID":"68e9f942-5043-4fc3-9133-b608e8cd4ac0","Type":"ContainerDied","Data":"bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762638 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762682 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762691 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762698 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762706 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762714 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762763 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762772 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762781 5016 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762792 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" event={"ID":"68e9f942-5043-4fc3-9133-b608e8cd4ac0","Type":"ContainerDied","Data":"df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762804 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762812 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762840 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762848 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762855 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762863 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762870 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762877 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762885 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762892 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762927 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" event={"ID":"68e9f942-5043-4fc3-9133-b608e8cd4ac0","Type":"ContainerDied","Data":"b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762939 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762947 5016 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762954 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762962 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762969 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.762976 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.763004 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.763012 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.763019 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.763026 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.763037 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" event={"ID":"68e9f942-5043-4fc3-9133-b608e8cd4ac0","Type":"ContainerDied","Data":"1f6d475bbbabab2501dc990b8ffde0a4dc42e20e3ea6c299608cfe052b770f83"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.763048 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.763057 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.763086 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.763094 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.763101 5016 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.763109 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.763117 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.763124 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.763132 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.763139 5016 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.763177 5016 scope.go:117] "RemoveContainer" containerID="c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.763343 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-79nv2" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.769072 5016 generic.go:334] "Generic (PLEG): container finished" podID="59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc" containerID="0427f227553aaab4d6074e144ef2119841abd382c425dd1a131250fd537be9bd" exitCode=0 Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.769114 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" event={"ID":"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc","Type":"ContainerDied","Data":"0427f227553aaab4d6074e144ef2119841abd382c425dd1a131250fd537be9bd"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.769143 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" event={"ID":"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc","Type":"ContainerStarted","Data":"7755a96f4bdec0163961c066b27ccfbdd00a50c5d1538a4fb8fe6afe5575918b"} Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.848190 5016 scope.go:117] "RemoveContainer" containerID="ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.880251 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-79nv2"] Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.881994 5016 scope.go:117] "RemoveContainer" containerID="1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.884742 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-79nv2"] Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.914895 5016 scope.go:117] "RemoveContainer" 
containerID="b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.931075 5016 scope.go:117] "RemoveContainer" containerID="e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.950578 5016 scope.go:117] "RemoveContainer" containerID="069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764" Oct 11 07:50:01 crc kubenswrapper[5016]: I1011 07:50:01.973758 5016 scope.go:117] "RemoveContainer" containerID="bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.012953 5016 scope.go:117] "RemoveContainer" containerID="df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.053468 5016 scope.go:117] "RemoveContainer" containerID="b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.071441 5016 scope.go:117] "RemoveContainer" containerID="3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.103132 5016 scope.go:117] "RemoveContainer" containerID="c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a" Oct 11 07:50:02 crc kubenswrapper[5016]: E1011 07:50:02.103585 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a\": container with ID starting with c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a not found: ID does not exist" containerID="c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.103624 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a"} err="failed to get container status \"c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a\": rpc error: code = NotFound desc = could not find container \"c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a\": container with ID starting with c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a not found: ID does not exist" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.103670 5016 scope.go:117] "RemoveContainer" containerID="ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2" Oct 11 07:50:02 crc kubenswrapper[5016]: E1011 07:50:02.103972 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2\": container with ID starting with ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2 not found: ID does not exist" containerID="ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.104009 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2"} err="failed to get container status \"ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2\": rpc error: code = NotFound desc = could not find container \"ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2\": container with ID starting with 
ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2 not found: ID does not exist" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.104035 5016 scope.go:117] "RemoveContainer" containerID="1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8" Oct 11 07:50:02 crc kubenswrapper[5016]: E1011 07:50:02.104287 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\": container with ID starting with 1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8 not found: ID does not exist" containerID="1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.104308 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8"} err="failed to get container status \"1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\": rpc error: code = NotFound desc = could not find container \"1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\": container with ID starting with 1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8 not found: ID does not exist" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.104325 5016 scope.go:117] "RemoveContainer" containerID="b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d" Oct 11 07:50:02 crc kubenswrapper[5016]: E1011 07:50:02.104556 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\": container with ID starting with b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d not found: ID does not exist" containerID="b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.104589 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d"} err="failed to get container status \"b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\": rpc error: code = NotFound desc = could not find container \"b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\": container with ID starting with b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d not found: ID does not exist" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.104632 5016 scope.go:117] "RemoveContainer" containerID="e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c" Oct 11 07:50:02 crc kubenswrapper[5016]: E1011 07:50:02.104901 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\": container with ID starting with e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c not found: ID does not exist" containerID="e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.104923 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c"} err="failed to get container status \"e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\": rpc 
error: code = NotFound desc = could not find container \"e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\": container with ID starting with e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c not found: ID does not exist" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.104941 5016 scope.go:117] "RemoveContainer" containerID="069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764" Oct 11 07:50:02 crc kubenswrapper[5016]: E1011 07:50:02.105177 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\": container with ID starting with 069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764 not found: ID does not exist" containerID="069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.105210 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764"} err="failed to get container status \"069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\": rpc error: code = NotFound desc = could not find container \"069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\": container with ID starting with 069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764 not found: ID does not exist" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.105233 5016 scope.go:117] "RemoveContainer" containerID="bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3" Oct 11 07:50:02 crc kubenswrapper[5016]: E1011 07:50:02.105468 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\": container with ID starting with bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3 not found: ID does not exist" containerID="bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.105488 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3"} err="failed to get container status \"bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\": rpc error: code = NotFound desc = could not find container \"bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\": container with ID starting with bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3 not found: ID does not exist" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.105504 5016 scope.go:117] "RemoveContainer" containerID="df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d" Oct 11 07:50:02 crc kubenswrapper[5016]: E1011 07:50:02.105929 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\": container with ID starting with df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d not found: ID does not exist" containerID="df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.105953 5016 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d"} err="failed to get container status \"df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\": rpc error: code = NotFound desc = could not find container \"df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\": container with ID starting with df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d not found: ID does not exist" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.105970 5016 scope.go:117] "RemoveContainer" containerID="b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304" Oct 11 07:50:02 crc kubenswrapper[5016]: E1011 07:50:02.106184 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\": container with ID starting with b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304 not found: ID does not exist" containerID="b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.106205 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304"} err="failed to get container status \"b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\": rpc error: code = NotFound desc = could not find container \"b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\": container with ID starting with b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304 not found: ID does not exist" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.106220 5016 scope.go:117] "RemoveContainer" containerID="3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38" Oct 11 07:50:02 crc kubenswrapper[5016]: E1011 07:50:02.106462 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\": container with ID starting with 3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38 not found: ID does not exist" containerID="3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.106503 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38"} err="failed to get container status \"3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\": rpc error: code = NotFound desc = could not find container \"3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\": container with ID starting with 3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38 not found: ID does not exist" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.106531 5016 scope.go:117] "RemoveContainer" containerID="c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.106828 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a"} err="failed to get container status \"c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a\": rpc error: code = NotFound desc = could not find container 
\"c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a\": container with ID starting with c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a not found: ID does not exist" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.106856 5016 scope.go:117] "RemoveContainer" containerID="ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.107443 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2"} err="failed to get container status \"ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2\": rpc error: code = NotFound desc = could not find container \"ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2\": container with ID starting with ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2 not found: ID does not exist" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.107504 5016 scope.go:117] "RemoveContainer" containerID="1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.108022 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8"} err="failed to get container status \"1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\": rpc error: code = NotFound desc = could not find container \"1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\": container with ID starting with 1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8 not found: ID does not exist" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.108059 5016 scope.go:117] "RemoveContainer" containerID="b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.108421 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d"} err="failed to get container status \"b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\": rpc error: code = NotFound desc = could not find container \"b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\": container with ID starting with b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d not found: ID does not exist" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.108450 5016 scope.go:117] "RemoveContainer" containerID="e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.108819 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c"} err="failed to get container status \"e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\": rpc error: code = NotFound desc = could not find container \"e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\": container with ID starting with e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c not found: ID does not exist" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.108862 5016 scope.go:117] "RemoveContainer" containerID="069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.109354 5016 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764"} err="failed to get container status \"069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\": rpc error: code = NotFound desc = could not find container \"069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\": container with ID starting with 069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764 not found: ID does not exist" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.109382 5016 scope.go:117] "RemoveContainer" containerID="bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.109737 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3"} err="failed to get container status \"bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\": rpc error: code = NotFound desc = could not find container \"bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\": container with ID starting with bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3 not found: ID does not exist" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.109973 5016 scope.go:117] "RemoveContainer" containerID="df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.110435 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d"} err="failed to get container status \"df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\": rpc error: code = NotFound desc = could not find container \"df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\": container with ID starting with df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d not found: ID does not exist" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.110463 5016 scope.go:117] "RemoveContainer" containerID="b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.110801 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304"} err="failed to get container status \"b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\": rpc error: code = NotFound desc = could not find container \"b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\": container with ID starting with b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304 not found: ID does not exist" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.111005 5016 scope.go:117] "RemoveContainer" containerID="3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.111354 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38"} err="failed to get container status \"3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\": rpc error: code = NotFound desc = could not find container \"3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\": container with ID starting with 
3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38 not found: ID does not exist" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.111383 5016 scope.go:117] "RemoveContainer" containerID="c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.111646 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a"} err="failed to get container status \"c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a\": rpc error: code = NotFound desc = could not find container \"c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a\": container with ID starting with c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a not found: ID does not exist" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.111694 5016 scope.go:117] "RemoveContainer" containerID="ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.111993 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2"} err="failed to get container status \"ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2\": rpc error: code = NotFound desc = could not find container \"ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2\": container with ID starting with ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2 not found: ID does not exist" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.112026 5016 scope.go:117] "RemoveContainer" containerID="1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.112291 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8"} err="failed to get container status \"1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\": rpc error: code = NotFound desc = could not find container \"1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\": container with ID starting with 1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8 not found: ID does not exist" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.112317 5016 scope.go:117] "RemoveContainer" containerID="b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.112985 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d"} err="failed to get container status \"b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\": rpc error: code = NotFound desc = could not find container \"b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\": container with ID starting with b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d not found: ID does not exist" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.113014 5016 scope.go:117] "RemoveContainer" containerID="e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.113280 5016 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c"} err="failed to get container status \"e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\": rpc error: code = NotFound desc = could not find container \"e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\": container with ID starting with e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c not found: ID does not exist" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.113301 5016 scope.go:117] "RemoveContainer" containerID="069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.113522 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764"} err="failed to get container status \"069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\": rpc error: code = NotFound desc = could not find container \"069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\": container with ID starting with 069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764 not found: ID does not exist" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.113548 5016 scope.go:117] "RemoveContainer" containerID="bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.113803 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3"} err="failed to get container status \"bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\": rpc error: code = NotFound desc = could not find container \"bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\": container with ID starting with bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3 not found: ID does not exist" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.113829 5016 scope.go:117] "RemoveContainer" containerID="df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.114231 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d"} err="failed to get container status \"df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\": rpc error: code = NotFound desc = could not find container \"df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\": container with ID starting with df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d not found: ID does not exist" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.114258 5016 scope.go:117] "RemoveContainer" containerID="b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304" Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.114539 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304"} err="failed to get container status \"b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\": rpc error: code = NotFound desc = could not find container \"b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\": container with ID starting with b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304 not found: ID does not exist" Oct 
11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.114565 5016 scope.go:117] "RemoveContainer" containerID="3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38"
Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.115201 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38"} err="failed to get container status \"3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\": rpc error: code = NotFound desc = could not find container \"3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\": container with ID starting with 3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38 not found: ID does not exist"
Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.115283 5016 scope.go:117] "RemoveContainer" containerID="c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a"
Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.115725 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a"} err="failed to get container status \"c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a\": rpc error: code = NotFound desc = could not find container \"c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a\": container with ID starting with c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a not found: ID does not exist"
Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.115764 5016 scope.go:117] "RemoveContainer" containerID="ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2"
Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.116119 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2"} err="failed to get container status \"ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2\": rpc error: code = NotFound desc = could not find container \"ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2\": container with ID starting with ad96cff52344ccb3bb8d462fb9572d996e759d332731d86b1610d2a2a3c873a2 not found: ID does not exist"
Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.116145 5016 scope.go:117] "RemoveContainer" containerID="1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8"
Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.116374 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8"} err="failed to get container status \"1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\": rpc error: code = NotFound desc = could not find container \"1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8\": container with ID starting with 1e797449bd6d81877ce8df4811d37e9b0e55f41b8705f8371ce7dd9b7818ecd8 not found: ID does not exist"
Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.116399 5016 scope.go:117] "RemoveContainer" containerID="b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d"
Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.117619 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d"} err="failed to get container status \"b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\": rpc error: code = NotFound desc = could not find container \"b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d\": container with ID starting with b842897333597c05c386ed419976f43ed0e76d8ad8302b2184cc026c0844ee9d not found: ID does not exist"
Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.117647 5016 scope.go:117] "RemoveContainer" containerID="e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c"
Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.119783 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c"} err="failed to get container status \"e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\": rpc error: code = NotFound desc = could not find container \"e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c\": container with ID starting with e76d3e1b1365095c650e11472f33fda045de5c143ce24f7a18bc5fddc95d996c not found: ID does not exist"
Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.119814 5016 scope.go:117] "RemoveContainer" containerID="069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764"
Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.120995 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764"} err="failed to get container status \"069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\": rpc error: code = NotFound desc = could not find container \"069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764\": container with ID starting with 069a974fd393df87f610337f4bf41e4952b9af79a789c73cba1ab15246128764 not found: ID does not exist"
Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.121021 5016 scope.go:117] "RemoveContainer" containerID="bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3"
Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.121380 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3"} err="failed to get container status \"bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\": rpc error: code = NotFound desc = could not find container \"bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3\": container with ID starting with bc81c7aceb0724b56ed6d71773d77639a9a72c6b016f920e741f1e0012dbcab3 not found: ID does not exist"
Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.121410 5016 scope.go:117] "RemoveContainer" containerID="df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d"
Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.122790 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d"} err="failed to get container status \"df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\": rpc error: code = NotFound desc = could not find container \"df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d\": container with ID starting with df96a9e4a0f1ef1c734c51ef6be3b82c0c97fd99948b82616192fc32f9404b2d not found: ID does not exist"
Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.122820 5016 scope.go:117] "RemoveContainer" containerID="b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304"
Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.123152 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304"} err="failed to get container status \"b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\": rpc error: code = NotFound desc = could not find container \"b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304\": container with ID starting with b777b90710086d940552a7feac562efb0626f095c8f931a3c5e18a0f8be02304 not found: ID does not exist"
Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.123176 5016 scope.go:117] "RemoveContainer" containerID="3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38"
Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.123662 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38"} err="failed to get container status \"3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\": rpc error: code = NotFound desc = could not find container \"3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38\": container with ID starting with 3af39c365b6208288575fc93e7311a79f2e6ee9d444886bde9cb6b815d0e5e38 not found: ID does not exist"
Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.123682 5016 scope.go:117] "RemoveContainer" containerID="c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a"
Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.123892 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a"} err="failed to get container status \"c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a\": rpc error: code = NotFound desc = could not find container \"c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a\": container with ID starting with c9f8f49cddd357b2feb3c3b26be453b2a4af192d597b5514e8621f1fe857564a not found: ID does not exist"
Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.780986 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" event={"ID":"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc","Type":"ContainerStarted","Data":"d780e8aa525cd2971652abc1f9675c445f50972cc335b95052076a5d81a00917"}
Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.781263 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" event={"ID":"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc","Type":"ContainerStarted","Data":"80c46d615b6e0b2ab54449b2a5e9f5b3360e936d5ece5d00cb6e8e237807716c"}
Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.781279 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" event={"ID":"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc","Type":"ContainerStarted","Data":"36b6db3b1e4988f3b10b3d79fc08707b0b74e3157ecf141d31144130d0e409dc"}
Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.781294 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" event={"ID":"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc","Type":"ContainerStarted","Data":"77537f5fe9ee9297254c9be97186ebcd2c87d0d515ab17984f56a9b78cb791bb"}
Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.781307 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" event={"ID":"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc","Type":"ContainerStarted","Data":"d35b588f0aa0876f6eb16c2b985a8eaaf30e2383ef65d8e2cedc261cd65996a4"}
Oct 11 07:50:02 crc kubenswrapper[5016]: I1011 07:50:02.781319 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" event={"ID":"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc","Type":"ContainerStarted","Data":"b7461d50135e740b61c32e4f67400b10135bd6c8bbd1ac8c4798a6b37a69d281"}
Oct 11 07:50:03 crc kubenswrapper[5016]: I1011 07:50:03.146729 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68e9f942-5043-4fc3-9133-b608e8cd4ac0" path="/var/lib/kubelet/pods/68e9f942-5043-4fc3-9133-b608e8cd4ac0/volumes"
Oct 11 07:50:05 crc kubenswrapper[5016]: I1011 07:50:05.806250 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" event={"ID":"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc","Type":"ContainerStarted","Data":"084c3b085534301efbcf047cd9df0e7c39a56de3bca1071492ec8ffc86c58432"}
Oct 11 07:50:07 crc kubenswrapper[5016]: I1011 07:50:07.821631 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" event={"ID":"59c64ea7-1ccf-4cf3-8140-26c7a2d0cdbc","Type":"ContainerStarted","Data":"d59ed51d69a56493430cddfbcb5238fe2ef0fe78960bf3332e8721c14ebaf83b"}
Oct 11 07:50:07 crc kubenswrapper[5016]: I1011 07:50:07.822014 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5"
Oct 11 07:50:07 crc kubenswrapper[5016]: I1011 07:50:07.822031 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5"
Oct 11 07:50:07 crc kubenswrapper[5016]: I1011 07:50:07.850001 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" podStartSLOduration=6.849981446 podStartE2EDuration="6.849981446s" podCreationTimestamp="2025-10-11 07:50:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:50:07.846268011 +0000 UTC m=+595.746723957" watchObservedRunningTime="2025-10-11 07:50:07.849981446 +0000 UTC m=+595.750437402"
Oct 11 07:50:07 crc kubenswrapper[5016]: I1011 07:50:07.853337 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5"
Oct 11 07:50:08 crc kubenswrapper[5016]: I1011 07:50:08.827386 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5"
Oct 11 07:50:08 crc kubenswrapper[5016]: I1011 07:50:08.856113 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5"
Oct 11 07:50:12 crc kubenswrapper[5016]: I1011 07:50:12.133782 5016 scope.go:117] "RemoveContainer" containerID="f3a05795696442d45f03a1ea37c6e6ba23599cdc17efa338b7d62426d4f98771"
Oct 11 07:50:12 crc kubenswrapper[5016]: E1011 07:50:12.134449 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-lbbb2_openshift-multus(48e55d9a-f690-40ae-ba16-e91c4d9d3a95)\"" pod="openshift-multus/multus-lbbb2" podUID="48e55d9a-f690-40ae-ba16-e91c4d9d3a95"
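The RemoveContainer / "DeleteContainer returned error" pairs above are the kubelet retrying cleanup of containers that CRI-O has already deleted; the NotFound responses mean the retries are idempotent no-ops, not failures. A minimal Python sketch (assuming the journal has been saved one entry per line to a file named kubelet.log, a hypothetical path) that pairs each removal attempt with a NotFound result and flags any container that never got one:

import re
from collections import Counter

# A RemoveContainer entry names the full 64-hex-char container ID; a matching
# "DeleteContainer returned error ... NotFound" entry for the same ID means
# CRI-O had already deleted it (an idempotent retry, not a real failure).
remove_re = re.compile(r'scope\.go:\d+\] "RemoveContainer" containerID="([0-9a-f]{64})"')
notfound_re = re.compile(r'"DeleteContainer returned error".*?"ID":"([0-9a-f]{64})".*?code = NotFound')

attempts, already_gone = Counter(), Counter()
with open("kubelet.log") as f:          # hypothetical path to the saved journal
    for line in f:
        for cid in remove_re.findall(line):
            attempts[cid] += 1
        for cid in notfound_re.findall(line):
            already_gone[cid] += 1

for cid, n in attempts.items():
    status = "already gone" if already_gone[cid] else "check manually"
    print(f"{cid[:12]}  attempts={n}  {status}")

Run against this section, every attempted ID should report "already gone", including the repeated attempts for 3af39c36... and c9f8f49c....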
podUID="48e55d9a-f690-40ae-ba16-e91c4d9d3a95" Oct 11 07:50:24 crc kubenswrapper[5016]: I1011 07:50:24.133364 5016 scope.go:117] "RemoveContainer" containerID="f3a05795696442d45f03a1ea37c6e6ba23599cdc17efa338b7d62426d4f98771" Oct 11 07:50:24 crc kubenswrapper[5016]: I1011 07:50:24.945293 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lbbb2_48e55d9a-f690-40ae-ba16-e91c4d9d3a95/kube-multus/2.log" Oct 11 07:50:24 crc kubenswrapper[5016]: I1011 07:50:24.945786 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lbbb2" event={"ID":"48e55d9a-f690-40ae-ba16-e91c4d9d3a95","Type":"ContainerStarted","Data":"9a3cda83496c6908322d6eb46e27018c33f38ca22719c0b29df4a230961c263a"} Oct 11 07:50:31 crc kubenswrapper[5016]: I1011 07:50:31.417818 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6tfs5" Oct 11 07:50:44 crc kubenswrapper[5016]: I1011 07:50:44.545110 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn"] Oct 11 07:50:44 crc kubenswrapper[5016]: I1011 07:50:44.547861 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn" Oct 11 07:50:44 crc kubenswrapper[5016]: I1011 07:50:44.551185 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 11 07:50:44 crc kubenswrapper[5016]: I1011 07:50:44.568257 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn"] Oct 11 07:50:44 crc kubenswrapper[5016]: I1011 07:50:44.645390 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8611f6d-0bd0-44e7-a594-cabd1aa63bfd-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn\" (UID: \"c8611f6d-0bd0-44e7-a594-cabd1aa63bfd\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn" Oct 11 07:50:44 crc kubenswrapper[5016]: I1011 07:50:44.645464 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8611f6d-0bd0-44e7-a594-cabd1aa63bfd-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn\" (UID: \"c8611f6d-0bd0-44e7-a594-cabd1aa63bfd\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn" Oct 11 07:50:44 crc kubenswrapper[5016]: I1011 07:50:44.645518 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cppmr\" (UniqueName: \"kubernetes.io/projected/c8611f6d-0bd0-44e7-a594-cabd1aa63bfd-kube-api-access-cppmr\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn\" (UID: \"c8611f6d-0bd0-44e7-a594-cabd1aa63bfd\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn" Oct 11 07:50:44 crc kubenswrapper[5016]: I1011 07:50:44.746373 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8611f6d-0bd0-44e7-a594-cabd1aa63bfd-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn\" (UID: 
\"c8611f6d-0bd0-44e7-a594-cabd1aa63bfd\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn" Oct 11 07:50:44 crc kubenswrapper[5016]: I1011 07:50:44.746432 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8611f6d-0bd0-44e7-a594-cabd1aa63bfd-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn\" (UID: \"c8611f6d-0bd0-44e7-a594-cabd1aa63bfd\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn" Oct 11 07:50:44 crc kubenswrapper[5016]: I1011 07:50:44.746465 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cppmr\" (UniqueName: \"kubernetes.io/projected/c8611f6d-0bd0-44e7-a594-cabd1aa63bfd-kube-api-access-cppmr\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn\" (UID: \"c8611f6d-0bd0-44e7-a594-cabd1aa63bfd\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn" Oct 11 07:50:44 crc kubenswrapper[5016]: I1011 07:50:44.747435 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8611f6d-0bd0-44e7-a594-cabd1aa63bfd-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn\" (UID: \"c8611f6d-0bd0-44e7-a594-cabd1aa63bfd\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn" Oct 11 07:50:44 crc kubenswrapper[5016]: I1011 07:50:44.747490 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8611f6d-0bd0-44e7-a594-cabd1aa63bfd-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn\" (UID: \"c8611f6d-0bd0-44e7-a594-cabd1aa63bfd\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn" Oct 11 07:50:44 crc kubenswrapper[5016]: I1011 07:50:44.779049 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cppmr\" (UniqueName: \"kubernetes.io/projected/c8611f6d-0bd0-44e7-a594-cabd1aa63bfd-kube-api-access-cppmr\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn\" (UID: \"c8611f6d-0bd0-44e7-a594-cabd1aa63bfd\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn" Oct 11 07:50:44 crc kubenswrapper[5016]: I1011 07:50:44.874777 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn" Oct 11 07:50:45 crc kubenswrapper[5016]: I1011 07:50:45.164016 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn"] Oct 11 07:50:45 crc kubenswrapper[5016]: W1011 07:50:45.175804 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8611f6d_0bd0_44e7_a594_cabd1aa63bfd.slice/crio-b46771d569658bd35ef8364a8dd6793b41bed0be6b5cc7ef4154242422243a36 WatchSource:0}: Error finding container b46771d569658bd35ef8364a8dd6793b41bed0be6b5cc7ef4154242422243a36: Status 404 returned error can't find the container with id b46771d569658bd35ef8364a8dd6793b41bed0be6b5cc7ef4154242422243a36 Oct 11 07:50:46 crc kubenswrapper[5016]: I1011 07:50:46.092550 5016 generic.go:334] "Generic (PLEG): container finished" podID="c8611f6d-0bd0-44e7-a594-cabd1aa63bfd" containerID="f90048deb88ae8c70795ba6a7bf623b3625d95e139f3b05dc19613432bba9189" exitCode=0 Oct 11 07:50:46 crc kubenswrapper[5016]: I1011 07:50:46.092648 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn" event={"ID":"c8611f6d-0bd0-44e7-a594-cabd1aa63bfd","Type":"ContainerDied","Data":"f90048deb88ae8c70795ba6a7bf623b3625d95e139f3b05dc19613432bba9189"} Oct 11 07:50:46 crc kubenswrapper[5016]: I1011 07:50:46.092942 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn" event={"ID":"c8611f6d-0bd0-44e7-a594-cabd1aa63bfd","Type":"ContainerStarted","Data":"b46771d569658bd35ef8364a8dd6793b41bed0be6b5cc7ef4154242422243a36"} Oct 11 07:50:48 crc kubenswrapper[5016]: I1011 07:50:48.107191 5016 generic.go:334] "Generic (PLEG): container finished" podID="c8611f6d-0bd0-44e7-a594-cabd1aa63bfd" containerID="23675532f1d898fe0391228d1a18763828530320563249fcab61fe3f54869d1c" exitCode=0 Oct 11 07:50:48 crc kubenswrapper[5016]: I1011 07:50:48.107260 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn" event={"ID":"c8611f6d-0bd0-44e7-a594-cabd1aa63bfd","Type":"ContainerDied","Data":"23675532f1d898fe0391228d1a18763828530320563249fcab61fe3f54869d1c"} Oct 11 07:50:49 crc kubenswrapper[5016]: I1011 07:50:49.115384 5016 generic.go:334] "Generic (PLEG): container finished" podID="c8611f6d-0bd0-44e7-a594-cabd1aa63bfd" containerID="34133742cc927f7dcd607a7f1c4095e366eecfeafc6ad191480ffe10445337d8" exitCode=0 Oct 11 07:50:49 crc kubenswrapper[5016]: I1011 07:50:49.115437 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn" event={"ID":"c8611f6d-0bd0-44e7-a594-cabd1aa63bfd","Type":"ContainerDied","Data":"34133742cc927f7dcd607a7f1c4095e366eecfeafc6ad191480ffe10445337d8"} Oct 11 07:50:50 crc kubenswrapper[5016]: I1011 07:50:50.436746 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn" Oct 11 07:50:50 crc kubenswrapper[5016]: I1011 07:50:50.628886 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8611f6d-0bd0-44e7-a594-cabd1aa63bfd-util\") pod \"c8611f6d-0bd0-44e7-a594-cabd1aa63bfd\" (UID: \"c8611f6d-0bd0-44e7-a594-cabd1aa63bfd\") " Oct 11 07:50:50 crc kubenswrapper[5016]: I1011 07:50:50.629238 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cppmr\" (UniqueName: \"kubernetes.io/projected/c8611f6d-0bd0-44e7-a594-cabd1aa63bfd-kube-api-access-cppmr\") pod \"c8611f6d-0bd0-44e7-a594-cabd1aa63bfd\" (UID: \"c8611f6d-0bd0-44e7-a594-cabd1aa63bfd\") " Oct 11 07:50:50 crc kubenswrapper[5016]: I1011 07:50:50.629548 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8611f6d-0bd0-44e7-a594-cabd1aa63bfd-bundle\") pod \"c8611f6d-0bd0-44e7-a594-cabd1aa63bfd\" (UID: \"c8611f6d-0bd0-44e7-a594-cabd1aa63bfd\") " Oct 11 07:50:50 crc kubenswrapper[5016]: I1011 07:50:50.630379 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8611f6d-0bd0-44e7-a594-cabd1aa63bfd-bundle" (OuterVolumeSpecName: "bundle") pod "c8611f6d-0bd0-44e7-a594-cabd1aa63bfd" (UID: "c8611f6d-0bd0-44e7-a594-cabd1aa63bfd"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:50:50 crc kubenswrapper[5016]: I1011 07:50:50.638615 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8611f6d-0bd0-44e7-a594-cabd1aa63bfd-kube-api-access-cppmr" (OuterVolumeSpecName: "kube-api-access-cppmr") pod "c8611f6d-0bd0-44e7-a594-cabd1aa63bfd" (UID: "c8611f6d-0bd0-44e7-a594-cabd1aa63bfd"). InnerVolumeSpecName "kube-api-access-cppmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:50:50 crc kubenswrapper[5016]: I1011 07:50:50.730874 5016 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c8611f6d-0bd0-44e7-a594-cabd1aa63bfd-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:50:50 crc kubenswrapper[5016]: I1011 07:50:50.731336 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cppmr\" (UniqueName: \"kubernetes.io/projected/c8611f6d-0bd0-44e7-a594-cabd1aa63bfd-kube-api-access-cppmr\") on node \"crc\" DevicePath \"\"" Oct 11 07:50:51 crc kubenswrapper[5016]: I1011 07:50:51.131371 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn" event={"ID":"c8611f6d-0bd0-44e7-a594-cabd1aa63bfd","Type":"ContainerDied","Data":"b46771d569658bd35ef8364a8dd6793b41bed0be6b5cc7ef4154242422243a36"} Oct 11 07:50:51 crc kubenswrapper[5016]: I1011 07:50:51.131430 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b46771d569658bd35ef8364a8dd6793b41bed0be6b5cc7ef4154242422243a36" Oct 11 07:50:51 crc kubenswrapper[5016]: I1011 07:50:51.131480 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn" Oct 11 07:50:51 crc kubenswrapper[5016]: I1011 07:50:51.224785 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8611f6d-0bd0-44e7-a594-cabd1aa63bfd-util" (OuterVolumeSpecName: "util") pod "c8611f6d-0bd0-44e7-a594-cabd1aa63bfd" (UID: "c8611f6d-0bd0-44e7-a594-cabd1aa63bfd"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:50:51 crc kubenswrapper[5016]: I1011 07:50:51.237844 5016 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c8611f6d-0bd0-44e7-a594-cabd1aa63bfd-util\") on node \"crc\" DevicePath \"\"" Oct 11 07:50:52 crc kubenswrapper[5016]: I1011 07:50:52.017682 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-vfsmr"] Oct 11 07:50:52 crc kubenswrapper[5016]: E1011 07:50:52.017880 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8611f6d-0bd0-44e7-a594-cabd1aa63bfd" containerName="util" Oct 11 07:50:52 crc kubenswrapper[5016]: I1011 07:50:52.017892 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8611f6d-0bd0-44e7-a594-cabd1aa63bfd" containerName="util" Oct 11 07:50:52 crc kubenswrapper[5016]: E1011 07:50:52.017902 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8611f6d-0bd0-44e7-a594-cabd1aa63bfd" containerName="extract" Oct 11 07:50:52 crc kubenswrapper[5016]: I1011 07:50:52.017908 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8611f6d-0bd0-44e7-a594-cabd1aa63bfd" containerName="extract" Oct 11 07:50:52 crc kubenswrapper[5016]: E1011 07:50:52.017926 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8611f6d-0bd0-44e7-a594-cabd1aa63bfd" containerName="pull" Oct 11 07:50:52 crc kubenswrapper[5016]: I1011 07:50:52.017933 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8611f6d-0bd0-44e7-a594-cabd1aa63bfd" containerName="pull" Oct 11 07:50:52 crc kubenswrapper[5016]: I1011 07:50:52.018026 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8611f6d-0bd0-44e7-a594-cabd1aa63bfd" containerName="extract" Oct 11 07:50:52 crc kubenswrapper[5016]: I1011 07:50:52.018370 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-vfsmr" Oct 11 07:50:52 crc kubenswrapper[5016]: I1011 07:50:52.020890 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Oct 11 07:50:52 crc kubenswrapper[5016]: I1011 07:50:52.021025 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Oct 11 07:50:52 crc kubenswrapper[5016]: I1011 07:50:52.021102 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-2k9nt" Oct 11 07:50:52 crc kubenswrapper[5016]: I1011 07:50:52.029555 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-vfsmr"] Oct 11 07:50:52 crc kubenswrapper[5016]: I1011 07:50:52.048177 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnxl6\" (UniqueName: \"kubernetes.io/projected/81c50358-e340-44b6-b6d4-8edb8ce9b712-kube-api-access-gnxl6\") pod \"nmstate-operator-858ddd8f98-vfsmr\" (UID: \"81c50358-e340-44b6-b6d4-8edb8ce9b712\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-vfsmr" Oct 11 07:50:52 crc kubenswrapper[5016]: I1011 07:50:52.149228 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnxl6\" (UniqueName: \"kubernetes.io/projected/81c50358-e340-44b6-b6d4-8edb8ce9b712-kube-api-access-gnxl6\") pod \"nmstate-operator-858ddd8f98-vfsmr\" (UID: \"81c50358-e340-44b6-b6d4-8edb8ce9b712\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-vfsmr" Oct 11 07:50:52 crc kubenswrapper[5016]: I1011 07:50:52.171571 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnxl6\" (UniqueName: \"kubernetes.io/projected/81c50358-e340-44b6-b6d4-8edb8ce9b712-kube-api-access-gnxl6\") pod \"nmstate-operator-858ddd8f98-vfsmr\" (UID: \"81c50358-e340-44b6-b6d4-8edb8ce9b712\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-vfsmr" Oct 11 07:50:52 crc kubenswrapper[5016]: I1011 07:50:52.330236 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-vfsmr" Oct 11 07:50:52 crc kubenswrapper[5016]: I1011 07:50:52.548615 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-vfsmr"] Oct 11 07:50:52 crc kubenswrapper[5016]: W1011 07:50:52.550048 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81c50358_e340_44b6_b6d4_8edb8ce9b712.slice/crio-6da054a2080eca8b4e3e9722fedf94b80ea95e243129c6a8a64c7360db32f867 WatchSource:0}: Error finding container 6da054a2080eca8b4e3e9722fedf94b80ea95e243129c6a8a64c7360db32f867: Status 404 returned error can't find the container with id 6da054a2080eca8b4e3e9722fedf94b80ea95e243129c6a8a64c7360db32f867 Oct 11 07:50:53 crc kubenswrapper[5016]: I1011 07:50:53.158980 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-vfsmr" event={"ID":"81c50358-e340-44b6-b6d4-8edb8ce9b712","Type":"ContainerStarted","Data":"6da054a2080eca8b4e3e9722fedf94b80ea95e243129c6a8a64c7360db32f867"} Oct 11 07:50:55 crc kubenswrapper[5016]: I1011 07:50:55.173138 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-vfsmr" event={"ID":"81c50358-e340-44b6-b6d4-8edb8ce9b712","Type":"ContainerStarted","Data":"5399ea1c85dec2e9e845c246b1c91f61639efb15a3392838d7616f21eb93bb69"} Oct 11 07:50:55 crc kubenswrapper[5016]: I1011 07:50:55.191541 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-858ddd8f98-vfsmr" podStartSLOduration=0.966260241 podStartE2EDuration="3.191520401s" podCreationTimestamp="2025-10-11 07:50:52 +0000 UTC" firstStartedPulling="2025-10-11 07:50:52.551823173 +0000 UTC m=+640.452279139" lastFinishedPulling="2025-10-11 07:50:54.777083353 +0000 UTC m=+642.677539299" observedRunningTime="2025-10-11 07:50:55.188787214 +0000 UTC m=+643.089243200" watchObservedRunningTime="2025-10-11 07:50:55.191520401 +0000 UTC m=+643.091976377" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.259112 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-qbbbk"] Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.260561 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-qbbbk" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.263011 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-9b6fq" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.264583 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-56bkg"] Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.265378 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-56bkg" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.268391 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.275298 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-56bkg"] Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.280281 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-qbbbk"] Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.314754 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-kcw7v"] Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.315532 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-kcw7v" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.321601 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9hs2\" (UniqueName: \"kubernetes.io/projected/33757068-b0e8-4558-8190-b98cd699b641-kube-api-access-l9hs2\") pod \"nmstate-handler-kcw7v\" (UID: \"33757068-b0e8-4558-8190-b98cd699b641\") " pod="openshift-nmstate/nmstate-handler-kcw7v" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.321645 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/33757068-b0e8-4558-8190-b98cd699b641-dbus-socket\") pod \"nmstate-handler-kcw7v\" (UID: \"33757068-b0e8-4558-8190-b98cd699b641\") " pod="openshift-nmstate/nmstate-handler-kcw7v" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.321752 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/33757068-b0e8-4558-8190-b98cd699b641-nmstate-lock\") pod \"nmstate-handler-kcw7v\" (UID: \"33757068-b0e8-4558-8190-b98cd699b641\") " pod="openshift-nmstate/nmstate-handler-kcw7v" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.321821 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhjgn\" (UniqueName: \"kubernetes.io/projected/89d255e9-3579-4fc6-a528-c87e21b5f9a4-kube-api-access-nhjgn\") pod \"nmstate-metrics-fdff9cb8d-qbbbk\" (UID: \"89d255e9-3579-4fc6-a528-c87e21b5f9a4\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-qbbbk" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.321922 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8h8w\" (UniqueName: \"kubernetes.io/projected/3c5ef3cc-6a0f-4187-8c24-db02d2144312-kube-api-access-t8h8w\") pod \"nmstate-webhook-6cdbc54649-56bkg\" (UID: \"3c5ef3cc-6a0f-4187-8c24-db02d2144312\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-56bkg" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.321970 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/33757068-b0e8-4558-8190-b98cd699b641-ovs-socket\") pod \"nmstate-handler-kcw7v\" (UID: \"33757068-b0e8-4558-8190-b98cd699b641\") " pod="openshift-nmstate/nmstate-handler-kcw7v" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.321997 5016 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3c5ef3cc-6a0f-4187-8c24-db02d2144312-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-56bkg\" (UID: \"3c5ef3cc-6a0f-4187-8c24-db02d2144312\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-56bkg" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.411362 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-gtjdp"] Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.412194 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-gtjdp" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.419952 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-gtjdp"] Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.420358 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.420933 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-4m89t" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.421046 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.431572 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/33757068-b0e8-4558-8190-b98cd699b641-nmstate-lock\") pod \"nmstate-handler-kcw7v\" (UID: \"33757068-b0e8-4558-8190-b98cd699b641\") " pod="openshift-nmstate/nmstate-handler-kcw7v" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.431645 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhjgn\" (UniqueName: \"kubernetes.io/projected/89d255e9-3579-4fc6-a528-c87e21b5f9a4-kube-api-access-nhjgn\") pod \"nmstate-metrics-fdff9cb8d-qbbbk\" (UID: \"89d255e9-3579-4fc6-a528-c87e21b5f9a4\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-qbbbk" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.431722 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/33757068-b0e8-4558-8190-b98cd699b641-nmstate-lock\") pod \"nmstate-handler-kcw7v\" (UID: \"33757068-b0e8-4558-8190-b98cd699b641\") " pod="openshift-nmstate/nmstate-handler-kcw7v" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.431957 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8h8w\" (UniqueName: \"kubernetes.io/projected/3c5ef3cc-6a0f-4187-8c24-db02d2144312-kube-api-access-t8h8w\") pod \"nmstate-webhook-6cdbc54649-56bkg\" (UID: \"3c5ef3cc-6a0f-4187-8c24-db02d2144312\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-56bkg" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.432004 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/33757068-b0e8-4558-8190-b98cd699b641-ovs-socket\") pod \"nmstate-handler-kcw7v\" (UID: \"33757068-b0e8-4558-8190-b98cd699b641\") " pod="openshift-nmstate/nmstate-handler-kcw7v" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.432045 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: 
\"kubernetes.io/secret/3c5ef3cc-6a0f-4187-8c24-db02d2144312-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-56bkg\" (UID: \"3c5ef3cc-6a0f-4187-8c24-db02d2144312\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-56bkg" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.432086 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9hs2\" (UniqueName: \"kubernetes.io/projected/33757068-b0e8-4558-8190-b98cd699b641-kube-api-access-l9hs2\") pod \"nmstate-handler-kcw7v\" (UID: \"33757068-b0e8-4558-8190-b98cd699b641\") " pod="openshift-nmstate/nmstate-handler-kcw7v" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.432165 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/33757068-b0e8-4558-8190-b98cd699b641-dbus-socket\") pod \"nmstate-handler-kcw7v\" (UID: \"33757068-b0e8-4558-8190-b98cd699b641\") " pod="openshift-nmstate/nmstate-handler-kcw7v" Oct 11 07:50:56 crc kubenswrapper[5016]: E1011 07:50:56.432197 5016 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Oct 11 07:50:56 crc kubenswrapper[5016]: E1011 07:50:56.432250 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c5ef3cc-6a0f-4187-8c24-db02d2144312-tls-key-pair podName:3c5ef3cc-6a0f-4187-8c24-db02d2144312 nodeName:}" failed. No retries permitted until 2025-10-11 07:50:56.932232064 +0000 UTC m=+644.832688100 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/3c5ef3cc-6a0f-4187-8c24-db02d2144312-tls-key-pair") pod "nmstate-webhook-6cdbc54649-56bkg" (UID: "3c5ef3cc-6a0f-4187-8c24-db02d2144312") : secret "openshift-nmstate-webhook" not found Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.432112 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/33757068-b0e8-4558-8190-b98cd699b641-ovs-socket\") pod \"nmstate-handler-kcw7v\" (UID: \"33757068-b0e8-4558-8190-b98cd699b641\") " pod="openshift-nmstate/nmstate-handler-kcw7v" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.432561 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/33757068-b0e8-4558-8190-b98cd699b641-dbus-socket\") pod \"nmstate-handler-kcw7v\" (UID: \"33757068-b0e8-4558-8190-b98cd699b641\") " pod="openshift-nmstate/nmstate-handler-kcw7v" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.454333 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8h8w\" (UniqueName: \"kubernetes.io/projected/3c5ef3cc-6a0f-4187-8c24-db02d2144312-kube-api-access-t8h8w\") pod \"nmstate-webhook-6cdbc54649-56bkg\" (UID: \"3c5ef3cc-6a0f-4187-8c24-db02d2144312\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-56bkg" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.454354 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhjgn\" (UniqueName: \"kubernetes.io/projected/89d255e9-3579-4fc6-a528-c87e21b5f9a4-kube-api-access-nhjgn\") pod \"nmstate-metrics-fdff9cb8d-qbbbk\" (UID: \"89d255e9-3579-4fc6-a528-c87e21b5f9a4\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-qbbbk" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.473789 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-l9hs2\" (UniqueName: \"kubernetes.io/projected/33757068-b0e8-4558-8190-b98cd699b641-kube-api-access-l9hs2\") pod \"nmstate-handler-kcw7v\" (UID: \"33757068-b0e8-4558-8190-b98cd699b641\") " pod="openshift-nmstate/nmstate-handler-kcw7v" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.532914 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d5699eaf-722d-46fd-8bca-5f845e0b5b3c-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-gtjdp\" (UID: \"d5699eaf-722d-46fd-8bca-5f845e0b5b3c\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-gtjdp" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.532964 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d5699eaf-722d-46fd-8bca-5f845e0b5b3c-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-gtjdp\" (UID: \"d5699eaf-722d-46fd-8bca-5f845e0b5b3c\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-gtjdp" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.533002 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfmcq\" (UniqueName: \"kubernetes.io/projected/d5699eaf-722d-46fd-8bca-5f845e0b5b3c-kube-api-access-cfmcq\") pod \"nmstate-console-plugin-6b874cbd85-gtjdp\" (UID: \"d5699eaf-722d-46fd-8bca-5f845e0b5b3c\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-gtjdp" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.579128 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-qbbbk" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.594887 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7d46cdf4c7-kr94x"] Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.595759 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7d46cdf4c7-kr94x" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.627184 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-kcw7v" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.634375 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d5699eaf-722d-46fd-8bca-5f845e0b5b3c-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-gtjdp\" (UID: \"d5699eaf-722d-46fd-8bca-5f845e0b5b3c\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-gtjdp" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.634416 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d5699eaf-722d-46fd-8bca-5f845e0b5b3c-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-gtjdp\" (UID: \"d5699eaf-722d-46fd-8bca-5f845e0b5b3c\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-gtjdp" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.634458 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfmcq\" (UniqueName: \"kubernetes.io/projected/d5699eaf-722d-46fd-8bca-5f845e0b5b3c-kube-api-access-cfmcq\") pod \"nmstate-console-plugin-6b874cbd85-gtjdp\" (UID: \"d5699eaf-722d-46fd-8bca-5f845e0b5b3c\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-gtjdp" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.636930 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d5699eaf-722d-46fd-8bca-5f845e0b5b3c-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-gtjdp\" (UID: \"d5699eaf-722d-46fd-8bca-5f845e0b5b3c\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-gtjdp" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.640005 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d5699eaf-722d-46fd-8bca-5f845e0b5b3c-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-gtjdp\" (UID: \"d5699eaf-722d-46fd-8bca-5f845e0b5b3c\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-gtjdp" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.650103 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7d46cdf4c7-kr94x"] Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.659330 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfmcq\" (UniqueName: \"kubernetes.io/projected/d5699eaf-722d-46fd-8bca-5f845e0b5b3c-kube-api-access-cfmcq\") pod \"nmstate-console-plugin-6b874cbd85-gtjdp\" (UID: \"d5699eaf-722d-46fd-8bca-5f845e0b5b3c\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-gtjdp" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.735621 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b18eb86c-ca67-4c44-a023-c18c6f42490a-oauth-serving-cert\") pod \"console-7d46cdf4c7-kr94x\" (UID: \"b18eb86c-ca67-4c44-a023-c18c6f42490a\") " pod="openshift-console/console-7d46cdf4c7-kr94x" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.735919 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b18eb86c-ca67-4c44-a023-c18c6f42490a-console-serving-cert\") pod \"console-7d46cdf4c7-kr94x\" (UID: \"b18eb86c-ca67-4c44-a023-c18c6f42490a\") " 
pod="openshift-console/console-7d46cdf4c7-kr94x" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.735959 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-557g4\" (UniqueName: \"kubernetes.io/projected/b18eb86c-ca67-4c44-a023-c18c6f42490a-kube-api-access-557g4\") pod \"console-7d46cdf4c7-kr94x\" (UID: \"b18eb86c-ca67-4c44-a023-c18c6f42490a\") " pod="openshift-console/console-7d46cdf4c7-kr94x" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.736012 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b18eb86c-ca67-4c44-a023-c18c6f42490a-service-ca\") pod \"console-7d46cdf4c7-kr94x\" (UID: \"b18eb86c-ca67-4c44-a023-c18c6f42490a\") " pod="openshift-console/console-7d46cdf4c7-kr94x" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.736044 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b18eb86c-ca67-4c44-a023-c18c6f42490a-console-config\") pod \"console-7d46cdf4c7-kr94x\" (UID: \"b18eb86c-ca67-4c44-a023-c18c6f42490a\") " pod="openshift-console/console-7d46cdf4c7-kr94x" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.736095 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b18eb86c-ca67-4c44-a023-c18c6f42490a-trusted-ca-bundle\") pod \"console-7d46cdf4c7-kr94x\" (UID: \"b18eb86c-ca67-4c44-a023-c18c6f42490a\") " pod="openshift-console/console-7d46cdf4c7-kr94x" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.736140 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b18eb86c-ca67-4c44-a023-c18c6f42490a-console-oauth-config\") pod \"console-7d46cdf4c7-kr94x\" (UID: \"b18eb86c-ca67-4c44-a023-c18c6f42490a\") " pod="openshift-console/console-7d46cdf4c7-kr94x" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.743060 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-gtjdp" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.777434 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-qbbbk"] Oct 11 07:50:56 crc kubenswrapper[5016]: W1011 07:50:56.787290 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89d255e9_3579_4fc6_a528_c87e21b5f9a4.slice/crio-4c3b2098635d0284e5aae3fdd294e61a5f131fc90efcbe5caa550a038f1e01b2 WatchSource:0}: Error finding container 4c3b2098635d0284e5aae3fdd294e61a5f131fc90efcbe5caa550a038f1e01b2: Status 404 returned error can't find the container with id 4c3b2098635d0284e5aae3fdd294e61a5f131fc90efcbe5caa550a038f1e01b2 Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.837272 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b18eb86c-ca67-4c44-a023-c18c6f42490a-console-serving-cert\") pod \"console-7d46cdf4c7-kr94x\" (UID: \"b18eb86c-ca67-4c44-a023-c18c6f42490a\") " pod="openshift-console/console-7d46cdf4c7-kr94x" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.837316 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b18eb86c-ca67-4c44-a023-c18c6f42490a-oauth-serving-cert\") pod \"console-7d46cdf4c7-kr94x\" (UID: \"b18eb86c-ca67-4c44-a023-c18c6f42490a\") " pod="openshift-console/console-7d46cdf4c7-kr94x" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.837346 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-557g4\" (UniqueName: \"kubernetes.io/projected/b18eb86c-ca67-4c44-a023-c18c6f42490a-kube-api-access-557g4\") pod \"console-7d46cdf4c7-kr94x\" (UID: \"b18eb86c-ca67-4c44-a023-c18c6f42490a\") " pod="openshift-console/console-7d46cdf4c7-kr94x" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.837380 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b18eb86c-ca67-4c44-a023-c18c6f42490a-console-config\") pod \"console-7d46cdf4c7-kr94x\" (UID: \"b18eb86c-ca67-4c44-a023-c18c6f42490a\") " pod="openshift-console/console-7d46cdf4c7-kr94x" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.837395 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b18eb86c-ca67-4c44-a023-c18c6f42490a-service-ca\") pod \"console-7d46cdf4c7-kr94x\" (UID: \"b18eb86c-ca67-4c44-a023-c18c6f42490a\") " pod="openshift-console/console-7d46cdf4c7-kr94x" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.837431 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b18eb86c-ca67-4c44-a023-c18c6f42490a-trusted-ca-bundle\") pod \"console-7d46cdf4c7-kr94x\" (UID: \"b18eb86c-ca67-4c44-a023-c18c6f42490a\") " pod="openshift-console/console-7d46cdf4c7-kr94x" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.837470 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b18eb86c-ca67-4c44-a023-c18c6f42490a-console-oauth-config\") pod \"console-7d46cdf4c7-kr94x\" (UID: \"b18eb86c-ca67-4c44-a023-c18c6f42490a\") " 
pod="openshift-console/console-7d46cdf4c7-kr94x" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.839031 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b18eb86c-ca67-4c44-a023-c18c6f42490a-oauth-serving-cert\") pod \"console-7d46cdf4c7-kr94x\" (UID: \"b18eb86c-ca67-4c44-a023-c18c6f42490a\") " pod="openshift-console/console-7d46cdf4c7-kr94x" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.839486 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b18eb86c-ca67-4c44-a023-c18c6f42490a-service-ca\") pod \"console-7d46cdf4c7-kr94x\" (UID: \"b18eb86c-ca67-4c44-a023-c18c6f42490a\") " pod="openshift-console/console-7d46cdf4c7-kr94x" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.839731 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b18eb86c-ca67-4c44-a023-c18c6f42490a-trusted-ca-bundle\") pod \"console-7d46cdf4c7-kr94x\" (UID: \"b18eb86c-ca67-4c44-a023-c18c6f42490a\") " pod="openshift-console/console-7d46cdf4c7-kr94x" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.840214 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b18eb86c-ca67-4c44-a023-c18c6f42490a-console-config\") pod \"console-7d46cdf4c7-kr94x\" (UID: \"b18eb86c-ca67-4c44-a023-c18c6f42490a\") " pod="openshift-console/console-7d46cdf4c7-kr94x" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.842276 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b18eb86c-ca67-4c44-a023-c18c6f42490a-console-oauth-config\") pod \"console-7d46cdf4c7-kr94x\" (UID: \"b18eb86c-ca67-4c44-a023-c18c6f42490a\") " pod="openshift-console/console-7d46cdf4c7-kr94x" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.842504 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b18eb86c-ca67-4c44-a023-c18c6f42490a-console-serving-cert\") pod \"console-7d46cdf4c7-kr94x\" (UID: \"b18eb86c-ca67-4c44-a023-c18c6f42490a\") " pod="openshift-console/console-7d46cdf4c7-kr94x" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.856524 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-557g4\" (UniqueName: \"kubernetes.io/projected/b18eb86c-ca67-4c44-a023-c18c6f42490a-kube-api-access-557g4\") pod \"console-7d46cdf4c7-kr94x\" (UID: \"b18eb86c-ca67-4c44-a023-c18c6f42490a\") " pod="openshift-console/console-7d46cdf4c7-kr94x" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.918030 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-gtjdp"] Oct 11 07:50:56 crc kubenswrapper[5016]: W1011 07:50:56.922853 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5699eaf_722d_46fd_8bca_5f845e0b5b3c.slice/crio-6c5ed760751b8e5d95db19cf66f1d566559a3cd6b1bc29b17036b02ba4110398 WatchSource:0}: Error finding container 6c5ed760751b8e5d95db19cf66f1d566559a3cd6b1bc29b17036b02ba4110398: Status 404 returned error can't find the container with id 6c5ed760751b8e5d95db19cf66f1d566559a3cd6b1bc29b17036b02ba4110398 Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.937380 5016 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7d46cdf4c7-kr94x" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.938500 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3c5ef3cc-6a0f-4187-8c24-db02d2144312-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-56bkg\" (UID: \"3c5ef3cc-6a0f-4187-8c24-db02d2144312\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-56bkg" Oct 11 07:50:56 crc kubenswrapper[5016]: I1011 07:50:56.941212 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3c5ef3cc-6a0f-4187-8c24-db02d2144312-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-56bkg\" (UID: \"3c5ef3cc-6a0f-4187-8c24-db02d2144312\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-56bkg" Oct 11 07:50:57 crc kubenswrapper[5016]: I1011 07:50:57.187047 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-56bkg" Oct 11 07:50:57 crc kubenswrapper[5016]: I1011 07:50:57.187838 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-gtjdp" event={"ID":"d5699eaf-722d-46fd-8bca-5f845e0b5b3c","Type":"ContainerStarted","Data":"6c5ed760751b8e5d95db19cf66f1d566559a3cd6b1bc29b17036b02ba4110398"} Oct 11 07:50:57 crc kubenswrapper[5016]: I1011 07:50:57.189146 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-qbbbk" event={"ID":"89d255e9-3579-4fc6-a528-c87e21b5f9a4","Type":"ContainerStarted","Data":"4c3b2098635d0284e5aae3fdd294e61a5f131fc90efcbe5caa550a038f1e01b2"} Oct 11 07:50:57 crc kubenswrapper[5016]: I1011 07:50:57.191960 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-kcw7v" event={"ID":"33757068-b0e8-4558-8190-b98cd699b641","Type":"ContainerStarted","Data":"bf554478826c19f0ae28d70c5d3cd930feaa1e1ca682a6957e5f1951f55e36b4"} Oct 11 07:50:57 crc kubenswrapper[5016]: I1011 07:50:57.305166 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7d46cdf4c7-kr94x"] Oct 11 07:50:57 crc kubenswrapper[5016]: W1011 07:50:57.310486 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb18eb86c_ca67_4c44_a023_c18c6f42490a.slice/crio-042f6a793e8e2e3b2ede987e3c949dd92cd6992517a20ab238526a10752cdc19 WatchSource:0}: Error finding container 042f6a793e8e2e3b2ede987e3c949dd92cd6992517a20ab238526a10752cdc19: Status 404 returned error can't find the container with id 042f6a793e8e2e3b2ede987e3c949dd92cd6992517a20ab238526a10752cdc19 Oct 11 07:50:57 crc kubenswrapper[5016]: I1011 07:50:57.375672 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-56bkg"] Oct 11 07:50:58 crc kubenswrapper[5016]: I1011 07:50:58.198569 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-56bkg" event={"ID":"3c5ef3cc-6a0f-4187-8c24-db02d2144312","Type":"ContainerStarted","Data":"92fc3f9f22db791592575bbe2c74cf2f380d7fc6b484d3ab349f15969e8634fd"} Oct 11 07:50:58 crc kubenswrapper[5016]: I1011 07:50:58.200078 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7d46cdf4c7-kr94x" 
event={"ID":"b18eb86c-ca67-4c44-a023-c18c6f42490a","Type":"ContainerStarted","Data":"0d0d33b7e43638536a9c6296899ef593c6c1e8011829e054cb3b0344817ec732"} Oct 11 07:50:58 crc kubenswrapper[5016]: I1011 07:50:58.200103 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7d46cdf4c7-kr94x" event={"ID":"b18eb86c-ca67-4c44-a023-c18c6f42490a","Type":"ContainerStarted","Data":"042f6a793e8e2e3b2ede987e3c949dd92cd6992517a20ab238526a10752cdc19"} Oct 11 07:51:00 crc kubenswrapper[5016]: I1011 07:51:00.212178 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-56bkg" event={"ID":"3c5ef3cc-6a0f-4187-8c24-db02d2144312","Type":"ContainerStarted","Data":"54684bb07bd7c170bff0b87eaa164ff615eae16598fc7bd88304fcb9061736ef"} Oct 11 07:51:00 crc kubenswrapper[5016]: I1011 07:51:00.212712 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-56bkg" Oct 11 07:51:00 crc kubenswrapper[5016]: I1011 07:51:00.214250 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-gtjdp" event={"ID":"d5699eaf-722d-46fd-8bca-5f845e0b5b3c","Type":"ContainerStarted","Data":"01b37f0152000ed8cd97611c588b1cdd05d33dc721bef229a5e096facaf48296"} Oct 11 07:51:00 crc kubenswrapper[5016]: I1011 07:51:00.217512 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-qbbbk" event={"ID":"89d255e9-3579-4fc6-a528-c87e21b5f9a4","Type":"ContainerStarted","Data":"6408ee44dd76c3abd31eb0338e450356a7a936d41dfb9677c85b40e708a69ed5"} Oct 11 07:51:00 crc kubenswrapper[5016]: I1011 07:51:00.219054 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-kcw7v" event={"ID":"33757068-b0e8-4558-8190-b98cd699b641","Type":"ContainerStarted","Data":"a467795812c7e0a6b90edc2e35f918c9ff57d653ceec2a41c8368dd26a892623"} Oct 11 07:51:00 crc kubenswrapper[5016]: I1011 07:51:00.219297 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-kcw7v" Oct 11 07:51:00 crc kubenswrapper[5016]: I1011 07:51:00.233152 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-56bkg" podStartSLOduration=2.399990922 podStartE2EDuration="4.233133954s" podCreationTimestamp="2025-10-11 07:50:56 +0000 UTC" firstStartedPulling="2025-10-11 07:50:57.393415883 +0000 UTC m=+645.293871839" lastFinishedPulling="2025-10-11 07:50:59.226558885 +0000 UTC m=+647.127014871" observedRunningTime="2025-10-11 07:51:00.232274699 +0000 UTC m=+648.132730645" watchObservedRunningTime="2025-10-11 07:51:00.233133954 +0000 UTC m=+648.133589890" Oct 11 07:51:00 crc kubenswrapper[5016]: I1011 07:51:00.233729 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7d46cdf4c7-kr94x" podStartSLOduration=4.23371927 podStartE2EDuration="4.23371927s" podCreationTimestamp="2025-10-11 07:50:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:50:58.216851847 +0000 UTC m=+646.117307813" watchObservedRunningTime="2025-10-11 07:51:00.23371927 +0000 UTC m=+648.134175216" Oct 11 07:51:00 crc kubenswrapper[5016]: I1011 07:51:00.279787 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-kcw7v" 
podStartSLOduration=1.735622184 podStartE2EDuration="4.279768409s" podCreationTimestamp="2025-10-11 07:50:56 +0000 UTC" firstStartedPulling="2025-10-11 07:50:56.678637483 +0000 UTC m=+644.579093429" lastFinishedPulling="2025-10-11 07:50:59.222783668 +0000 UTC m=+647.123239654" observedRunningTime="2025-10-11 07:51:00.253175159 +0000 UTC m=+648.153631175" watchObservedRunningTime="2025-10-11 07:51:00.279768409 +0000 UTC m=+648.180224355" Oct 11 07:51:00 crc kubenswrapper[5016]: I1011 07:51:00.281339 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-gtjdp" podStartSLOduration=1.9872828820000001 podStartE2EDuration="4.281329233s" podCreationTimestamp="2025-10-11 07:50:56 +0000 UTC" firstStartedPulling="2025-10-11 07:50:56.925467725 +0000 UTC m=+644.825923661" lastFinishedPulling="2025-10-11 07:50:59.219514026 +0000 UTC m=+647.119970012" observedRunningTime="2025-10-11 07:51:00.275264412 +0000 UTC m=+648.175720398" watchObservedRunningTime="2025-10-11 07:51:00.281329233 +0000 UTC m=+648.181785199" Oct 11 07:51:02 crc kubenswrapper[5016]: I1011 07:51:02.233982 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-qbbbk" event={"ID":"89d255e9-3579-4fc6-a528-c87e21b5f9a4","Type":"ContainerStarted","Data":"34bceb1744860e9ddbb5db8c8f71fe2ba7cf1fe82f6b4650fc23aeba978cee29"} Oct 11 07:51:02 crc kubenswrapper[5016]: I1011 07:51:02.266016 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-qbbbk" podStartSLOduration=1.668420699 podStartE2EDuration="6.265955063s" podCreationTimestamp="2025-10-11 07:50:56 +0000 UTC" firstStartedPulling="2025-10-11 07:50:56.789731086 +0000 UTC m=+644.690187032" lastFinishedPulling="2025-10-11 07:51:01.38726544 +0000 UTC m=+649.287721396" observedRunningTime="2025-10-11 07:51:02.257953069 +0000 UTC m=+650.158409025" watchObservedRunningTime="2025-10-11 07:51:02.265955063 +0000 UTC m=+650.166411049" Oct 11 07:51:06 crc kubenswrapper[5016]: I1011 07:51:06.670719 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-kcw7v" Oct 11 07:51:06 crc kubenswrapper[5016]: I1011 07:51:06.938645 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7d46cdf4c7-kr94x" Oct 11 07:51:06 crc kubenswrapper[5016]: I1011 07:51:06.939083 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7d46cdf4c7-kr94x" Oct 11 07:51:06 crc kubenswrapper[5016]: I1011 07:51:06.946989 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7d46cdf4c7-kr94x" Oct 11 07:51:07 crc kubenswrapper[5016]: I1011 07:51:07.275371 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7d46cdf4c7-kr94x" Oct 11 07:51:07 crc kubenswrapper[5016]: I1011 07:51:07.344420 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-vmvvh"] Oct 11 07:51:17 crc kubenswrapper[5016]: I1011 07:51:17.199024 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-56bkg" Oct 11 07:51:31 crc kubenswrapper[5016]: I1011 07:51:31.366392 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6"] Oct 11 
07:51:31 crc kubenswrapper[5016]: I1011 07:51:31.369697 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6" Oct 11 07:51:31 crc kubenswrapper[5016]: I1011 07:51:31.372054 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 11 07:51:31 crc kubenswrapper[5016]: I1011 07:51:31.389541 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6"] Oct 11 07:51:31 crc kubenswrapper[5016]: I1011 07:51:31.465757 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ef59f675-6d27-45e7-b50c-5ff68f9d41d2-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6\" (UID: \"ef59f675-6d27-45e7-b50c-5ff68f9d41d2\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6" Oct 11 07:51:31 crc kubenswrapper[5016]: I1011 07:51:31.465805 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs4x2\" (UniqueName: \"kubernetes.io/projected/ef59f675-6d27-45e7-b50c-5ff68f9d41d2-kube-api-access-xs4x2\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6\" (UID: \"ef59f675-6d27-45e7-b50c-5ff68f9d41d2\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6" Oct 11 07:51:31 crc kubenswrapper[5016]: I1011 07:51:31.465976 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ef59f675-6d27-45e7-b50c-5ff68f9d41d2-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6\" (UID: \"ef59f675-6d27-45e7-b50c-5ff68f9d41d2\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6" Oct 11 07:51:31 crc kubenswrapper[5016]: I1011 07:51:31.567196 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs4x2\" (UniqueName: \"kubernetes.io/projected/ef59f675-6d27-45e7-b50c-5ff68f9d41d2-kube-api-access-xs4x2\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6\" (UID: \"ef59f675-6d27-45e7-b50c-5ff68f9d41d2\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6" Oct 11 07:51:31 crc kubenswrapper[5016]: I1011 07:51:31.567357 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ef59f675-6d27-45e7-b50c-5ff68f9d41d2-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6\" (UID: \"ef59f675-6d27-45e7-b50c-5ff68f9d41d2\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6" Oct 11 07:51:31 crc kubenswrapper[5016]: I1011 07:51:31.567431 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ef59f675-6d27-45e7-b50c-5ff68f9d41d2-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6\" (UID: \"ef59f675-6d27-45e7-b50c-5ff68f9d41d2\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6" Oct 11 07:51:31 crc kubenswrapper[5016]: I1011 07:51:31.568475 5016 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ef59f675-6d27-45e7-b50c-5ff68f9d41d2-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6\" (UID: \"ef59f675-6d27-45e7-b50c-5ff68f9d41d2\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6" Oct 11 07:51:31 crc kubenswrapper[5016]: I1011 07:51:31.568570 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ef59f675-6d27-45e7-b50c-5ff68f9d41d2-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6\" (UID: \"ef59f675-6d27-45e7-b50c-5ff68f9d41d2\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6" Oct 11 07:51:31 crc kubenswrapper[5016]: I1011 07:51:31.586762 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs4x2\" (UniqueName: \"kubernetes.io/projected/ef59f675-6d27-45e7-b50c-5ff68f9d41d2-kube-api-access-xs4x2\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6\" (UID: \"ef59f675-6d27-45e7-b50c-5ff68f9d41d2\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6" Oct 11 07:51:31 crc kubenswrapper[5016]: I1011 07:51:31.720304 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6" Oct 11 07:51:32 crc kubenswrapper[5016]: I1011 07:51:32.135174 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6"] Oct 11 07:51:32 crc kubenswrapper[5016]: I1011 07:51:32.433970 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-vmvvh" podUID="eb6630cb-0062-4461-bf51-c45f7e4e7478" containerName="console" containerID="cri-o://b20f405e09b0b6d4755fc5f7176d05f3d0ccee3a0df9ef8ab32278ccfeb233cb" gracePeriod=15 Oct 11 07:51:32 crc kubenswrapper[5016]: I1011 07:51:32.437108 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6" event={"ID":"ef59f675-6d27-45e7-b50c-5ff68f9d41d2","Type":"ContainerStarted","Data":"bca7fa4a6aef3ebb669e992ab216de701ca0639893556407041a84234983beec"} Oct 11 07:51:32 crc kubenswrapper[5016]: I1011 07:51:32.437155 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6" event={"ID":"ef59f675-6d27-45e7-b50c-5ff68f9d41d2","Type":"ContainerStarted","Data":"2b4a130bef6adc99558db477136f3e913b36fa25aff08fabfafba93408f23822"} Oct 11 07:51:32 crc kubenswrapper[5016]: I1011 07:51:32.835750 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-vmvvh_eb6630cb-0062-4461-bf51-c45f7e4e7478/console/0.log" Oct 11 07:51:32 crc kubenswrapper[5016]: I1011 07:51:32.835815 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-vmvvh" Oct 11 07:51:32 crc kubenswrapper[5016]: I1011 07:51:32.888330 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/eb6630cb-0062-4461-bf51-c45f7e4e7478-console-oauth-config\") pod \"eb6630cb-0062-4461-bf51-c45f7e4e7478\" (UID: \"eb6630cb-0062-4461-bf51-c45f7e4e7478\") " Oct 11 07:51:32 crc kubenswrapper[5016]: I1011 07:51:32.888425 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eb6630cb-0062-4461-bf51-c45f7e4e7478-service-ca\") pod \"eb6630cb-0062-4461-bf51-c45f7e4e7478\" (UID: \"eb6630cb-0062-4461-bf51-c45f7e4e7478\") " Oct 11 07:51:32 crc kubenswrapper[5016]: I1011 07:51:32.888455 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/eb6630cb-0062-4461-bf51-c45f7e4e7478-console-config\") pod \"eb6630cb-0062-4461-bf51-c45f7e4e7478\" (UID: \"eb6630cb-0062-4461-bf51-c45f7e4e7478\") " Oct 11 07:51:32 crc kubenswrapper[5016]: I1011 07:51:32.888520 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/eb6630cb-0062-4461-bf51-c45f7e4e7478-oauth-serving-cert\") pod \"eb6630cb-0062-4461-bf51-c45f7e4e7478\" (UID: \"eb6630cb-0062-4461-bf51-c45f7e4e7478\") " Oct 11 07:51:32 crc kubenswrapper[5016]: I1011 07:51:32.888577 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb6630cb-0062-4461-bf51-c45f7e4e7478-trusted-ca-bundle\") pod \"eb6630cb-0062-4461-bf51-c45f7e4e7478\" (UID: \"eb6630cb-0062-4461-bf51-c45f7e4e7478\") " Oct 11 07:51:32 crc kubenswrapper[5016]: I1011 07:51:32.888621 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb6630cb-0062-4461-bf51-c45f7e4e7478-console-serving-cert\") pod \"eb6630cb-0062-4461-bf51-c45f7e4e7478\" (UID: \"eb6630cb-0062-4461-bf51-c45f7e4e7478\") " Oct 11 07:51:32 crc kubenswrapper[5016]: I1011 07:51:32.888663 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm66b\" (UniqueName: \"kubernetes.io/projected/eb6630cb-0062-4461-bf51-c45f7e4e7478-kube-api-access-rm66b\") pod \"eb6630cb-0062-4461-bf51-c45f7e4e7478\" (UID: \"eb6630cb-0062-4461-bf51-c45f7e4e7478\") " Oct 11 07:51:32 crc kubenswrapper[5016]: I1011 07:51:32.889805 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb6630cb-0062-4461-bf51-c45f7e4e7478-console-config" (OuterVolumeSpecName: "console-config") pod "eb6630cb-0062-4461-bf51-c45f7e4e7478" (UID: "eb6630cb-0062-4461-bf51-c45f7e4e7478"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:51:32 crc kubenswrapper[5016]: I1011 07:51:32.889906 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb6630cb-0062-4461-bf51-c45f7e4e7478-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "eb6630cb-0062-4461-bf51-c45f7e4e7478" (UID: "eb6630cb-0062-4461-bf51-c45f7e4e7478"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:51:32 crc kubenswrapper[5016]: I1011 07:51:32.890247 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb6630cb-0062-4461-bf51-c45f7e4e7478-service-ca" (OuterVolumeSpecName: "service-ca") pod "eb6630cb-0062-4461-bf51-c45f7e4e7478" (UID: "eb6630cb-0062-4461-bf51-c45f7e4e7478"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:51:32 crc kubenswrapper[5016]: I1011 07:51:32.890294 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb6630cb-0062-4461-bf51-c45f7e4e7478-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "eb6630cb-0062-4461-bf51-c45f7e4e7478" (UID: "eb6630cb-0062-4461-bf51-c45f7e4e7478"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:51:32 crc kubenswrapper[5016]: I1011 07:51:32.896562 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb6630cb-0062-4461-bf51-c45f7e4e7478-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "eb6630cb-0062-4461-bf51-c45f7e4e7478" (UID: "eb6630cb-0062-4461-bf51-c45f7e4e7478"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:51:32 crc kubenswrapper[5016]: I1011 07:51:32.899573 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb6630cb-0062-4461-bf51-c45f7e4e7478-kube-api-access-rm66b" (OuterVolumeSpecName: "kube-api-access-rm66b") pod "eb6630cb-0062-4461-bf51-c45f7e4e7478" (UID: "eb6630cb-0062-4461-bf51-c45f7e4e7478"). InnerVolumeSpecName "kube-api-access-rm66b". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:51:32 crc kubenswrapper[5016]: I1011 07:51:32.902012 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb6630cb-0062-4461-bf51-c45f7e4e7478-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "eb6630cb-0062-4461-bf51-c45f7e4e7478" (UID: "eb6630cb-0062-4461-bf51-c45f7e4e7478"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:51:32 crc kubenswrapper[5016]: I1011 07:51:32.990295 5016 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/eb6630cb-0062-4461-bf51-c45f7e4e7478-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:51:32 crc kubenswrapper[5016]: I1011 07:51:32.990341 5016 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb6630cb-0062-4461-bf51-c45f7e4e7478-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:51:32 crc kubenswrapper[5016]: I1011 07:51:32.990357 5016 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb6630cb-0062-4461-bf51-c45f7e4e7478-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:51:32 crc kubenswrapper[5016]: I1011 07:51:32.990369 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm66b\" (UniqueName: \"kubernetes.io/projected/eb6630cb-0062-4461-bf51-c45f7e4e7478-kube-api-access-rm66b\") on node \"crc\" DevicePath \"\"" Oct 11 07:51:32 crc kubenswrapper[5016]: I1011 07:51:32.990386 5016 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/eb6630cb-0062-4461-bf51-c45f7e4e7478-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:51:32 crc kubenswrapper[5016]: I1011 07:51:32.990398 5016 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eb6630cb-0062-4461-bf51-c45f7e4e7478-service-ca\") on node \"crc\" DevicePath \"\"" Oct 11 07:51:32 crc kubenswrapper[5016]: I1011 07:51:32.990409 5016 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/eb6630cb-0062-4461-bf51-c45f7e4e7478-console-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:51:33 crc kubenswrapper[5016]: I1011 07:51:33.446259 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-vmvvh_eb6630cb-0062-4461-bf51-c45f7e4e7478/console/0.log" Oct 11 07:51:33 crc kubenswrapper[5016]: I1011 07:51:33.446326 5016 generic.go:334] "Generic (PLEG): container finished" podID="eb6630cb-0062-4461-bf51-c45f7e4e7478" containerID="b20f405e09b0b6d4755fc5f7176d05f3d0ccee3a0df9ef8ab32278ccfeb233cb" exitCode=2 Oct 11 07:51:33 crc kubenswrapper[5016]: I1011 07:51:33.446501 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-vmvvh" Oct 11 07:51:33 crc kubenswrapper[5016]: I1011 07:51:33.446527 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vmvvh" event={"ID":"eb6630cb-0062-4461-bf51-c45f7e4e7478","Type":"ContainerDied","Data":"b20f405e09b0b6d4755fc5f7176d05f3d0ccee3a0df9ef8ab32278ccfeb233cb"} Oct 11 07:51:33 crc kubenswrapper[5016]: I1011 07:51:33.446581 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vmvvh" event={"ID":"eb6630cb-0062-4461-bf51-c45f7e4e7478","Type":"ContainerDied","Data":"0ef7489c7b47009888cca218c3fa3f8877247b33b26070ace8d698dcfc7bbe68"} Oct 11 07:51:33 crc kubenswrapper[5016]: I1011 07:51:33.446607 5016 scope.go:117] "RemoveContainer" containerID="b20f405e09b0b6d4755fc5f7176d05f3d0ccee3a0df9ef8ab32278ccfeb233cb" Oct 11 07:51:33 crc kubenswrapper[5016]: I1011 07:51:33.449471 5016 generic.go:334] "Generic (PLEG): container finished" podID="ef59f675-6d27-45e7-b50c-5ff68f9d41d2" containerID="bca7fa4a6aef3ebb669e992ab216de701ca0639893556407041a84234983beec" exitCode=0 Oct 11 07:51:33 crc kubenswrapper[5016]: I1011 07:51:33.449548 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6" event={"ID":"ef59f675-6d27-45e7-b50c-5ff68f9d41d2","Type":"ContainerDied","Data":"bca7fa4a6aef3ebb669e992ab216de701ca0639893556407041a84234983beec"} Oct 11 07:51:33 crc kubenswrapper[5016]: I1011 07:51:33.487964 5016 scope.go:117] "RemoveContainer" containerID="b20f405e09b0b6d4755fc5f7176d05f3d0ccee3a0df9ef8ab32278ccfeb233cb" Oct 11 07:51:33 crc kubenswrapper[5016]: E1011 07:51:33.488860 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b20f405e09b0b6d4755fc5f7176d05f3d0ccee3a0df9ef8ab32278ccfeb233cb\": container with ID starting with b20f405e09b0b6d4755fc5f7176d05f3d0ccee3a0df9ef8ab32278ccfeb233cb not found: ID does not exist" containerID="b20f405e09b0b6d4755fc5f7176d05f3d0ccee3a0df9ef8ab32278ccfeb233cb" Oct 11 07:51:33 crc kubenswrapper[5016]: I1011 07:51:33.488908 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b20f405e09b0b6d4755fc5f7176d05f3d0ccee3a0df9ef8ab32278ccfeb233cb"} err="failed to get container status \"b20f405e09b0b6d4755fc5f7176d05f3d0ccee3a0df9ef8ab32278ccfeb233cb\": rpc error: code = NotFound desc = could not find container \"b20f405e09b0b6d4755fc5f7176d05f3d0ccee3a0df9ef8ab32278ccfeb233cb\": container with ID starting with b20f405e09b0b6d4755fc5f7176d05f3d0ccee3a0df9ef8ab32278ccfeb233cb not found: ID does not exist" Oct 11 07:51:33 crc kubenswrapper[5016]: I1011 07:51:33.503957 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-vmvvh"] Oct 11 07:51:33 crc kubenswrapper[5016]: I1011 07:51:33.510575 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-vmvvh"] Oct 11 07:51:35 crc kubenswrapper[5016]: I1011 07:51:35.139910 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb6630cb-0062-4461-bf51-c45f7e4e7478" path="/var/lib/kubelet/pods/eb6630cb-0062-4461-bf51-c45f7e4e7478/volumes" Oct 11 07:51:35 crc kubenswrapper[5016]: I1011 07:51:35.468159 5016 generic.go:334] "Generic (PLEG): container finished" podID="ef59f675-6d27-45e7-b50c-5ff68f9d41d2" 
containerID="ddf62ba9ffb9b2089ee0c0c68c7d8cde2b008cd9a6f2303c989cee04f81b6f9f" exitCode=0 Oct 11 07:51:35 crc kubenswrapper[5016]: I1011 07:51:35.468231 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6" event={"ID":"ef59f675-6d27-45e7-b50c-5ff68f9d41d2","Type":"ContainerDied","Data":"ddf62ba9ffb9b2089ee0c0c68c7d8cde2b008cd9a6f2303c989cee04f81b6f9f"} Oct 11 07:51:36 crc kubenswrapper[5016]: I1011 07:51:36.476790 5016 generic.go:334] "Generic (PLEG): container finished" podID="ef59f675-6d27-45e7-b50c-5ff68f9d41d2" containerID="32ac776c54bd1a82fcd193ec525da4923b09d09d9299fa8ca32f8a55713e578c" exitCode=0 Oct 11 07:51:36 crc kubenswrapper[5016]: I1011 07:51:36.477462 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6" event={"ID":"ef59f675-6d27-45e7-b50c-5ff68f9d41d2","Type":"ContainerDied","Data":"32ac776c54bd1a82fcd193ec525da4923b09d09d9299fa8ca32f8a55713e578c"} Oct 11 07:51:37 crc kubenswrapper[5016]: I1011 07:51:37.122571 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 07:51:37 crc kubenswrapper[5016]: I1011 07:51:37.123209 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 07:51:37 crc kubenswrapper[5016]: I1011 07:51:37.824492 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6" Oct 11 07:51:37 crc kubenswrapper[5016]: I1011 07:51:37.953100 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ef59f675-6d27-45e7-b50c-5ff68f9d41d2-util\") pod \"ef59f675-6d27-45e7-b50c-5ff68f9d41d2\" (UID: \"ef59f675-6d27-45e7-b50c-5ff68f9d41d2\") " Oct 11 07:51:37 crc kubenswrapper[5016]: I1011 07:51:37.953434 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs4x2\" (UniqueName: \"kubernetes.io/projected/ef59f675-6d27-45e7-b50c-5ff68f9d41d2-kube-api-access-xs4x2\") pod \"ef59f675-6d27-45e7-b50c-5ff68f9d41d2\" (UID: \"ef59f675-6d27-45e7-b50c-5ff68f9d41d2\") " Oct 11 07:51:37 crc kubenswrapper[5016]: I1011 07:51:37.953588 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ef59f675-6d27-45e7-b50c-5ff68f9d41d2-bundle\") pod \"ef59f675-6d27-45e7-b50c-5ff68f9d41d2\" (UID: \"ef59f675-6d27-45e7-b50c-5ff68f9d41d2\") " Oct 11 07:51:37 crc kubenswrapper[5016]: I1011 07:51:37.954837 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef59f675-6d27-45e7-b50c-5ff68f9d41d2-bundle" (OuterVolumeSpecName: "bundle") pod "ef59f675-6d27-45e7-b50c-5ff68f9d41d2" (UID: "ef59f675-6d27-45e7-b50c-5ff68f9d41d2"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:51:37 crc kubenswrapper[5016]: I1011 07:51:37.962888 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef59f675-6d27-45e7-b50c-5ff68f9d41d2-kube-api-access-xs4x2" (OuterVolumeSpecName: "kube-api-access-xs4x2") pod "ef59f675-6d27-45e7-b50c-5ff68f9d41d2" (UID: "ef59f675-6d27-45e7-b50c-5ff68f9d41d2"). InnerVolumeSpecName "kube-api-access-xs4x2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:51:38 crc kubenswrapper[5016]: I1011 07:51:38.054632 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xs4x2\" (UniqueName: \"kubernetes.io/projected/ef59f675-6d27-45e7-b50c-5ff68f9d41d2-kube-api-access-xs4x2\") on node \"crc\" DevicePath \"\"" Oct 11 07:51:38 crc kubenswrapper[5016]: I1011 07:51:38.054680 5016 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ef59f675-6d27-45e7-b50c-5ff68f9d41d2-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:51:38 crc kubenswrapper[5016]: I1011 07:51:38.105104 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef59f675-6d27-45e7-b50c-5ff68f9d41d2-util" (OuterVolumeSpecName: "util") pod "ef59f675-6d27-45e7-b50c-5ff68f9d41d2" (UID: "ef59f675-6d27-45e7-b50c-5ff68f9d41d2"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:51:38 crc kubenswrapper[5016]: I1011 07:51:38.156343 5016 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ef59f675-6d27-45e7-b50c-5ff68f9d41d2-util\") on node \"crc\" DevicePath \"\"" Oct 11 07:51:38 crc kubenswrapper[5016]: I1011 07:51:38.493621 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6" event={"ID":"ef59f675-6d27-45e7-b50c-5ff68f9d41d2","Type":"ContainerDied","Data":"2b4a130bef6adc99558db477136f3e913b36fa25aff08fabfafba93408f23822"} Oct 11 07:51:38 crc kubenswrapper[5016]: I1011 07:51:38.493714 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b4a130bef6adc99558db477136f3e913b36fa25aff08fabfafba93408f23822" Oct 11 07:51:38 crc kubenswrapper[5016]: I1011 07:51:38.493721 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.349290 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-68db858d44-mvnpr"] Oct 11 07:51:46 crc kubenswrapper[5016]: E1011 07:51:46.350043 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef59f675-6d27-45e7-b50c-5ff68f9d41d2" containerName="util" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.350055 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef59f675-6d27-45e7-b50c-5ff68f9d41d2" containerName="util" Oct 11 07:51:46 crc kubenswrapper[5016]: E1011 07:51:46.350071 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6630cb-0062-4461-bf51-c45f7e4e7478" containerName="console" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.350077 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb6630cb-0062-4461-bf51-c45f7e4e7478" containerName="console" Oct 11 07:51:46 crc kubenswrapper[5016]: E1011 07:51:46.350090 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef59f675-6d27-45e7-b50c-5ff68f9d41d2" containerName="extract" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.350097 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef59f675-6d27-45e7-b50c-5ff68f9d41d2" containerName="extract" Oct 11 07:51:46 crc kubenswrapper[5016]: E1011 07:51:46.350108 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef59f675-6d27-45e7-b50c-5ff68f9d41d2" containerName="pull" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.350114 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef59f675-6d27-45e7-b50c-5ff68f9d41d2" containerName="pull" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.350204 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb6630cb-0062-4461-bf51-c45f7e4e7478" containerName="console" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.350220 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef59f675-6d27-45e7-b50c-5ff68f9d41d2" containerName="extract" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.350599 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-68db858d44-mvnpr" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.355814 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.355847 5016 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.356763 5016 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.363983 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.364550 5016 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-pfp8h" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.369533 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-68db858d44-mvnpr"] Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.452255 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbzzh\" (UniqueName: \"kubernetes.io/projected/1d7ca76f-2ef2-4f7a-a7ba-be970c5145eb-kube-api-access-lbzzh\") pod \"metallb-operator-controller-manager-68db858d44-mvnpr\" (UID: \"1d7ca76f-2ef2-4f7a-a7ba-be970c5145eb\") " pod="metallb-system/metallb-operator-controller-manager-68db858d44-mvnpr" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.452327 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1d7ca76f-2ef2-4f7a-a7ba-be970c5145eb-webhook-cert\") pod \"metallb-operator-controller-manager-68db858d44-mvnpr\" (UID: \"1d7ca76f-2ef2-4f7a-a7ba-be970c5145eb\") " pod="metallb-system/metallb-operator-controller-manager-68db858d44-mvnpr" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.452356 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1d7ca76f-2ef2-4f7a-a7ba-be970c5145eb-apiservice-cert\") pod \"metallb-operator-controller-manager-68db858d44-mvnpr\" (UID: \"1d7ca76f-2ef2-4f7a-a7ba-be970c5145eb\") " pod="metallb-system/metallb-operator-controller-manager-68db858d44-mvnpr" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.553346 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbzzh\" (UniqueName: \"kubernetes.io/projected/1d7ca76f-2ef2-4f7a-a7ba-be970c5145eb-kube-api-access-lbzzh\") pod \"metallb-operator-controller-manager-68db858d44-mvnpr\" (UID: \"1d7ca76f-2ef2-4f7a-a7ba-be970c5145eb\") " pod="metallb-system/metallb-operator-controller-manager-68db858d44-mvnpr" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.553437 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1d7ca76f-2ef2-4f7a-a7ba-be970c5145eb-webhook-cert\") pod \"metallb-operator-controller-manager-68db858d44-mvnpr\" (UID: \"1d7ca76f-2ef2-4f7a-a7ba-be970c5145eb\") " pod="metallb-system/metallb-operator-controller-manager-68db858d44-mvnpr" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.553472 5016 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1d7ca76f-2ef2-4f7a-a7ba-be970c5145eb-apiservice-cert\") pod \"metallb-operator-controller-manager-68db858d44-mvnpr\" (UID: \"1d7ca76f-2ef2-4f7a-a7ba-be970c5145eb\") " pod="metallb-system/metallb-operator-controller-manager-68db858d44-mvnpr" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.561671 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1d7ca76f-2ef2-4f7a-a7ba-be970c5145eb-webhook-cert\") pod \"metallb-operator-controller-manager-68db858d44-mvnpr\" (UID: \"1d7ca76f-2ef2-4f7a-a7ba-be970c5145eb\") " pod="metallb-system/metallb-operator-controller-manager-68db858d44-mvnpr" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.571403 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbzzh\" (UniqueName: \"kubernetes.io/projected/1d7ca76f-2ef2-4f7a-a7ba-be970c5145eb-kube-api-access-lbzzh\") pod \"metallb-operator-controller-manager-68db858d44-mvnpr\" (UID: \"1d7ca76f-2ef2-4f7a-a7ba-be970c5145eb\") " pod="metallb-system/metallb-operator-controller-manager-68db858d44-mvnpr" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.576457 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1d7ca76f-2ef2-4f7a-a7ba-be970c5145eb-apiservice-cert\") pod \"metallb-operator-controller-manager-68db858d44-mvnpr\" (UID: \"1d7ca76f-2ef2-4f7a-a7ba-be970c5145eb\") " pod="metallb-system/metallb-operator-controller-manager-68db858d44-mvnpr" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.603193 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-56f64c4bc6-xrwjc"] Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.604089 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-56f64c4bc6-xrwjc" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.607675 5016 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-8tfn8" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.607741 5016 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.607691 5016 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.626649 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-56f64c4bc6-xrwjc"] Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.671788 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-68db858d44-mvnpr" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.759483 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj2r2\" (UniqueName: \"kubernetes.io/projected/09c8faf5-28e5-4d11-ab20-ccf047a5433b-kube-api-access-xj2r2\") pod \"metallb-operator-webhook-server-56f64c4bc6-xrwjc\" (UID: \"09c8faf5-28e5-4d11-ab20-ccf047a5433b\") " pod="metallb-system/metallb-operator-webhook-server-56f64c4bc6-xrwjc" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.759531 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/09c8faf5-28e5-4d11-ab20-ccf047a5433b-webhook-cert\") pod \"metallb-operator-webhook-server-56f64c4bc6-xrwjc\" (UID: \"09c8faf5-28e5-4d11-ab20-ccf047a5433b\") " pod="metallb-system/metallb-operator-webhook-server-56f64c4bc6-xrwjc" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.759568 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/09c8faf5-28e5-4d11-ab20-ccf047a5433b-apiservice-cert\") pod \"metallb-operator-webhook-server-56f64c4bc6-xrwjc\" (UID: \"09c8faf5-28e5-4d11-ab20-ccf047a5433b\") " pod="metallb-system/metallb-operator-webhook-server-56f64c4bc6-xrwjc" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.862606 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/09c8faf5-28e5-4d11-ab20-ccf047a5433b-webhook-cert\") pod \"metallb-operator-webhook-server-56f64c4bc6-xrwjc\" (UID: \"09c8faf5-28e5-4d11-ab20-ccf047a5433b\") " pod="metallb-system/metallb-operator-webhook-server-56f64c4bc6-xrwjc" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.863223 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/09c8faf5-28e5-4d11-ab20-ccf047a5433b-apiservice-cert\") pod \"metallb-operator-webhook-server-56f64c4bc6-xrwjc\" (UID: \"09c8faf5-28e5-4d11-ab20-ccf047a5433b\") " pod="metallb-system/metallb-operator-webhook-server-56f64c4bc6-xrwjc" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.863311 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj2r2\" (UniqueName: \"kubernetes.io/projected/09c8faf5-28e5-4d11-ab20-ccf047a5433b-kube-api-access-xj2r2\") pod \"metallb-operator-webhook-server-56f64c4bc6-xrwjc\" (UID: \"09c8faf5-28e5-4d11-ab20-ccf047a5433b\") " pod="metallb-system/metallb-operator-webhook-server-56f64c4bc6-xrwjc" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.870600 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/09c8faf5-28e5-4d11-ab20-ccf047a5433b-apiservice-cert\") pod \"metallb-operator-webhook-server-56f64c4bc6-xrwjc\" (UID: \"09c8faf5-28e5-4d11-ab20-ccf047a5433b\") " pod="metallb-system/metallb-operator-webhook-server-56f64c4bc6-xrwjc" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.886726 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/09c8faf5-28e5-4d11-ab20-ccf047a5433b-webhook-cert\") pod \"metallb-operator-webhook-server-56f64c4bc6-xrwjc\" (UID: \"09c8faf5-28e5-4d11-ab20-ccf047a5433b\") " 
pod="metallb-system/metallb-operator-webhook-server-56f64c4bc6-xrwjc" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.891308 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj2r2\" (UniqueName: \"kubernetes.io/projected/09c8faf5-28e5-4d11-ab20-ccf047a5433b-kube-api-access-xj2r2\") pod \"metallb-operator-webhook-server-56f64c4bc6-xrwjc\" (UID: \"09c8faf5-28e5-4d11-ab20-ccf047a5433b\") " pod="metallb-system/metallb-operator-webhook-server-56f64c4bc6-xrwjc" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.928706 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-56f64c4bc6-xrwjc" Oct 11 07:51:46 crc kubenswrapper[5016]: I1011 07:51:46.939495 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-68db858d44-mvnpr"] Oct 11 07:51:47 crc kubenswrapper[5016]: I1011 07:51:47.380735 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-56f64c4bc6-xrwjc"] Oct 11 07:51:47 crc kubenswrapper[5016]: W1011 07:51:47.387764 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09c8faf5_28e5_4d11_ab20_ccf047a5433b.slice/crio-483a4c69a465a542a8287079b4a05cc4a3c72158693e089fa887b1ffef808fc6 WatchSource:0}: Error finding container 483a4c69a465a542a8287079b4a05cc4a3c72158693e089fa887b1ffef808fc6: Status 404 returned error can't find the container with id 483a4c69a465a542a8287079b4a05cc4a3c72158693e089fa887b1ffef808fc6 Oct 11 07:51:47 crc kubenswrapper[5016]: I1011 07:51:47.550089 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-56f64c4bc6-xrwjc" event={"ID":"09c8faf5-28e5-4d11-ab20-ccf047a5433b","Type":"ContainerStarted","Data":"483a4c69a465a542a8287079b4a05cc4a3c72158693e089fa887b1ffef808fc6"} Oct 11 07:51:47 crc kubenswrapper[5016]: I1011 07:51:47.551299 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-68db858d44-mvnpr" event={"ID":"1d7ca76f-2ef2-4f7a-a7ba-be970c5145eb","Type":"ContainerStarted","Data":"a4609930fd35fb2024c214792eb8d01aa077c1becc8614a1198ca8293d378ccf"} Oct 11 07:51:54 crc kubenswrapper[5016]: I1011 07:51:54.604789 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-56f64c4bc6-xrwjc" event={"ID":"09c8faf5-28e5-4d11-ab20-ccf047a5433b","Type":"ContainerStarted","Data":"fcd585aa3d8c55a88588568132f9469127512d8464b25ee1e656b5dce984a15f"} Oct 11 07:51:54 crc kubenswrapper[5016]: I1011 07:51:54.605514 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-56f64c4bc6-xrwjc" Oct 11 07:51:54 crc kubenswrapper[5016]: I1011 07:51:54.606408 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-68db858d44-mvnpr" event={"ID":"1d7ca76f-2ef2-4f7a-a7ba-be970c5145eb","Type":"ContainerStarted","Data":"3b83caff5d009c4db93b97a1805961e70e8f290fe8016f566a81b988b3e4b269"} Oct 11 07:51:54 crc kubenswrapper[5016]: I1011 07:51:54.606681 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-68db858d44-mvnpr" Oct 11 07:51:54 crc kubenswrapper[5016]: I1011 07:51:54.625083 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/metallb-operator-webhook-server-56f64c4bc6-xrwjc" podStartSLOduration=1.820124874 podStartE2EDuration="8.625063568s" podCreationTimestamp="2025-10-11 07:51:46 +0000 UTC" firstStartedPulling="2025-10-11 07:51:47.390384422 +0000 UTC m=+695.290840368" lastFinishedPulling="2025-10-11 07:51:54.195323106 +0000 UTC m=+702.095779062" observedRunningTime="2025-10-11 07:51:54.624204763 +0000 UTC m=+702.524660749" watchObservedRunningTime="2025-10-11 07:51:54.625063568 +0000 UTC m=+702.525519524" Oct 11 07:51:54 crc kubenswrapper[5016]: I1011 07:51:54.652249 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-68db858d44-mvnpr" podStartSLOduration=1.435687431 podStartE2EDuration="8.652226208s" podCreationTimestamp="2025-10-11 07:51:46 +0000 UTC" firstStartedPulling="2025-10-11 07:51:46.963783868 +0000 UTC m=+694.864239824" lastFinishedPulling="2025-10-11 07:51:54.180322655 +0000 UTC m=+702.080778601" observedRunningTime="2025-10-11 07:51:54.647323791 +0000 UTC m=+702.547779747" watchObservedRunningTime="2025-10-11 07:51:54.652226208 +0000 UTC m=+702.552682154" Oct 11 07:52:06 crc kubenswrapper[5016]: I1011 07:52:06.940877 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-56f64c4bc6-xrwjc" Oct 11 07:52:07 crc kubenswrapper[5016]: I1011 07:52:07.122251 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 07:52:07 crc kubenswrapper[5016]: I1011 07:52:07.122317 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 07:52:26 crc kubenswrapper[5016]: I1011 07:52:26.674800 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-68db858d44-mvnpr" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.539842 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-jpstd"] Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.542396 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-jpstd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.547196 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-sftjd"] Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.548222 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-sftjd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.549610 5016 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.549676 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.549850 5016 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-znw8k" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.549974 5016 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.560067 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-sftjd"] Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.614629 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/1746bd38-b574-4552-b9b2-e5d80ba72acf-frr-sockets\") pod \"frr-k8s-jpstd\" (UID: \"1746bd38-b574-4552-b9b2-e5d80ba72acf\") " pod="metallb-system/frr-k8s-jpstd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.614668 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/1746bd38-b574-4552-b9b2-e5d80ba72acf-metrics\") pod \"frr-k8s-jpstd\" (UID: \"1746bd38-b574-4552-b9b2-e5d80ba72acf\") " pod="metallb-system/frr-k8s-jpstd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.614707 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl7xk\" (UniqueName: \"kubernetes.io/projected/1746bd38-b574-4552-b9b2-e5d80ba72acf-kube-api-access-tl7xk\") pod \"frr-k8s-jpstd\" (UID: \"1746bd38-b574-4552-b9b2-e5d80ba72acf\") " pod="metallb-system/frr-k8s-jpstd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.614735 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8be39f07-b456-4122-b5dd-3f02d8123d0c-cert\") pod \"frr-k8s-webhook-server-64bf5d555-sftjd\" (UID: \"8be39f07-b456-4122-b5dd-3f02d8123d0c\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-sftjd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.614761 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1746bd38-b574-4552-b9b2-e5d80ba72acf-metrics-certs\") pod \"frr-k8s-jpstd\" (UID: \"1746bd38-b574-4552-b9b2-e5d80ba72acf\") " pod="metallb-system/frr-k8s-jpstd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.614778 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/1746bd38-b574-4552-b9b2-e5d80ba72acf-frr-conf\") pod \"frr-k8s-jpstd\" (UID: \"1746bd38-b574-4552-b9b2-e5d80ba72acf\") " pod="metallb-system/frr-k8s-jpstd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.614807 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xhch\" (UniqueName: \"kubernetes.io/projected/8be39f07-b456-4122-b5dd-3f02d8123d0c-kube-api-access-7xhch\") pod 
\"frr-k8s-webhook-server-64bf5d555-sftjd\" (UID: \"8be39f07-b456-4122-b5dd-3f02d8123d0c\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-sftjd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.614902 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/1746bd38-b574-4552-b9b2-e5d80ba72acf-frr-startup\") pod \"frr-k8s-jpstd\" (UID: \"1746bd38-b574-4552-b9b2-e5d80ba72acf\") " pod="metallb-system/frr-k8s-jpstd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.615032 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/1746bd38-b574-4552-b9b2-e5d80ba72acf-reloader\") pod \"frr-k8s-jpstd\" (UID: \"1746bd38-b574-4552-b9b2-e5d80ba72acf\") " pod="metallb-system/frr-k8s-jpstd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.644941 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-zjs4n"] Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.646067 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-zjs4n" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.647567 5016 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.647643 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.647929 5016 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.648225 5016 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-bhkgk" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.657906 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-68d546b9d8-phqqb"] Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.662935 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-68d546b9d8-phqqb" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.664982 5016 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.675548 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-phqqb"] Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.716147 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/1746bd38-b574-4552-b9b2-e5d80ba72acf-frr-conf\") pod \"frr-k8s-jpstd\" (UID: \"1746bd38-b574-4552-b9b2-e5d80ba72acf\") " pod="metallb-system/frr-k8s-jpstd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.716234 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/bb9e00d1-3bae-477f-b65b-9822fd6a5999-metallb-excludel2\") pod \"speaker-zjs4n\" (UID: \"bb9e00d1-3bae-477f-b65b-9822fd6a5999\") " pod="metallb-system/speaker-zjs4n" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.716295 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xhch\" (UniqueName: \"kubernetes.io/projected/8be39f07-b456-4122-b5dd-3f02d8123d0c-kube-api-access-7xhch\") pod \"frr-k8s-webhook-server-64bf5d555-sftjd\" (UID: \"8be39f07-b456-4122-b5dd-3f02d8123d0c\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-sftjd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.716323 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfb3d201-9e6b-47ac-ad15-6101ccb2c6dc-cert\") pod \"controller-68d546b9d8-phqqb\" (UID: \"cfb3d201-9e6b-47ac-ad15-6101ccb2c6dc\") " pod="metallb-system/controller-68d546b9d8-phqqb" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.716369 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc9rp\" (UniqueName: \"kubernetes.io/projected/bb9e00d1-3bae-477f-b65b-9822fd6a5999-kube-api-access-pc9rp\") pod \"speaker-zjs4n\" (UID: \"bb9e00d1-3bae-477f-b65b-9822fd6a5999\") " pod="metallb-system/speaker-zjs4n" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.716409 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/1746bd38-b574-4552-b9b2-e5d80ba72acf-frr-startup\") pod \"frr-k8s-jpstd\" (UID: \"1746bd38-b574-4552-b9b2-e5d80ba72acf\") " pod="metallb-system/frr-k8s-jpstd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.716458 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/1746bd38-b574-4552-b9b2-e5d80ba72acf-reloader\") pod \"frr-k8s-jpstd\" (UID: \"1746bd38-b574-4552-b9b2-e5d80ba72acf\") " pod="metallb-system/frr-k8s-jpstd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.716477 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/1746bd38-b574-4552-b9b2-e5d80ba72acf-frr-sockets\") pod \"frr-k8s-jpstd\" (UID: \"1746bd38-b574-4552-b9b2-e5d80ba72acf\") " pod="metallb-system/frr-k8s-jpstd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.716490 5016 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/1746bd38-b574-4552-b9b2-e5d80ba72acf-metrics\") pod \"frr-k8s-jpstd\" (UID: \"1746bd38-b574-4552-b9b2-e5d80ba72acf\") " pod="metallb-system/frr-k8s-jpstd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.716545 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cfb3d201-9e6b-47ac-ad15-6101ccb2c6dc-metrics-certs\") pod \"controller-68d546b9d8-phqqb\" (UID: \"cfb3d201-9e6b-47ac-ad15-6101ccb2c6dc\") " pod="metallb-system/controller-68d546b9d8-phqqb" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.716567 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl7xk\" (UniqueName: \"kubernetes.io/projected/1746bd38-b574-4552-b9b2-e5d80ba72acf-kube-api-access-tl7xk\") pod \"frr-k8s-jpstd\" (UID: \"1746bd38-b574-4552-b9b2-e5d80ba72acf\") " pod="metallb-system/frr-k8s-jpstd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.716616 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/1746bd38-b574-4552-b9b2-e5d80ba72acf-frr-conf\") pod \"frr-k8s-jpstd\" (UID: \"1746bd38-b574-4552-b9b2-e5d80ba72acf\") " pod="metallb-system/frr-k8s-jpstd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.716682 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m2bs\" (UniqueName: \"kubernetes.io/projected/cfb3d201-9e6b-47ac-ad15-6101ccb2c6dc-kube-api-access-6m2bs\") pod \"controller-68d546b9d8-phqqb\" (UID: \"cfb3d201-9e6b-47ac-ad15-6101ccb2c6dc\") " pod="metallb-system/controller-68d546b9d8-phqqb" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.716708 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb9e00d1-3bae-477f-b65b-9822fd6a5999-metrics-certs\") pod \"speaker-zjs4n\" (UID: \"bb9e00d1-3bae-477f-b65b-9822fd6a5999\") " pod="metallb-system/speaker-zjs4n" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.716724 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8be39f07-b456-4122-b5dd-3f02d8123d0c-cert\") pod \"frr-k8s-webhook-server-64bf5d555-sftjd\" (UID: \"8be39f07-b456-4122-b5dd-3f02d8123d0c\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-sftjd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.716786 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1746bd38-b574-4552-b9b2-e5d80ba72acf-metrics-certs\") pod \"frr-k8s-jpstd\" (UID: \"1746bd38-b574-4552-b9b2-e5d80ba72acf\") " pod="metallb-system/frr-k8s-jpstd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.716852 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bb9e00d1-3bae-477f-b65b-9822fd6a5999-memberlist\") pod \"speaker-zjs4n\" (UID: \"bb9e00d1-3bae-477f-b65b-9822fd6a5999\") " pod="metallb-system/speaker-zjs4n" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.716878 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/1746bd38-b574-4552-b9b2-e5d80ba72acf-reloader\") pod 
\"frr-k8s-jpstd\" (UID: \"1746bd38-b574-4552-b9b2-e5d80ba72acf\") " pod="metallb-system/frr-k8s-jpstd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.716956 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/1746bd38-b574-4552-b9b2-e5d80ba72acf-frr-sockets\") pod \"frr-k8s-jpstd\" (UID: \"1746bd38-b574-4552-b9b2-e5d80ba72acf\") " pod="metallb-system/frr-k8s-jpstd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.717120 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/1746bd38-b574-4552-b9b2-e5d80ba72acf-metrics\") pod \"frr-k8s-jpstd\" (UID: \"1746bd38-b574-4552-b9b2-e5d80ba72acf\") " pod="metallb-system/frr-k8s-jpstd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.717459 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/1746bd38-b574-4552-b9b2-e5d80ba72acf-frr-startup\") pod \"frr-k8s-jpstd\" (UID: \"1746bd38-b574-4552-b9b2-e5d80ba72acf\") " pod="metallb-system/frr-k8s-jpstd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.722214 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1746bd38-b574-4552-b9b2-e5d80ba72acf-metrics-certs\") pod \"frr-k8s-jpstd\" (UID: \"1746bd38-b574-4552-b9b2-e5d80ba72acf\") " pod="metallb-system/frr-k8s-jpstd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.725309 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8be39f07-b456-4122-b5dd-3f02d8123d0c-cert\") pod \"frr-k8s-webhook-server-64bf5d555-sftjd\" (UID: \"8be39f07-b456-4122-b5dd-3f02d8123d0c\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-sftjd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.732732 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xhch\" (UniqueName: \"kubernetes.io/projected/8be39f07-b456-4122-b5dd-3f02d8123d0c-kube-api-access-7xhch\") pod \"frr-k8s-webhook-server-64bf5d555-sftjd\" (UID: \"8be39f07-b456-4122-b5dd-3f02d8123d0c\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-sftjd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.738195 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl7xk\" (UniqueName: \"kubernetes.io/projected/1746bd38-b574-4552-b9b2-e5d80ba72acf-kube-api-access-tl7xk\") pod \"frr-k8s-jpstd\" (UID: \"1746bd38-b574-4552-b9b2-e5d80ba72acf\") " pod="metallb-system/frr-k8s-jpstd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.817946 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cfb3d201-9e6b-47ac-ad15-6101ccb2c6dc-metrics-certs\") pod \"controller-68d546b9d8-phqqb\" (UID: \"cfb3d201-9e6b-47ac-ad15-6101ccb2c6dc\") " pod="metallb-system/controller-68d546b9d8-phqqb" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.818012 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m2bs\" (UniqueName: \"kubernetes.io/projected/cfb3d201-9e6b-47ac-ad15-6101ccb2c6dc-kube-api-access-6m2bs\") pod \"controller-68d546b9d8-phqqb\" (UID: \"cfb3d201-9e6b-47ac-ad15-6101ccb2c6dc\") " pod="metallb-system/controller-68d546b9d8-phqqb" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.818042 5016 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb9e00d1-3bae-477f-b65b-9822fd6a5999-metrics-certs\") pod \"speaker-zjs4n\" (UID: \"bb9e00d1-3bae-477f-b65b-9822fd6a5999\") " pod="metallb-system/speaker-zjs4n" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.818079 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bb9e00d1-3bae-477f-b65b-9822fd6a5999-memberlist\") pod \"speaker-zjs4n\" (UID: \"bb9e00d1-3bae-477f-b65b-9822fd6a5999\") " pod="metallb-system/speaker-zjs4n" Oct 11 07:52:27 crc kubenswrapper[5016]: E1011 07:52:27.818095 5016 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.818122 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/bb9e00d1-3bae-477f-b65b-9822fd6a5999-metallb-excludel2\") pod \"speaker-zjs4n\" (UID: \"bb9e00d1-3bae-477f-b65b-9822fd6a5999\") " pod="metallb-system/speaker-zjs4n" Oct 11 07:52:27 crc kubenswrapper[5016]: E1011 07:52:27.818169 5016 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 11 07:52:27 crc kubenswrapper[5016]: E1011 07:52:27.818243 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cfb3d201-9e6b-47ac-ad15-6101ccb2c6dc-metrics-certs podName:cfb3d201-9e6b-47ac-ad15-6101ccb2c6dc nodeName:}" failed. No retries permitted until 2025-10-11 07:52:28.318205307 +0000 UTC m=+736.218661253 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cfb3d201-9e6b-47ac-ad15-6101ccb2c6dc-metrics-certs") pod "controller-68d546b9d8-phqqb" (UID: "cfb3d201-9e6b-47ac-ad15-6101ccb2c6dc") : secret "controller-certs-secret" not found Oct 11 07:52:27 crc kubenswrapper[5016]: E1011 07:52:27.818238 5016 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.818390 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfb3d201-9e6b-47ac-ad15-6101ccb2c6dc-cert\") pod \"controller-68d546b9d8-phqqb\" (UID: \"cfb3d201-9e6b-47ac-ad15-6101ccb2c6dc\") " pod="metallb-system/controller-68d546b9d8-phqqb" Oct 11 07:52:27 crc kubenswrapper[5016]: E1011 07:52:27.818430 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb9e00d1-3bae-477f-b65b-9822fd6a5999-metrics-certs podName:bb9e00d1-3bae-477f-b65b-9822fd6a5999 nodeName:}" failed. No retries permitted until 2025-10-11 07:52:28.318405272 +0000 UTC m=+736.218861218 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bb9e00d1-3bae-477f-b65b-9822fd6a5999-metrics-certs") pod "speaker-zjs4n" (UID: "bb9e00d1-3bae-477f-b65b-9822fd6a5999") : secret "speaker-certs-secret" not found Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.818458 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc9rp\" (UniqueName: \"kubernetes.io/projected/bb9e00d1-3bae-477f-b65b-9822fd6a5999-kube-api-access-pc9rp\") pod \"speaker-zjs4n\" (UID: \"bb9e00d1-3bae-477f-b65b-9822fd6a5999\") " pod="metallb-system/speaker-zjs4n" Oct 11 07:52:27 crc kubenswrapper[5016]: E1011 07:52:27.818639 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb9e00d1-3bae-477f-b65b-9822fd6a5999-memberlist podName:bb9e00d1-3bae-477f-b65b-9822fd6a5999 nodeName:}" failed. No retries permitted until 2025-10-11 07:52:28.318601278 +0000 UTC m=+736.219057214 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/bb9e00d1-3bae-477f-b65b-9822fd6a5999-memberlist") pod "speaker-zjs4n" (UID: "bb9e00d1-3bae-477f-b65b-9822fd6a5999") : secret "metallb-memberlist" not found Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.818945 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/bb9e00d1-3bae-477f-b65b-9822fd6a5999-metallb-excludel2\") pod \"speaker-zjs4n\" (UID: \"bb9e00d1-3bae-477f-b65b-9822fd6a5999\") " pod="metallb-system/speaker-zjs4n" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.820236 5016 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.832134 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfb3d201-9e6b-47ac-ad15-6101ccb2c6dc-cert\") pod \"controller-68d546b9d8-phqqb\" (UID: \"cfb3d201-9e6b-47ac-ad15-6101ccb2c6dc\") " pod="metallb-system/controller-68d546b9d8-phqqb" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.834309 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m2bs\" (UniqueName: \"kubernetes.io/projected/cfb3d201-9e6b-47ac-ad15-6101ccb2c6dc-kube-api-access-6m2bs\") pod \"controller-68d546b9d8-phqqb\" (UID: \"cfb3d201-9e6b-47ac-ad15-6101ccb2c6dc\") " pod="metallb-system/controller-68d546b9d8-phqqb" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.841331 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc9rp\" (UniqueName: \"kubernetes.io/projected/bb9e00d1-3bae-477f-b65b-9822fd6a5999-kube-api-access-pc9rp\") pod \"speaker-zjs4n\" (UID: \"bb9e00d1-3bae-477f-b65b-9822fd6a5999\") " pod="metallb-system/speaker-zjs4n" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.864076 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-jpstd" Oct 11 07:52:27 crc kubenswrapper[5016]: I1011 07:52:27.876704 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-sftjd" Oct 11 07:52:28 crc kubenswrapper[5016]: I1011 07:52:28.268647 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-sftjd"] Oct 11 07:52:28 crc kubenswrapper[5016]: I1011 07:52:28.323299 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cfb3d201-9e6b-47ac-ad15-6101ccb2c6dc-metrics-certs\") pod \"controller-68d546b9d8-phqqb\" (UID: \"cfb3d201-9e6b-47ac-ad15-6101ccb2c6dc\") " pod="metallb-system/controller-68d546b9d8-phqqb" Oct 11 07:52:28 crc kubenswrapper[5016]: I1011 07:52:28.323358 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb9e00d1-3bae-477f-b65b-9822fd6a5999-metrics-certs\") pod \"speaker-zjs4n\" (UID: \"bb9e00d1-3bae-477f-b65b-9822fd6a5999\") " pod="metallb-system/speaker-zjs4n" Oct 11 07:52:28 crc kubenswrapper[5016]: I1011 07:52:28.323389 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bb9e00d1-3bae-477f-b65b-9822fd6a5999-memberlist\") pod \"speaker-zjs4n\" (UID: \"bb9e00d1-3bae-477f-b65b-9822fd6a5999\") " pod="metallb-system/speaker-zjs4n" Oct 11 07:52:28 crc kubenswrapper[5016]: E1011 07:52:28.323545 5016 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 11 07:52:28 crc kubenswrapper[5016]: E1011 07:52:28.323597 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb9e00d1-3bae-477f-b65b-9822fd6a5999-memberlist podName:bb9e00d1-3bae-477f-b65b-9822fd6a5999 nodeName:}" failed. No retries permitted until 2025-10-11 07:52:29.323582551 +0000 UTC m=+737.224038487 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/bb9e00d1-3bae-477f-b65b-9822fd6a5999-memberlist") pod "speaker-zjs4n" (UID: "bb9e00d1-3bae-477f-b65b-9822fd6a5999") : secret "metallb-memberlist" not found Oct 11 07:52:28 crc kubenswrapper[5016]: I1011 07:52:28.328960 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb9e00d1-3bae-477f-b65b-9822fd6a5999-metrics-certs\") pod \"speaker-zjs4n\" (UID: \"bb9e00d1-3bae-477f-b65b-9822fd6a5999\") " pod="metallb-system/speaker-zjs4n" Oct 11 07:52:28 crc kubenswrapper[5016]: I1011 07:52:28.329135 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cfb3d201-9e6b-47ac-ad15-6101ccb2c6dc-metrics-certs\") pod \"controller-68d546b9d8-phqqb\" (UID: \"cfb3d201-9e6b-47ac-ad15-6101ccb2c6dc\") " pod="metallb-system/controller-68d546b9d8-phqqb" Oct 11 07:52:28 crc kubenswrapper[5016]: I1011 07:52:28.584696 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-68d546b9d8-phqqb" Oct 11 07:52:28 crc kubenswrapper[5016]: I1011 07:52:28.814664 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jpstd" event={"ID":"1746bd38-b574-4552-b9b2-e5d80ba72acf","Type":"ContainerStarted","Data":"38ae0fe20f7607ed0e53f8beca5154b13ca03a15dcd4ddc37a802d838ec7d59d"} Oct 11 07:52:28 crc kubenswrapper[5016]: I1011 07:52:28.815531 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-sftjd" event={"ID":"8be39f07-b456-4122-b5dd-3f02d8123d0c","Type":"ContainerStarted","Data":"20a9d543814d87c3acb7632c5a8587e1b20dc5c9286fc453b3dd4bd559d19d35"} Oct 11 07:52:28 crc kubenswrapper[5016]: I1011 07:52:28.815775 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-phqqb"] Oct 11 07:52:28 crc kubenswrapper[5016]: W1011 07:52:28.822466 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfb3d201_9e6b_47ac_ad15_6101ccb2c6dc.slice/crio-db0364e48a243bb73e01fe198c10a0de772c567db0a55762519da93f053a44da WatchSource:0}: Error finding container db0364e48a243bb73e01fe198c10a0de772c567db0a55762519da93f053a44da: Status 404 returned error can't find the container with id db0364e48a243bb73e01fe198c10a0de772c567db0a55762519da93f053a44da Oct 11 07:52:29 crc kubenswrapper[5016]: I1011 07:52:29.336725 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bb9e00d1-3bae-477f-b65b-9822fd6a5999-memberlist\") pod \"speaker-zjs4n\" (UID: \"bb9e00d1-3bae-477f-b65b-9822fd6a5999\") " pod="metallb-system/speaker-zjs4n" Oct 11 07:52:29 crc kubenswrapper[5016]: I1011 07:52:29.359943 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bb9e00d1-3bae-477f-b65b-9822fd6a5999-memberlist\") pod \"speaker-zjs4n\" (UID: \"bb9e00d1-3bae-477f-b65b-9822fd6a5999\") " pod="metallb-system/speaker-zjs4n" Oct 11 07:52:29 crc kubenswrapper[5016]: I1011 07:52:29.466924 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-zjs4n" Oct 11 07:52:29 crc kubenswrapper[5016]: W1011 07:52:29.485546 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb9e00d1_3bae_477f_b65b_9822fd6a5999.slice/crio-a3db1bd71bfb24668f017b426515ac29239c92f2054cb54bc67bb5e1f4ae8261 WatchSource:0}: Error finding container a3db1bd71bfb24668f017b426515ac29239c92f2054cb54bc67bb5e1f4ae8261: Status 404 returned error can't find the container with id a3db1bd71bfb24668f017b426515ac29239c92f2054cb54bc67bb5e1f4ae8261 Oct 11 07:52:29 crc kubenswrapper[5016]: I1011 07:52:29.823980 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-zjs4n" event={"ID":"bb9e00d1-3bae-477f-b65b-9822fd6a5999","Type":"ContainerStarted","Data":"50372e8a32669f6af1637b9fdd4084cb549f695b7c65a4322d375d690715e2f1"} Oct 11 07:52:29 crc kubenswrapper[5016]: I1011 07:52:29.824023 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-zjs4n" event={"ID":"bb9e00d1-3bae-477f-b65b-9822fd6a5999","Type":"ContainerStarted","Data":"a3db1bd71bfb24668f017b426515ac29239c92f2054cb54bc67bb5e1f4ae8261"} Oct 11 07:52:29 crc kubenswrapper[5016]: I1011 07:52:29.827579 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-phqqb" event={"ID":"cfb3d201-9e6b-47ac-ad15-6101ccb2c6dc","Type":"ContainerStarted","Data":"166a698d8100eed79f8848b9f8fc6268d3a7385d760af176d59d99601cb91449"} Oct 11 07:52:29 crc kubenswrapper[5016]: I1011 07:52:29.827643 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-phqqb" event={"ID":"cfb3d201-9e6b-47ac-ad15-6101ccb2c6dc","Type":"ContainerStarted","Data":"ecddb6569468d81ec0dd250237a3fef28b6ab97dd9451a6c60a8813143b9367c"} Oct 11 07:52:29 crc kubenswrapper[5016]: I1011 07:52:29.827682 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-phqqb" event={"ID":"cfb3d201-9e6b-47ac-ad15-6101ccb2c6dc","Type":"ContainerStarted","Data":"db0364e48a243bb73e01fe198c10a0de772c567db0a55762519da93f053a44da"} Oct 11 07:52:29 crc kubenswrapper[5016]: I1011 07:52:29.827733 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-68d546b9d8-phqqb" Oct 11 07:52:30 crc kubenswrapper[5016]: I1011 07:52:30.846372 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-zjs4n" event={"ID":"bb9e00d1-3bae-477f-b65b-9822fd6a5999","Type":"ContainerStarted","Data":"16eea609a7994e6a88908d70fcc8490e196deb8990512fa7c1010407b4f50801"} Oct 11 07:52:30 crc kubenswrapper[5016]: I1011 07:52:30.859382 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-68d546b9d8-phqqb" podStartSLOduration=3.859364237 podStartE2EDuration="3.859364237s" podCreationTimestamp="2025-10-11 07:52:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:52:29.841964981 +0000 UTC m=+737.742420917" watchObservedRunningTime="2025-10-11 07:52:30.859364237 +0000 UTC m=+738.759820183" Oct 11 07:52:30 crc kubenswrapper[5016]: I1011 07:52:30.859886 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-zjs4n" podStartSLOduration=3.859882491 podStartE2EDuration="3.859882491s" podCreationTimestamp="2025-10-11 07:52:27 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:52:30.859514681 +0000 UTC m=+738.759970627" watchObservedRunningTime="2025-10-11 07:52:30.859882491 +0000 UTC m=+738.760338437" Oct 11 07:52:31 crc kubenswrapper[5016]: I1011 07:52:31.850907 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-zjs4n" Oct 11 07:52:36 crc kubenswrapper[5016]: I1011 07:52:36.896836 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jpstd" event={"ID":"1746bd38-b574-4552-b9b2-e5d80ba72acf","Type":"ContainerDied","Data":"37410e25c6f0a9f0fc27df2eb007bdb6d544874d85c8e46c6bf7a1c08b9e46bf"} Oct 11 07:52:36 crc kubenswrapper[5016]: I1011 07:52:36.896641 5016 generic.go:334] "Generic (PLEG): container finished" podID="1746bd38-b574-4552-b9b2-e5d80ba72acf" containerID="37410e25c6f0a9f0fc27df2eb007bdb6d544874d85c8e46c6bf7a1c08b9e46bf" exitCode=0 Oct 11 07:52:36 crc kubenswrapper[5016]: I1011 07:52:36.899764 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-sftjd" event={"ID":"8be39f07-b456-4122-b5dd-3f02d8123d0c","Type":"ContainerStarted","Data":"3f726f681f1cb4832302d3b73ce461446d7f876283c362e4e8224c0232cc55a5"} Oct 11 07:52:36 crc kubenswrapper[5016]: I1011 07:52:36.899973 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-sftjd" Oct 11 07:52:36 crc kubenswrapper[5016]: I1011 07:52:36.987214 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-sftjd" podStartSLOduration=2.4079920599999998 podStartE2EDuration="9.987195449s" podCreationTimestamp="2025-10-11 07:52:27 +0000 UTC" firstStartedPulling="2025-10-11 07:52:28.273199037 +0000 UTC m=+736.173654983" lastFinishedPulling="2025-10-11 07:52:35.852402426 +0000 UTC m=+743.752858372" observedRunningTime="2025-10-11 07:52:36.983973681 +0000 UTC m=+744.884429617" watchObservedRunningTime="2025-10-11 07:52:36.987195449 +0000 UTC m=+744.887651395" Oct 11 07:52:37 crc kubenswrapper[5016]: I1011 07:52:37.122052 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 07:52:37 crc kubenswrapper[5016]: I1011 07:52:37.122110 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 07:52:37 crc kubenswrapper[5016]: I1011 07:52:37.122151 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 07:52:37 crc kubenswrapper[5016]: I1011 07:52:37.122714 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9e47e6adcac812a126122f7057fc0b9abd8d456e8565df449156e69a78cd7a4b"} pod="openshift-machine-config-operator/machine-config-daemon-49bvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 11 07:52:37 crc kubenswrapper[5016]: 
I1011 07:52:37.122771 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" containerID="cri-o://9e47e6adcac812a126122f7057fc0b9abd8d456e8565df449156e69a78cd7a4b" gracePeriod=600 Oct 11 07:52:37 crc kubenswrapper[5016]: I1011 07:52:37.921642 5016 generic.go:334] "Generic (PLEG): container finished" podID="1746bd38-b574-4552-b9b2-e5d80ba72acf" containerID="4f655d0035178145d8a73b8d7cd07c766888704c25b07806d05eb8081129e470" exitCode=0 Oct 11 07:52:37 crc kubenswrapper[5016]: I1011 07:52:37.921792 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jpstd" event={"ID":"1746bd38-b574-4552-b9b2-e5d80ba72acf","Type":"ContainerDied","Data":"4f655d0035178145d8a73b8d7cd07c766888704c25b07806d05eb8081129e470"} Oct 11 07:52:37 crc kubenswrapper[5016]: I1011 07:52:37.933209 5016 generic.go:334] "Generic (PLEG): container finished" podID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerID="9e47e6adcac812a126122f7057fc0b9abd8d456e8565df449156e69a78cd7a4b" exitCode=0 Oct 11 07:52:37 crc kubenswrapper[5016]: I1011 07:52:37.934111 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerDied","Data":"9e47e6adcac812a126122f7057fc0b9abd8d456e8565df449156e69a78cd7a4b"} Oct 11 07:52:37 crc kubenswrapper[5016]: I1011 07:52:37.934168 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"265caf0315ed7d9cc490abb97692bb40c37bc7e9af0dd0d10a990157231f7f84"} Oct 11 07:52:37 crc kubenswrapper[5016]: I1011 07:52:37.934205 5016 scope.go:117] "RemoveContainer" containerID="0bb6e95efc1267f312ef77f8e915572b9364b3b7288f25fafcc7853b98141761" Oct 11 07:52:38 crc kubenswrapper[5016]: I1011 07:52:38.592547 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-68d546b9d8-phqqb" Oct 11 07:52:38 crc kubenswrapper[5016]: I1011 07:52:38.942347 5016 generic.go:334] "Generic (PLEG): container finished" podID="1746bd38-b574-4552-b9b2-e5d80ba72acf" containerID="6991887bc3828a3a1e22b6f13bd1ae9b721f78f79907c9bb2292d53bef2f2da9" exitCode=0 Oct 11 07:52:38 crc kubenswrapper[5016]: I1011 07:52:38.942377 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jpstd" event={"ID":"1746bd38-b574-4552-b9b2-e5d80ba72acf","Type":"ContainerDied","Data":"6991887bc3828a3a1e22b6f13bd1ae9b721f78f79907c9bb2292d53bef2f2da9"} Oct 11 07:52:39 crc kubenswrapper[5016]: I1011 07:52:39.472761 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-zjs4n" Oct 11 07:52:39 crc kubenswrapper[5016]: I1011 07:52:39.962025 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jpstd" event={"ID":"1746bd38-b574-4552-b9b2-e5d80ba72acf","Type":"ContainerStarted","Data":"332d23b9e3238bcd6e60cddc102b0152bbef04da0d295c008a038bd9f6578144"} Oct 11 07:52:39 crc kubenswrapper[5016]: I1011 07:52:39.962397 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jpstd" event={"ID":"1746bd38-b574-4552-b9b2-e5d80ba72acf","Type":"ContainerStarted","Data":"e6e619b751906dc63239faf9999ae0a36f60bde8757a9c504e87818715d30382"} Oct 11 
07:52:39 crc kubenswrapper[5016]: I1011 07:52:39.962414 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jpstd" event={"ID":"1746bd38-b574-4552-b9b2-e5d80ba72acf","Type":"ContainerStarted","Data":"a42e50f80eb56bb0acc62889a939959cd03476ac9767553f656b1f5156cb5a10"} Oct 11 07:52:39 crc kubenswrapper[5016]: I1011 07:52:39.962426 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jpstd" event={"ID":"1746bd38-b574-4552-b9b2-e5d80ba72acf","Type":"ContainerStarted","Data":"544dd413a084ab78bde89fcd6d7c38b29d0de0804a4e465b6e0f5b79430e7fee"} Oct 11 07:52:39 crc kubenswrapper[5016]: I1011 07:52:39.962435 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jpstd" event={"ID":"1746bd38-b574-4552-b9b2-e5d80ba72acf","Type":"ContainerStarted","Data":"904f0c7df5f665dce31891a62535bfb78c3a51279f7d39f33c8802628590d9a6"} Oct 11 07:52:40 crc kubenswrapper[5016]: I1011 07:52:40.980294 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jpstd" event={"ID":"1746bd38-b574-4552-b9b2-e5d80ba72acf","Type":"ContainerStarted","Data":"256822273bdfb3ec8c6f30a725fceb782aa94ddd3329aecfa1a2443dd9b2fac3"} Oct 11 07:52:40 crc kubenswrapper[5016]: I1011 07:52:40.980486 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-jpstd" Oct 11 07:52:42 crc kubenswrapper[5016]: I1011 07:52:42.240369 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-jpstd" podStartSLOduration=7.427389081 podStartE2EDuration="15.240351124s" podCreationTimestamp="2025-10-11 07:52:27 +0000 UTC" firstStartedPulling="2025-10-11 07:52:28.0068055 +0000 UTC m=+735.907261446" lastFinishedPulling="2025-10-11 07:52:35.819767543 +0000 UTC m=+743.720223489" observedRunningTime="2025-10-11 07:52:41.016559644 +0000 UTC m=+748.917015600" watchObservedRunningTime="2025-10-11 07:52:42.240351124 +0000 UTC m=+750.140807070" Oct 11 07:52:42 crc kubenswrapper[5016]: I1011 07:52:42.244575 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-vrbxm"] Oct 11 07:52:42 crc kubenswrapper[5016]: I1011 07:52:42.245973 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vrbxm" Oct 11 07:52:42 crc kubenswrapper[5016]: I1011 07:52:42.255323 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Oct 11 07:52:42 crc kubenswrapper[5016]: I1011 07:52:42.255693 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-jxghm" Oct 11 07:52:42 crc kubenswrapper[5016]: I1011 07:52:42.255933 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Oct 11 07:52:42 crc kubenswrapper[5016]: I1011 07:52:42.269055 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vrbxm"] Oct 11 07:52:42 crc kubenswrapper[5016]: I1011 07:52:42.376488 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9xfv\" (UniqueName: \"kubernetes.io/projected/30c2b927-dca4-4742-9465-ba89109d98c8-kube-api-access-h9xfv\") pod \"openstack-operator-index-vrbxm\" (UID: \"30c2b927-dca4-4742-9465-ba89109d98c8\") " pod="openstack-operators/openstack-operator-index-vrbxm" Oct 11 07:52:42 crc kubenswrapper[5016]: I1011 07:52:42.477889 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9xfv\" (UniqueName: \"kubernetes.io/projected/30c2b927-dca4-4742-9465-ba89109d98c8-kube-api-access-h9xfv\") pod \"openstack-operator-index-vrbxm\" (UID: \"30c2b927-dca4-4742-9465-ba89109d98c8\") " pod="openstack-operators/openstack-operator-index-vrbxm" Oct 11 07:52:42 crc kubenswrapper[5016]: I1011 07:52:42.495311 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9xfv\" (UniqueName: \"kubernetes.io/projected/30c2b927-dca4-4742-9465-ba89109d98c8-kube-api-access-h9xfv\") pod \"openstack-operator-index-vrbxm\" (UID: \"30c2b927-dca4-4742-9465-ba89109d98c8\") " pod="openstack-operators/openstack-operator-index-vrbxm" Oct 11 07:52:42 crc kubenswrapper[5016]: I1011 07:52:42.568874 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vrbxm" Oct 11 07:52:42 crc kubenswrapper[5016]: I1011 07:52:42.777178 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vrbxm"] Oct 11 07:52:42 crc kubenswrapper[5016]: I1011 07:52:42.864187 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-jpstd" Oct 11 07:52:42 crc kubenswrapper[5016]: I1011 07:52:42.919810 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-jpstd" Oct 11 07:52:42 crc kubenswrapper[5016]: I1011 07:52:42.991218 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vrbxm" event={"ID":"30c2b927-dca4-4742-9465-ba89109d98c8","Type":"ContainerStarted","Data":"2873ec607895af9e0b83e5cd91f4902e8d89aa1b2ae3f730071918f4df4f70d8"} Oct 11 07:52:44 crc kubenswrapper[5016]: I1011 07:52:44.000297 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vrbxm" event={"ID":"30c2b927-dca4-4742-9465-ba89109d98c8","Type":"ContainerStarted","Data":"d9178d907af84f57b8ca3ee41a56da9a1a8741bbd01015687a26b8b766458230"} Oct 11 07:52:44 crc kubenswrapper[5016]: I1011 07:52:44.026311 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-vrbxm" podStartSLOduration=1.273098876 podStartE2EDuration="2.026258143s" podCreationTimestamp="2025-10-11 07:52:42 +0000 UTC" firstStartedPulling="2025-10-11 07:52:42.795872635 +0000 UTC m=+750.696328581" lastFinishedPulling="2025-10-11 07:52:43.549031902 +0000 UTC m=+751.449487848" observedRunningTime="2025-10-11 07:52:44.01690319 +0000 UTC m=+751.917359146" watchObservedRunningTime="2025-10-11 07:52:44.026258143 +0000 UTC m=+751.926714139" Oct 11 07:52:45 crc kubenswrapper[5016]: I1011 07:52:45.626443 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vrbxm"] Oct 11 07:52:46 crc kubenswrapper[5016]: I1011 07:52:46.012107 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-vrbxm" podUID="30c2b927-dca4-4742-9465-ba89109d98c8" containerName="registry-server" containerID="cri-o://d9178d907af84f57b8ca3ee41a56da9a1a8741bbd01015687a26b8b766458230" gracePeriod=2 Oct 11 07:52:46 crc kubenswrapper[5016]: I1011 07:52:46.235313 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-l67lz"] Oct 11 07:52:46 crc kubenswrapper[5016]: I1011 07:52:46.236218 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-l67lz" Oct 11 07:52:46 crc kubenswrapper[5016]: I1011 07:52:46.241128 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55jxt\" (UniqueName: \"kubernetes.io/projected/509e4b22-b583-43ca-9c36-bd2ce2b7e753-kube-api-access-55jxt\") pod \"openstack-operator-index-l67lz\" (UID: \"509e4b22-b583-43ca-9c36-bd2ce2b7e753\") " pod="openstack-operators/openstack-operator-index-l67lz" Oct 11 07:52:46 crc kubenswrapper[5016]: I1011 07:52:46.241647 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-l67lz"] Oct 11 07:52:46 crc kubenswrapper[5016]: I1011 07:52:46.344177 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55jxt\" (UniqueName: \"kubernetes.io/projected/509e4b22-b583-43ca-9c36-bd2ce2b7e753-kube-api-access-55jxt\") pod \"openstack-operator-index-l67lz\" (UID: \"509e4b22-b583-43ca-9c36-bd2ce2b7e753\") " pod="openstack-operators/openstack-operator-index-l67lz" Oct 11 07:52:46 crc kubenswrapper[5016]: I1011 07:52:46.380620 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55jxt\" (UniqueName: \"kubernetes.io/projected/509e4b22-b583-43ca-9c36-bd2ce2b7e753-kube-api-access-55jxt\") pod \"openstack-operator-index-l67lz\" (UID: \"509e4b22-b583-43ca-9c36-bd2ce2b7e753\") " pod="openstack-operators/openstack-operator-index-l67lz" Oct 11 07:52:46 crc kubenswrapper[5016]: I1011 07:52:46.406824 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vrbxm" Oct 11 07:52:46 crc kubenswrapper[5016]: I1011 07:52:46.546457 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9xfv\" (UniqueName: \"kubernetes.io/projected/30c2b927-dca4-4742-9465-ba89109d98c8-kube-api-access-h9xfv\") pod \"30c2b927-dca4-4742-9465-ba89109d98c8\" (UID: \"30c2b927-dca4-4742-9465-ba89109d98c8\") " Oct 11 07:52:46 crc kubenswrapper[5016]: I1011 07:52:46.550470 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30c2b927-dca4-4742-9465-ba89109d98c8-kube-api-access-h9xfv" (OuterVolumeSpecName: "kube-api-access-h9xfv") pod "30c2b927-dca4-4742-9465-ba89109d98c8" (UID: "30c2b927-dca4-4742-9465-ba89109d98c8"). InnerVolumeSpecName "kube-api-access-h9xfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:52:46 crc kubenswrapper[5016]: I1011 07:52:46.573816 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-l67lz" Oct 11 07:52:46 crc kubenswrapper[5016]: I1011 07:52:46.604400 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gwp6t"] Oct 11 07:52:46 crc kubenswrapper[5016]: I1011 07:52:46.604613 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-gwp6t" podUID="8424bee6-8168-4c9f-b70e-5523e1990bcd" containerName="controller-manager" containerID="cri-o://27e0d60c457ad8d031f9a08665cb4e549230e2b02c660c9b3c7c23449de35baa" gracePeriod=30 Oct 11 07:52:46 crc kubenswrapper[5016]: I1011 07:52:46.648131 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9xfv\" (UniqueName: \"kubernetes.io/projected/30c2b927-dca4-4742-9465-ba89109d98c8-kube-api-access-h9xfv\") on node \"crc\" DevicePath \"\"" Oct 11 07:52:46 crc kubenswrapper[5016]: I1011 07:52:46.712716 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb"] Oct 11 07:52:46 crc kubenswrapper[5016]: I1011 07:52:46.713211 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb" podUID="699f1f0c-fc1d-4599-97a8-a135238977b4" containerName="route-controller-manager" containerID="cri-o://5912655979a4573f113da293667810b572866ece56e36aa6294ddcbe7c3435da" gracePeriod=30 Oct 11 07:52:46 crc kubenswrapper[5016]: I1011 07:52:46.851634 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-l67lz"] Oct 11 07:52:46 crc kubenswrapper[5016]: W1011 07:52:46.859391 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod509e4b22_b583_43ca_9c36_bd2ce2b7e753.slice/crio-556e8ec0846d16a721b880058556246f6b29cae7165e0a605426fbaa6d8061ed WatchSource:0}: Error finding container 556e8ec0846d16a721b880058556246f6b29cae7165e0a605426fbaa6d8061ed: Status 404 returned error can't find the container with id 556e8ec0846d16a721b880058556246f6b29cae7165e0a605426fbaa6d8061ed Oct 11 07:52:46 crc kubenswrapper[5016]: I1011 07:52:46.985695 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gwp6t" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.018385 5016 generic.go:334] "Generic (PLEG): container finished" podID="699f1f0c-fc1d-4599-97a8-a135238977b4" containerID="5912655979a4573f113da293667810b572866ece56e36aa6294ddcbe7c3435da" exitCode=0 Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.018449 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb" event={"ID":"699f1f0c-fc1d-4599-97a8-a135238977b4","Type":"ContainerDied","Data":"5912655979a4573f113da293667810b572866ece56e36aa6294ddcbe7c3435da"} Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.020141 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-l67lz" event={"ID":"509e4b22-b583-43ca-9c36-bd2ce2b7e753","Type":"ContainerStarted","Data":"556e8ec0846d16a721b880058556246f6b29cae7165e0a605426fbaa6d8061ed"} Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.022117 5016 generic.go:334] "Generic (PLEG): container finished" podID="8424bee6-8168-4c9f-b70e-5523e1990bcd" containerID="27e0d60c457ad8d031f9a08665cb4e549230e2b02c660c9b3c7c23449de35baa" exitCode=0 Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.022190 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gwp6t" event={"ID":"8424bee6-8168-4c9f-b70e-5523e1990bcd","Type":"ContainerDied","Data":"27e0d60c457ad8d031f9a08665cb4e549230e2b02c660c9b3c7c23449de35baa"} Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.022214 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-gwp6t" event={"ID":"8424bee6-8168-4c9f-b70e-5523e1990bcd","Type":"ContainerDied","Data":"109b53f176ad5b7297f23394f60c2748a53cbd30abb9f5368d75e90583b86294"} Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.022231 5016 scope.go:117] "RemoveContainer" containerID="27e0d60c457ad8d031f9a08665cb4e549230e2b02c660c9b3c7c23449de35baa" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.022735 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-gwp6t" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.024704 5016 generic.go:334] "Generic (PLEG): container finished" podID="30c2b927-dca4-4742-9465-ba89109d98c8" containerID="d9178d907af84f57b8ca3ee41a56da9a1a8741bbd01015687a26b8b766458230" exitCode=0 Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.024756 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vrbxm" event={"ID":"30c2b927-dca4-4742-9465-ba89109d98c8","Type":"ContainerDied","Data":"d9178d907af84f57b8ca3ee41a56da9a1a8741bbd01015687a26b8b766458230"} Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.024783 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vrbxm" event={"ID":"30c2b927-dca4-4742-9465-ba89109d98c8","Type":"ContainerDied","Data":"2873ec607895af9e0b83e5cd91f4902e8d89aa1b2ae3f730071918f4df4f70d8"} Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.024829 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vrbxm" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.049541 5016 scope.go:117] "RemoveContainer" containerID="27e0d60c457ad8d031f9a08665cb4e549230e2b02c660c9b3c7c23449de35baa" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.050621 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb" Oct 11 07:52:47 crc kubenswrapper[5016]: E1011 07:52:47.051604 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27e0d60c457ad8d031f9a08665cb4e549230e2b02c660c9b3c7c23449de35baa\": container with ID starting with 27e0d60c457ad8d031f9a08665cb4e549230e2b02c660c9b3c7c23449de35baa not found: ID does not exist" containerID="27e0d60c457ad8d031f9a08665cb4e549230e2b02c660c9b3c7c23449de35baa" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.051662 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27e0d60c457ad8d031f9a08665cb4e549230e2b02c660c9b3c7c23449de35baa"} err="failed to get container status \"27e0d60c457ad8d031f9a08665cb4e549230e2b02c660c9b3c7c23449de35baa\": rpc error: code = NotFound desc = could not find container \"27e0d60c457ad8d031f9a08665cb4e549230e2b02c660c9b3c7c23449de35baa\": container with ID starting with 27e0d60c457ad8d031f9a08665cb4e549230e2b02c660c9b3c7c23449de35baa not found: ID does not exist" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.051689 5016 scope.go:117] "RemoveContainer" containerID="d9178d907af84f57b8ca3ee41a56da9a1a8741bbd01015687a26b8b766458230" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.055338 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8424bee6-8168-4c9f-b70e-5523e1990bcd-proxy-ca-bundles\") pod \"8424bee6-8168-4c9f-b70e-5523e1990bcd\" (UID: \"8424bee6-8168-4c9f-b70e-5523e1990bcd\") " Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.055378 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/699f1f0c-fc1d-4599-97a8-a135238977b4-config\") pod \"699f1f0c-fc1d-4599-97a8-a135238977b4\" (UID: \"699f1f0c-fc1d-4599-97a8-a135238977b4\") " Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.056430 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/699f1f0c-fc1d-4599-97a8-a135238977b4-config" (OuterVolumeSpecName: "config") pod "699f1f0c-fc1d-4599-97a8-a135238977b4" (UID: "699f1f0c-fc1d-4599-97a8-a135238977b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.059195 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8424bee6-8168-4c9f-b70e-5523e1990bcd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8424bee6-8168-4c9f-b70e-5523e1990bcd" (UID: "8424bee6-8168-4c9f-b70e-5523e1990bcd"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.065928 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vrbxm"] Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.075966 5016 scope.go:117] "RemoveContainer" containerID="d9178d907af84f57b8ca3ee41a56da9a1a8741bbd01015687a26b8b766458230" Oct 11 07:52:47 crc kubenswrapper[5016]: E1011 07:52:47.077742 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9178d907af84f57b8ca3ee41a56da9a1a8741bbd01015687a26b8b766458230\": container with ID starting with d9178d907af84f57b8ca3ee41a56da9a1a8741bbd01015687a26b8b766458230 not found: ID does not exist" containerID="d9178d907af84f57b8ca3ee41a56da9a1a8741bbd01015687a26b8b766458230" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.077787 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9178d907af84f57b8ca3ee41a56da9a1a8741bbd01015687a26b8b766458230"} err="failed to get container status \"d9178d907af84f57b8ca3ee41a56da9a1a8741bbd01015687a26b8b766458230\": rpc error: code = NotFound desc = could not find container \"d9178d907af84f57b8ca3ee41a56da9a1a8741bbd01015687a26b8b766458230\": container with ID starting with d9178d907af84f57b8ca3ee41a56da9a1a8741bbd01015687a26b8b766458230 not found: ID does not exist" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.085754 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-vrbxm"] Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.140129 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30c2b927-dca4-4742-9465-ba89109d98c8" path="/var/lib/kubelet/pods/30c2b927-dca4-4742-9465-ba89109d98c8/volumes" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.156301 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5l9k\" (UniqueName: \"kubernetes.io/projected/699f1f0c-fc1d-4599-97a8-a135238977b4-kube-api-access-l5l9k\") pod \"699f1f0c-fc1d-4599-97a8-a135238977b4\" (UID: \"699f1f0c-fc1d-4599-97a8-a135238977b4\") " Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.156349 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8424bee6-8168-4c9f-b70e-5523e1990bcd-config\") pod \"8424bee6-8168-4c9f-b70e-5523e1990bcd\" (UID: \"8424bee6-8168-4c9f-b70e-5523e1990bcd\") " Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.156383 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzbdj\" (UniqueName: \"kubernetes.io/projected/8424bee6-8168-4c9f-b70e-5523e1990bcd-kube-api-access-lzbdj\") pod \"8424bee6-8168-4c9f-b70e-5523e1990bcd\" (UID: \"8424bee6-8168-4c9f-b70e-5523e1990bcd\") " Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.156557 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/699f1f0c-fc1d-4599-97a8-a135238977b4-serving-cert\") pod \"699f1f0c-fc1d-4599-97a8-a135238977b4\" (UID: \"699f1f0c-fc1d-4599-97a8-a135238977b4\") " Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.156614 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/8424bee6-8168-4c9f-b70e-5523e1990bcd-client-ca\") pod \"8424bee6-8168-4c9f-b70e-5523e1990bcd\" (UID: \"8424bee6-8168-4c9f-b70e-5523e1990bcd\") " Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.156702 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/699f1f0c-fc1d-4599-97a8-a135238977b4-client-ca\") pod \"699f1f0c-fc1d-4599-97a8-a135238977b4\" (UID: \"699f1f0c-fc1d-4599-97a8-a135238977b4\") " Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.156741 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8424bee6-8168-4c9f-b70e-5523e1990bcd-serving-cert\") pod \"8424bee6-8168-4c9f-b70e-5523e1990bcd\" (UID: \"8424bee6-8168-4c9f-b70e-5523e1990bcd\") " Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.156999 5016 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8424bee6-8168-4c9f-b70e-5523e1990bcd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.157021 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/699f1f0c-fc1d-4599-97a8-a135238977b4-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.157579 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8424bee6-8168-4c9f-b70e-5523e1990bcd-client-ca" (OuterVolumeSpecName: "client-ca") pod "8424bee6-8168-4c9f-b70e-5523e1990bcd" (UID: "8424bee6-8168-4c9f-b70e-5523e1990bcd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.157874 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/699f1f0c-fc1d-4599-97a8-a135238977b4-client-ca" (OuterVolumeSpecName: "client-ca") pod "699f1f0c-fc1d-4599-97a8-a135238977b4" (UID: "699f1f0c-fc1d-4599-97a8-a135238977b4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.157992 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8424bee6-8168-4c9f-b70e-5523e1990bcd-config" (OuterVolumeSpecName: "config") pod "8424bee6-8168-4c9f-b70e-5523e1990bcd" (UID: "8424bee6-8168-4c9f-b70e-5523e1990bcd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.162731 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/699f1f0c-fc1d-4599-97a8-a135238977b4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "699f1f0c-fc1d-4599-97a8-a135238977b4" (UID: "699f1f0c-fc1d-4599-97a8-a135238977b4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.163216 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8424bee6-8168-4c9f-b70e-5523e1990bcd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8424bee6-8168-4c9f-b70e-5523e1990bcd" (UID: "8424bee6-8168-4c9f-b70e-5523e1990bcd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.163885 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/699f1f0c-fc1d-4599-97a8-a135238977b4-kube-api-access-l5l9k" (OuterVolumeSpecName: "kube-api-access-l5l9k") pod "699f1f0c-fc1d-4599-97a8-a135238977b4" (UID: "699f1f0c-fc1d-4599-97a8-a135238977b4"). InnerVolumeSpecName "kube-api-access-l5l9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.166210 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8424bee6-8168-4c9f-b70e-5523e1990bcd-kube-api-access-lzbdj" (OuterVolumeSpecName: "kube-api-access-lzbdj") pod "8424bee6-8168-4c9f-b70e-5523e1990bcd" (UID: "8424bee6-8168-4c9f-b70e-5523e1990bcd"). InnerVolumeSpecName "kube-api-access-lzbdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.257947 5016 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/699f1f0c-fc1d-4599-97a8-a135238977b4-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.258196 5016 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8424bee6-8168-4c9f-b70e-5523e1990bcd-client-ca\") on node \"crc\" DevicePath \"\"" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.258253 5016 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/699f1f0c-fc1d-4599-97a8-a135238977b4-client-ca\") on node \"crc\" DevicePath \"\"" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.258305 5016 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8424bee6-8168-4c9f-b70e-5523e1990bcd-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.258381 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5l9k\" (UniqueName: \"kubernetes.io/projected/699f1f0c-fc1d-4599-97a8-a135238977b4-kube-api-access-l5l9k\") on node \"crc\" DevicePath \"\"" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.258498 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8424bee6-8168-4c9f-b70e-5523e1990bcd-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.258558 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzbdj\" (UniqueName: \"kubernetes.io/projected/8424bee6-8168-4c9f-b70e-5523e1990bcd-kube-api-access-lzbdj\") on node \"crc\" DevicePath \"\"" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.349581 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gwp6t"] Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.354804 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-gwp6t"] Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.813911 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77fd5c6857-jpwpl"] Oct 11 07:52:47 crc kubenswrapper[5016]: E1011 07:52:47.814186 5016 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8424bee6-8168-4c9f-b70e-5523e1990bcd" containerName="controller-manager" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.814204 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="8424bee6-8168-4c9f-b70e-5523e1990bcd" containerName="controller-manager" Oct 11 07:52:47 crc kubenswrapper[5016]: E1011 07:52:47.814227 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30c2b927-dca4-4742-9465-ba89109d98c8" containerName="registry-server" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.814236 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="30c2b927-dca4-4742-9465-ba89109d98c8" containerName="registry-server" Oct 11 07:52:47 crc kubenswrapper[5016]: E1011 07:52:47.814252 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="699f1f0c-fc1d-4599-97a8-a135238977b4" containerName="route-controller-manager" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.814265 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="699f1f0c-fc1d-4599-97a8-a135238977b4" containerName="route-controller-manager" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.814413 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="30c2b927-dca4-4742-9465-ba89109d98c8" containerName="registry-server" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.814429 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="8424bee6-8168-4c9f-b70e-5523e1990bcd" containerName="controller-manager" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.814459 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="699f1f0c-fc1d-4599-97a8-a135238977b4" containerName="route-controller-manager" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.815061 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77fd5c6857-jpwpl" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.827054 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77fd5c6857-jpwpl"] Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.867836 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9e856ca-c719-4e51-8c93-262b602e3fa6-serving-cert\") pod \"route-controller-manager-77fd5c6857-jpwpl\" (UID: \"d9e856ca-c719-4e51-8c93-262b602e3fa6\") " pod="openshift-route-controller-manager/route-controller-manager-77fd5c6857-jpwpl" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.867969 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg8dh\" (UniqueName: \"kubernetes.io/projected/d9e856ca-c719-4e51-8c93-262b602e3fa6-kube-api-access-cg8dh\") pod \"route-controller-manager-77fd5c6857-jpwpl\" (UID: \"d9e856ca-c719-4e51-8c93-262b602e3fa6\") " pod="openshift-route-controller-manager/route-controller-manager-77fd5c6857-jpwpl" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.868063 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9e856ca-c719-4e51-8c93-262b602e3fa6-config\") pod \"route-controller-manager-77fd5c6857-jpwpl\" (UID: \"d9e856ca-c719-4e51-8c93-262b602e3fa6\") " pod="openshift-route-controller-manager/route-controller-manager-77fd5c6857-jpwpl" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.868095 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9e856ca-c719-4e51-8c93-262b602e3fa6-client-ca\") pod \"route-controller-manager-77fd5c6857-jpwpl\" (UID: \"d9e856ca-c719-4e51-8c93-262b602e3fa6\") " pod="openshift-route-controller-manager/route-controller-manager-77fd5c6857-jpwpl" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.882225 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-sftjd" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.969499 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg8dh\" (UniqueName: \"kubernetes.io/projected/d9e856ca-c719-4e51-8c93-262b602e3fa6-kube-api-access-cg8dh\") pod \"route-controller-manager-77fd5c6857-jpwpl\" (UID: \"d9e856ca-c719-4e51-8c93-262b602e3fa6\") " pod="openshift-route-controller-manager/route-controller-manager-77fd5c6857-jpwpl" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.969559 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9e856ca-c719-4e51-8c93-262b602e3fa6-config\") pod \"route-controller-manager-77fd5c6857-jpwpl\" (UID: \"d9e856ca-c719-4e51-8c93-262b602e3fa6\") " pod="openshift-route-controller-manager/route-controller-manager-77fd5c6857-jpwpl" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.969596 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9e856ca-c719-4e51-8c93-262b602e3fa6-client-ca\") pod \"route-controller-manager-77fd5c6857-jpwpl\" (UID: \"d9e856ca-c719-4e51-8c93-262b602e3fa6\") " 
pod="openshift-route-controller-manager/route-controller-manager-77fd5c6857-jpwpl" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.969624 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9e856ca-c719-4e51-8c93-262b602e3fa6-serving-cert\") pod \"route-controller-manager-77fd5c6857-jpwpl\" (UID: \"d9e856ca-c719-4e51-8c93-262b602e3fa6\") " pod="openshift-route-controller-manager/route-controller-manager-77fd5c6857-jpwpl" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.970914 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d9e856ca-c719-4e51-8c93-262b602e3fa6-client-ca\") pod \"route-controller-manager-77fd5c6857-jpwpl\" (UID: \"d9e856ca-c719-4e51-8c93-262b602e3fa6\") " pod="openshift-route-controller-manager/route-controller-manager-77fd5c6857-jpwpl" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.970985 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9e856ca-c719-4e51-8c93-262b602e3fa6-config\") pod \"route-controller-manager-77fd5c6857-jpwpl\" (UID: \"d9e856ca-c719-4e51-8c93-262b602e3fa6\") " pod="openshift-route-controller-manager/route-controller-manager-77fd5c6857-jpwpl" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.974603 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9e856ca-c719-4e51-8c93-262b602e3fa6-serving-cert\") pod \"route-controller-manager-77fd5c6857-jpwpl\" (UID: \"d9e856ca-c719-4e51-8c93-262b602e3fa6\") " pod="openshift-route-controller-manager/route-controller-manager-77fd5c6857-jpwpl" Oct 11 07:52:47 crc kubenswrapper[5016]: I1011 07:52:47.996801 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg8dh\" (UniqueName: \"kubernetes.io/projected/d9e856ca-c719-4e51-8c93-262b602e3fa6-kube-api-access-cg8dh\") pod \"route-controller-manager-77fd5c6857-jpwpl\" (UID: \"d9e856ca-c719-4e51-8c93-262b602e3fa6\") " pod="openshift-route-controller-manager/route-controller-manager-77fd5c6857-jpwpl" Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.033862 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb" event={"ID":"699f1f0c-fc1d-4599-97a8-a135238977b4","Type":"ContainerDied","Data":"cf292dd6de08a66d854924eb307bfc7e9d0354ae398e767686890c49df9f4e52"} Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.034303 5016 scope.go:117] "RemoveContainer" containerID="5912655979a4573f113da293667810b572866ece56e36aa6294ddcbe7c3435da" Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.034037 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb" Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.038277 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-l67lz" event={"ID":"509e4b22-b583-43ca-9c36-bd2ce2b7e753","Type":"ContainerStarted","Data":"64e9f1d4575700b9d4fcc4abb4224e4294505b13a1134b73b1f44c6dd39f2235"} Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.056056 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-l67lz" podStartSLOduration=1.512255778 podStartE2EDuration="2.05602406s" podCreationTimestamp="2025-10-11 07:52:46 +0000 UTC" firstStartedPulling="2025-10-11 07:52:46.862827608 +0000 UTC m=+754.763283554" lastFinishedPulling="2025-10-11 07:52:47.40659589 +0000 UTC m=+755.307051836" observedRunningTime="2025-10-11 07:52:48.055050594 +0000 UTC m=+755.955506540" watchObservedRunningTime="2025-10-11 07:52:48.05602406 +0000 UTC m=+755.956480006" Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.075894 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb"] Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.079434 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sqrgb"] Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.133213 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77fd5c6857-jpwpl" Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.368258 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-694d48b6db-7db5n"] Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.370263 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-694d48b6db-7db5n" Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.374100 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.374121 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.374456 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.374756 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.375701 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.376968 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.381590 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-694d48b6db-7db5n"] Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.386130 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.478188 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/154a3a8e-2384-4300-88ff-7f04ed9d2f25-client-ca\") pod \"controller-manager-694d48b6db-7db5n\" (UID: \"154a3a8e-2384-4300-88ff-7f04ed9d2f25\") " pod="openshift-controller-manager/controller-manager-694d48b6db-7db5n" Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.478440 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/154a3a8e-2384-4300-88ff-7f04ed9d2f25-config\") pod \"controller-manager-694d48b6db-7db5n\" (UID: \"154a3a8e-2384-4300-88ff-7f04ed9d2f25\") " pod="openshift-controller-manager/controller-manager-694d48b6db-7db5n" Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.478611 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckc4t\" (UniqueName: \"kubernetes.io/projected/154a3a8e-2384-4300-88ff-7f04ed9d2f25-kube-api-access-ckc4t\") pod \"controller-manager-694d48b6db-7db5n\" (UID: \"154a3a8e-2384-4300-88ff-7f04ed9d2f25\") " pod="openshift-controller-manager/controller-manager-694d48b6db-7db5n" Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.478679 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/154a3a8e-2384-4300-88ff-7f04ed9d2f25-serving-cert\") pod \"controller-manager-694d48b6db-7db5n\" (UID: \"154a3a8e-2384-4300-88ff-7f04ed9d2f25\") " pod="openshift-controller-manager/controller-manager-694d48b6db-7db5n" Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.478821 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/154a3a8e-2384-4300-88ff-7f04ed9d2f25-proxy-ca-bundles\") pod \"controller-manager-694d48b6db-7db5n\" (UID: \"154a3a8e-2384-4300-88ff-7f04ed9d2f25\") " pod="openshift-controller-manager/controller-manager-694d48b6db-7db5n" Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.524481 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77fd5c6857-jpwpl"] Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.579711 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckc4t\" (UniqueName: \"kubernetes.io/projected/154a3a8e-2384-4300-88ff-7f04ed9d2f25-kube-api-access-ckc4t\") pod \"controller-manager-694d48b6db-7db5n\" (UID: \"154a3a8e-2384-4300-88ff-7f04ed9d2f25\") " pod="openshift-controller-manager/controller-manager-694d48b6db-7db5n" Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.579969 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/154a3a8e-2384-4300-88ff-7f04ed9d2f25-serving-cert\") pod \"controller-manager-694d48b6db-7db5n\" (UID: \"154a3a8e-2384-4300-88ff-7f04ed9d2f25\") " pod="openshift-controller-manager/controller-manager-694d48b6db-7db5n" Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.580074 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/154a3a8e-2384-4300-88ff-7f04ed9d2f25-proxy-ca-bundles\") pod \"controller-manager-694d48b6db-7db5n\" (UID: \"154a3a8e-2384-4300-88ff-7f04ed9d2f25\") " pod="openshift-controller-manager/controller-manager-694d48b6db-7db5n" Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.580208 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/154a3a8e-2384-4300-88ff-7f04ed9d2f25-client-ca\") pod \"controller-manager-694d48b6db-7db5n\" (UID: \"154a3a8e-2384-4300-88ff-7f04ed9d2f25\") " pod="openshift-controller-manager/controller-manager-694d48b6db-7db5n" Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.580304 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/154a3a8e-2384-4300-88ff-7f04ed9d2f25-config\") pod \"controller-manager-694d48b6db-7db5n\" (UID: \"154a3a8e-2384-4300-88ff-7f04ed9d2f25\") " pod="openshift-controller-manager/controller-manager-694d48b6db-7db5n" Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.581037 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/154a3a8e-2384-4300-88ff-7f04ed9d2f25-client-ca\") pod \"controller-manager-694d48b6db-7db5n\" (UID: \"154a3a8e-2384-4300-88ff-7f04ed9d2f25\") " pod="openshift-controller-manager/controller-manager-694d48b6db-7db5n" Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.581198 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/154a3a8e-2384-4300-88ff-7f04ed9d2f25-proxy-ca-bundles\") pod \"controller-manager-694d48b6db-7db5n\" (UID: \"154a3a8e-2384-4300-88ff-7f04ed9d2f25\") " pod="openshift-controller-manager/controller-manager-694d48b6db-7db5n" Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.582100 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/154a3a8e-2384-4300-88ff-7f04ed9d2f25-config\") pod \"controller-manager-694d48b6db-7db5n\" (UID: \"154a3a8e-2384-4300-88ff-7f04ed9d2f25\") " pod="openshift-controller-manager/controller-manager-694d48b6db-7db5n" Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.585893 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/154a3a8e-2384-4300-88ff-7f04ed9d2f25-serving-cert\") pod \"controller-manager-694d48b6db-7db5n\" (UID: \"154a3a8e-2384-4300-88ff-7f04ed9d2f25\") " pod="openshift-controller-manager/controller-manager-694d48b6db-7db5n" Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.596286 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckc4t\" (UniqueName: \"kubernetes.io/projected/154a3a8e-2384-4300-88ff-7f04ed9d2f25-kube-api-access-ckc4t\") pod \"controller-manager-694d48b6db-7db5n\" (UID: \"154a3a8e-2384-4300-88ff-7f04ed9d2f25\") " pod="openshift-controller-manager/controller-manager-694d48b6db-7db5n" Oct 11 07:52:48 crc kubenswrapper[5016]: I1011 07:52:48.688002 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-694d48b6db-7db5n" Oct 11 07:52:49 crc kubenswrapper[5016]: I1011 07:52:49.056958 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77fd5c6857-jpwpl" event={"ID":"d9e856ca-c719-4e51-8c93-262b602e3fa6","Type":"ContainerStarted","Data":"b66289d8ee1c35a461678b9f65ce39ca636c3ec88fbfe52d173130e7d693092f"} Oct 11 07:52:49 crc kubenswrapper[5016]: I1011 07:52:49.059225 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77fd5c6857-jpwpl" event={"ID":"d9e856ca-c719-4e51-8c93-262b602e3fa6","Type":"ContainerStarted","Data":"e4b1976addd4cf314d576770c494eb5c42e60397a7c9d838e2f70afe1b676de0"} Oct 11 07:52:49 crc kubenswrapper[5016]: I1011 07:52:49.079290 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-77fd5c6857-jpwpl" podStartSLOduration=2.079268835 podStartE2EDuration="2.079268835s" podCreationTimestamp="2025-10-11 07:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:52:49.078164135 +0000 UTC m=+756.978620121" watchObservedRunningTime="2025-10-11 07:52:49.079268835 +0000 UTC m=+756.979724781" Oct 11 07:52:49 crc kubenswrapper[5016]: I1011 07:52:49.107699 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-694d48b6db-7db5n"] Oct 11 07:52:49 crc kubenswrapper[5016]: W1011 07:52:49.112332 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod154a3a8e_2384_4300_88ff_7f04ed9d2f25.slice/crio-bf4a15c7323abd3528f148c3b919fcece1a1d9b175a6cd0b0f3b88fb656eb97f WatchSource:0}: Error finding container bf4a15c7323abd3528f148c3b919fcece1a1d9b175a6cd0b0f3b88fb656eb97f: Status 404 returned error can't find the container with id bf4a15c7323abd3528f148c3b919fcece1a1d9b175a6cd0b0f3b88fb656eb97f Oct 11 07:52:49 crc kubenswrapper[5016]: I1011 07:52:49.145268 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="699f1f0c-fc1d-4599-97a8-a135238977b4" 
path="/var/lib/kubelet/pods/699f1f0c-fc1d-4599-97a8-a135238977b4/volumes" Oct 11 07:52:49 crc kubenswrapper[5016]: I1011 07:52:49.145827 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8424bee6-8168-4c9f-b70e-5523e1990bcd" path="/var/lib/kubelet/pods/8424bee6-8168-4c9f-b70e-5523e1990bcd/volumes" Oct 11 07:52:50 crc kubenswrapper[5016]: I1011 07:52:50.063466 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-694d48b6db-7db5n" event={"ID":"154a3a8e-2384-4300-88ff-7f04ed9d2f25","Type":"ContainerStarted","Data":"fe85ee666cf4a0b41ec3331affce724ebfb2f5e65eb437f65b61ff0a2bdb5d13"} Oct 11 07:52:50 crc kubenswrapper[5016]: I1011 07:52:50.063846 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77fd5c6857-jpwpl" Oct 11 07:52:50 crc kubenswrapper[5016]: I1011 07:52:50.063863 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-694d48b6db-7db5n" event={"ID":"154a3a8e-2384-4300-88ff-7f04ed9d2f25","Type":"ContainerStarted","Data":"bf4a15c7323abd3528f148c3b919fcece1a1d9b175a6cd0b0f3b88fb656eb97f"} Oct 11 07:52:50 crc kubenswrapper[5016]: I1011 07:52:50.068481 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77fd5c6857-jpwpl" Oct 11 07:52:50 crc kubenswrapper[5016]: I1011 07:52:50.080091 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-694d48b6db-7db5n" podStartSLOduration=4.080073832 podStartE2EDuration="4.080073832s" podCreationTimestamp="2025-10-11 07:52:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:52:50.07700415 +0000 UTC m=+757.977460106" watchObservedRunningTime="2025-10-11 07:52:50.080073832 +0000 UTC m=+757.980529778" Oct 11 07:52:51 crc kubenswrapper[5016]: I1011 07:52:51.070001 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-694d48b6db-7db5n" Oct 11 07:52:51 crc kubenswrapper[5016]: I1011 07:52:51.078031 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-694d48b6db-7db5n" Oct 11 07:52:55 crc kubenswrapper[5016]: I1011 07:52:55.001034 5016 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 11 07:52:56 crc kubenswrapper[5016]: I1011 07:52:56.574639 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-l67lz" Oct 11 07:52:56 crc kubenswrapper[5016]: I1011 07:52:56.575018 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-l67lz" Oct 11 07:52:56 crc kubenswrapper[5016]: I1011 07:52:56.614838 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-l67lz" Oct 11 07:52:57 crc kubenswrapper[5016]: I1011 07:52:57.158322 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-l67lz" Oct 11 07:52:57 crc kubenswrapper[5016]: I1011 07:52:57.868843 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/frr-k8s-jpstd" Oct 11 07:52:58 crc kubenswrapper[5016]: I1011 07:52:58.070984 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4"] Oct 11 07:52:58 crc kubenswrapper[5016]: I1011 07:52:58.072687 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4" Oct 11 07:52:58 crc kubenswrapper[5016]: I1011 07:52:58.078468 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-4h2ts" Oct 11 07:52:58 crc kubenswrapper[5016]: I1011 07:52:58.084557 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4"] Oct 11 07:52:58 crc kubenswrapper[5016]: I1011 07:52:58.131275 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b22318de-e3c9-4d58-a758-443f2a6f4c9f-util\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4\" (UID: \"b22318de-e3c9-4d58-a758-443f2a6f4c9f\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4" Oct 11 07:52:58 crc kubenswrapper[5016]: I1011 07:52:58.131582 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nfj2\" (UniqueName: \"kubernetes.io/projected/b22318de-e3c9-4d58-a758-443f2a6f4c9f-kube-api-access-8nfj2\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4\" (UID: \"b22318de-e3c9-4d58-a758-443f2a6f4c9f\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4" Oct 11 07:52:58 crc kubenswrapper[5016]: I1011 07:52:58.131762 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b22318de-e3c9-4d58-a758-443f2a6f4c9f-bundle\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4\" (UID: \"b22318de-e3c9-4d58-a758-443f2a6f4c9f\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4" Oct 11 07:52:58 crc kubenswrapper[5016]: I1011 07:52:58.233719 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b22318de-e3c9-4d58-a758-443f2a6f4c9f-util\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4\" (UID: \"b22318de-e3c9-4d58-a758-443f2a6f4c9f\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4" Oct 11 07:52:58 crc kubenswrapper[5016]: I1011 07:52:58.233776 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nfj2\" (UniqueName: \"kubernetes.io/projected/b22318de-e3c9-4d58-a758-443f2a6f4c9f-kube-api-access-8nfj2\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4\" (UID: \"b22318de-e3c9-4d58-a758-443f2a6f4c9f\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4" Oct 11 07:52:58 crc kubenswrapper[5016]: I1011 07:52:58.233852 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b22318de-e3c9-4d58-a758-443f2a6f4c9f-bundle\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4\" (UID: 
\"b22318de-e3c9-4d58-a758-443f2a6f4c9f\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4" Oct 11 07:52:58 crc kubenswrapper[5016]: I1011 07:52:58.234465 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b22318de-e3c9-4d58-a758-443f2a6f4c9f-bundle\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4\" (UID: \"b22318de-e3c9-4d58-a758-443f2a6f4c9f\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4" Oct 11 07:52:58 crc kubenswrapper[5016]: I1011 07:52:58.236030 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b22318de-e3c9-4d58-a758-443f2a6f4c9f-util\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4\" (UID: \"b22318de-e3c9-4d58-a758-443f2a6f4c9f\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4" Oct 11 07:52:58 crc kubenswrapper[5016]: I1011 07:52:58.261394 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nfj2\" (UniqueName: \"kubernetes.io/projected/b22318de-e3c9-4d58-a758-443f2a6f4c9f-kube-api-access-8nfj2\") pod \"bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4\" (UID: \"b22318de-e3c9-4d58-a758-443f2a6f4c9f\") " pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4" Oct 11 07:52:58 crc kubenswrapper[5016]: I1011 07:52:58.398336 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4" Oct 11 07:52:58 crc kubenswrapper[5016]: I1011 07:52:58.901145 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4"] Oct 11 07:52:58 crc kubenswrapper[5016]: W1011 07:52:58.908248 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb22318de_e3c9_4d58_a758_443f2a6f4c9f.slice/crio-331b453483392b698397f578f9ee95289d0463a44c1ec67a8d0db7412d2470b0 WatchSource:0}: Error finding container 331b453483392b698397f578f9ee95289d0463a44c1ec67a8d0db7412d2470b0: Status 404 returned error can't find the container with id 331b453483392b698397f578f9ee95289d0463a44c1ec67a8d0db7412d2470b0 Oct 11 07:52:59 crc kubenswrapper[5016]: I1011 07:52:59.130804 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4" event={"ID":"b22318de-e3c9-4d58-a758-443f2a6f4c9f","Type":"ContainerStarted","Data":"b85d80247462e9d39a96b49a0a2ec706c850a2b6ac9f7b85ab2b35a03240627c"} Oct 11 07:52:59 crc kubenswrapper[5016]: I1011 07:52:59.131163 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4" event={"ID":"b22318de-e3c9-4d58-a758-443f2a6f4c9f","Type":"ContainerStarted","Data":"331b453483392b698397f578f9ee95289d0463a44c1ec67a8d0db7412d2470b0"} Oct 11 07:53:00 crc kubenswrapper[5016]: I1011 07:53:00.141833 5016 generic.go:334] "Generic (PLEG): container finished" podID="b22318de-e3c9-4d58-a758-443f2a6f4c9f" containerID="b85d80247462e9d39a96b49a0a2ec706c850a2b6ac9f7b85ab2b35a03240627c" exitCode=0 Oct 11 07:53:00 crc kubenswrapper[5016]: I1011 07:53:00.141922 5016 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4" event={"ID":"b22318de-e3c9-4d58-a758-443f2a6f4c9f","Type":"ContainerDied","Data":"b85d80247462e9d39a96b49a0a2ec706c850a2b6ac9f7b85ab2b35a03240627c"} Oct 11 07:53:01 crc kubenswrapper[5016]: I1011 07:53:01.157302 5016 generic.go:334] "Generic (PLEG): container finished" podID="b22318de-e3c9-4d58-a758-443f2a6f4c9f" containerID="f0155b0f2580183fb1db85835c764aa5d6a017b88d8f88e84b6b175d02016eed" exitCode=0 Oct 11 07:53:01 crc kubenswrapper[5016]: I1011 07:53:01.157412 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4" event={"ID":"b22318de-e3c9-4d58-a758-443f2a6f4c9f","Type":"ContainerDied","Data":"f0155b0f2580183fb1db85835c764aa5d6a017b88d8f88e84b6b175d02016eed"} Oct 11 07:53:02 crc kubenswrapper[5016]: I1011 07:53:02.168560 5016 generic.go:334] "Generic (PLEG): container finished" podID="b22318de-e3c9-4d58-a758-443f2a6f4c9f" containerID="b3a3f3fc48dcb0218338a5f93be45068dbd4ed6dfd95c929ed4b572b1f2184b2" exitCode=0 Oct 11 07:53:02 crc kubenswrapper[5016]: I1011 07:53:02.168707 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4" event={"ID":"b22318de-e3c9-4d58-a758-443f2a6f4c9f","Type":"ContainerDied","Data":"b3a3f3fc48dcb0218338a5f93be45068dbd4ed6dfd95c929ed4b572b1f2184b2"} Oct 11 07:53:03 crc kubenswrapper[5016]: I1011 07:53:03.602593 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4" Oct 11 07:53:03 crc kubenswrapper[5016]: I1011 07:53:03.722347 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nfj2\" (UniqueName: \"kubernetes.io/projected/b22318de-e3c9-4d58-a758-443f2a6f4c9f-kube-api-access-8nfj2\") pod \"b22318de-e3c9-4d58-a758-443f2a6f4c9f\" (UID: \"b22318de-e3c9-4d58-a758-443f2a6f4c9f\") " Oct 11 07:53:03 crc kubenswrapper[5016]: I1011 07:53:03.722495 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b22318de-e3c9-4d58-a758-443f2a6f4c9f-bundle\") pod \"b22318de-e3c9-4d58-a758-443f2a6f4c9f\" (UID: \"b22318de-e3c9-4d58-a758-443f2a6f4c9f\") " Oct 11 07:53:03 crc kubenswrapper[5016]: I1011 07:53:03.722628 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b22318de-e3c9-4d58-a758-443f2a6f4c9f-util\") pod \"b22318de-e3c9-4d58-a758-443f2a6f4c9f\" (UID: \"b22318de-e3c9-4d58-a758-443f2a6f4c9f\") " Oct 11 07:53:03 crc kubenswrapper[5016]: I1011 07:53:03.724404 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b22318de-e3c9-4d58-a758-443f2a6f4c9f-bundle" (OuterVolumeSpecName: "bundle") pod "b22318de-e3c9-4d58-a758-443f2a6f4c9f" (UID: "b22318de-e3c9-4d58-a758-443f2a6f4c9f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:53:03 crc kubenswrapper[5016]: I1011 07:53:03.730397 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b22318de-e3c9-4d58-a758-443f2a6f4c9f-kube-api-access-8nfj2" (OuterVolumeSpecName: "kube-api-access-8nfj2") pod "b22318de-e3c9-4d58-a758-443f2a6f4c9f" (UID: "b22318de-e3c9-4d58-a758-443f2a6f4c9f"). 
InnerVolumeSpecName "kube-api-access-8nfj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:53:03 crc kubenswrapper[5016]: I1011 07:53:03.752571 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b22318de-e3c9-4d58-a758-443f2a6f4c9f-util" (OuterVolumeSpecName: "util") pod "b22318de-e3c9-4d58-a758-443f2a6f4c9f" (UID: "b22318de-e3c9-4d58-a758-443f2a6f4c9f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:53:03 crc kubenswrapper[5016]: I1011 07:53:03.824104 5016 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b22318de-e3c9-4d58-a758-443f2a6f4c9f-util\") on node \"crc\" DevicePath \"\"" Oct 11 07:53:03 crc kubenswrapper[5016]: I1011 07:53:03.824164 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nfj2\" (UniqueName: \"kubernetes.io/projected/b22318de-e3c9-4d58-a758-443f2a6f4c9f-kube-api-access-8nfj2\") on node \"crc\" DevicePath \"\"" Oct 11 07:53:03 crc kubenswrapper[5016]: I1011 07:53:03.824185 5016 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b22318de-e3c9-4d58-a758-443f2a6f4c9f-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:53:04 crc kubenswrapper[5016]: I1011 07:53:04.188210 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4" event={"ID":"b22318de-e3c9-4d58-a758-443f2a6f4c9f","Type":"ContainerDied","Data":"331b453483392b698397f578f9ee95289d0463a44c1ec67a8d0db7412d2470b0"} Oct 11 07:53:04 crc kubenswrapper[5016]: I1011 07:53:04.188261 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4" Oct 11 07:53:04 crc kubenswrapper[5016]: I1011 07:53:04.188269 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="331b453483392b698397f578f9ee95289d0463a44c1ec67a8d0db7412d2470b0" Oct 11 07:53:07 crc kubenswrapper[5016]: I1011 07:53:07.158219 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tjbfc"] Oct 11 07:53:07 crc kubenswrapper[5016]: E1011 07:53:07.159172 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b22318de-e3c9-4d58-a758-443f2a6f4c9f" containerName="pull" Oct 11 07:53:07 crc kubenswrapper[5016]: I1011 07:53:07.159208 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="b22318de-e3c9-4d58-a758-443f2a6f4c9f" containerName="pull" Oct 11 07:53:07 crc kubenswrapper[5016]: E1011 07:53:07.159239 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b22318de-e3c9-4d58-a758-443f2a6f4c9f" containerName="util" Oct 11 07:53:07 crc kubenswrapper[5016]: I1011 07:53:07.159255 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="b22318de-e3c9-4d58-a758-443f2a6f4c9f" containerName="util" Oct 11 07:53:07 crc kubenswrapper[5016]: E1011 07:53:07.159294 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b22318de-e3c9-4d58-a758-443f2a6f4c9f" containerName="extract" Oct 11 07:53:07 crc kubenswrapper[5016]: I1011 07:53:07.159313 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="b22318de-e3c9-4d58-a758-443f2a6f4c9f" containerName="extract" Oct 11 07:53:07 crc kubenswrapper[5016]: I1011 07:53:07.159585 5016 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b22318de-e3c9-4d58-a758-443f2a6f4c9f" containerName="extract" Oct 11 07:53:07 crc kubenswrapper[5016]: I1011 07:53:07.161423 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tjbfc" Oct 11 07:53:07 crc kubenswrapper[5016]: I1011 07:53:07.167365 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e91b66d4-3420-498b-915f-028eb85d3afa-utilities\") pod \"redhat-operators-tjbfc\" (UID: \"e91b66d4-3420-498b-915f-028eb85d3afa\") " pod="openshift-marketplace/redhat-operators-tjbfc" Oct 11 07:53:07 crc kubenswrapper[5016]: I1011 07:53:07.167673 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e91b66d4-3420-498b-915f-028eb85d3afa-catalog-content\") pod \"redhat-operators-tjbfc\" (UID: \"e91b66d4-3420-498b-915f-028eb85d3afa\") " pod="openshift-marketplace/redhat-operators-tjbfc" Oct 11 07:53:07 crc kubenswrapper[5016]: I1011 07:53:07.168241 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thrbs\" (UniqueName: \"kubernetes.io/projected/e91b66d4-3420-498b-915f-028eb85d3afa-kube-api-access-thrbs\") pod \"redhat-operators-tjbfc\" (UID: \"e91b66d4-3420-498b-915f-028eb85d3afa\") " pod="openshift-marketplace/redhat-operators-tjbfc" Oct 11 07:53:07 crc kubenswrapper[5016]: I1011 07:53:07.177480 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tjbfc"] Oct 11 07:53:07 crc kubenswrapper[5016]: I1011 07:53:07.270409 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thrbs\" (UniqueName: \"kubernetes.io/projected/e91b66d4-3420-498b-915f-028eb85d3afa-kube-api-access-thrbs\") pod \"redhat-operators-tjbfc\" (UID: \"e91b66d4-3420-498b-915f-028eb85d3afa\") " pod="openshift-marketplace/redhat-operators-tjbfc" Oct 11 07:53:07 crc kubenswrapper[5016]: I1011 07:53:07.270516 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e91b66d4-3420-498b-915f-028eb85d3afa-utilities\") pod \"redhat-operators-tjbfc\" (UID: \"e91b66d4-3420-498b-915f-028eb85d3afa\") " pod="openshift-marketplace/redhat-operators-tjbfc" Oct 11 07:53:07 crc kubenswrapper[5016]: I1011 07:53:07.270578 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e91b66d4-3420-498b-915f-028eb85d3afa-catalog-content\") pod \"redhat-operators-tjbfc\" (UID: \"e91b66d4-3420-498b-915f-028eb85d3afa\") " pod="openshift-marketplace/redhat-operators-tjbfc" Oct 11 07:53:07 crc kubenswrapper[5016]: I1011 07:53:07.271885 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e91b66d4-3420-498b-915f-028eb85d3afa-utilities\") pod \"redhat-operators-tjbfc\" (UID: \"e91b66d4-3420-498b-915f-028eb85d3afa\") " pod="openshift-marketplace/redhat-operators-tjbfc" Oct 11 07:53:07 crc kubenswrapper[5016]: I1011 07:53:07.272089 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e91b66d4-3420-498b-915f-028eb85d3afa-catalog-content\") pod \"redhat-operators-tjbfc\" (UID: \"e91b66d4-3420-498b-915f-028eb85d3afa\") " 
pod="openshift-marketplace/redhat-operators-tjbfc" Oct 11 07:53:07 crc kubenswrapper[5016]: I1011 07:53:07.302927 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thrbs\" (UniqueName: \"kubernetes.io/projected/e91b66d4-3420-498b-915f-028eb85d3afa-kube-api-access-thrbs\") pod \"redhat-operators-tjbfc\" (UID: \"e91b66d4-3420-498b-915f-028eb85d3afa\") " pod="openshift-marketplace/redhat-operators-tjbfc" Oct 11 07:53:07 crc kubenswrapper[5016]: I1011 07:53:07.509254 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tjbfc" Oct 11 07:53:07 crc kubenswrapper[5016]: I1011 07:53:07.925103 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tjbfc"] Oct 11 07:53:08 crc kubenswrapper[5016]: I1011 07:53:08.220982 5016 generic.go:334] "Generic (PLEG): container finished" podID="e91b66d4-3420-498b-915f-028eb85d3afa" containerID="a76190263a8a1d653b8fc80718a50c1b5d73ef65cc9e2a7763ab1206f8a26d10" exitCode=0 Oct 11 07:53:08 crc kubenswrapper[5016]: I1011 07:53:08.221078 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tjbfc" event={"ID":"e91b66d4-3420-498b-915f-028eb85d3afa","Type":"ContainerDied","Data":"a76190263a8a1d653b8fc80718a50c1b5d73ef65cc9e2a7763ab1206f8a26d10"} Oct 11 07:53:08 crc kubenswrapper[5016]: I1011 07:53:08.221349 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tjbfc" event={"ID":"e91b66d4-3420-498b-915f-028eb85d3afa","Type":"ContainerStarted","Data":"f08e9b3aeee3055db9dc29f3781d062b8d283319ac404b886244520ac460e8f5"} Oct 11 07:53:09 crc kubenswrapper[5016]: I1011 07:53:09.228737 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tjbfc" event={"ID":"e91b66d4-3420-498b-915f-028eb85d3afa","Type":"ContainerStarted","Data":"51372acb9a37fc309106c08e1a596948c2acc71a727014b00f69997ad0ef896b"} Oct 11 07:53:09 crc kubenswrapper[5016]: I1011 07:53:09.449089 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-688d597459-gk6ql"] Oct 11 07:53:09 crc kubenswrapper[5016]: I1011 07:53:09.449963 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-688d597459-gk6ql" Oct 11 07:53:09 crc kubenswrapper[5016]: I1011 07:53:09.452077 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-wqx8h" Oct 11 07:53:09 crc kubenswrapper[5016]: I1011 07:53:09.491567 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-688d597459-gk6ql"] Oct 11 07:53:09 crc kubenswrapper[5016]: I1011 07:53:09.511070 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt26n\" (UniqueName: \"kubernetes.io/projected/35cdb9c9-3cf9-4025-b95e-7d62879eb20a-kube-api-access-jt26n\") pod \"openstack-operator-controller-operator-688d597459-gk6ql\" (UID: \"35cdb9c9-3cf9-4025-b95e-7d62879eb20a\") " pod="openstack-operators/openstack-operator-controller-operator-688d597459-gk6ql" Oct 11 07:53:09 crc kubenswrapper[5016]: I1011 07:53:09.612126 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt26n\" (UniqueName: \"kubernetes.io/projected/35cdb9c9-3cf9-4025-b95e-7d62879eb20a-kube-api-access-jt26n\") pod \"openstack-operator-controller-operator-688d597459-gk6ql\" (UID: \"35cdb9c9-3cf9-4025-b95e-7d62879eb20a\") " pod="openstack-operators/openstack-operator-controller-operator-688d597459-gk6ql" Oct 11 07:53:09 crc kubenswrapper[5016]: I1011 07:53:09.630445 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt26n\" (UniqueName: \"kubernetes.io/projected/35cdb9c9-3cf9-4025-b95e-7d62879eb20a-kube-api-access-jt26n\") pod \"openstack-operator-controller-operator-688d597459-gk6ql\" (UID: \"35cdb9c9-3cf9-4025-b95e-7d62879eb20a\") " pod="openstack-operators/openstack-operator-controller-operator-688d597459-gk6ql" Oct 11 07:53:09 crc kubenswrapper[5016]: I1011 07:53:09.777115 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-688d597459-gk6ql" Oct 11 07:53:10 crc kubenswrapper[5016]: I1011 07:53:10.237549 5016 generic.go:334] "Generic (PLEG): container finished" podID="e91b66d4-3420-498b-915f-028eb85d3afa" containerID="51372acb9a37fc309106c08e1a596948c2acc71a727014b00f69997ad0ef896b" exitCode=0 Oct 11 07:53:10 crc kubenswrapper[5016]: I1011 07:53:10.237698 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tjbfc" event={"ID":"e91b66d4-3420-498b-915f-028eb85d3afa","Type":"ContainerDied","Data":"51372acb9a37fc309106c08e1a596948c2acc71a727014b00f69997ad0ef896b"} Oct 11 07:53:10 crc kubenswrapper[5016]: I1011 07:53:10.269592 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-688d597459-gk6ql"] Oct 11 07:53:10 crc kubenswrapper[5016]: W1011 07:53:10.274218 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35cdb9c9_3cf9_4025_b95e_7d62879eb20a.slice/crio-e87ba0f03b9aa68b7045bc5b27987895ec9c45f962e7c2684efe1f3eed474031 WatchSource:0}: Error finding container e87ba0f03b9aa68b7045bc5b27987895ec9c45f962e7c2684efe1f3eed474031: Status 404 returned error can't find the container with id e87ba0f03b9aa68b7045bc5b27987895ec9c45f962e7c2684efe1f3eed474031 Oct 11 07:53:11 crc kubenswrapper[5016]: I1011 07:53:11.248848 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-688d597459-gk6ql" event={"ID":"35cdb9c9-3cf9-4025-b95e-7d62879eb20a","Type":"ContainerStarted","Data":"e87ba0f03b9aa68b7045bc5b27987895ec9c45f962e7c2684efe1f3eed474031"} Oct 11 07:53:11 crc kubenswrapper[5016]: I1011 07:53:11.251116 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tjbfc" event={"ID":"e91b66d4-3420-498b-915f-028eb85d3afa","Type":"ContainerStarted","Data":"7eca5781186388c53b62b18257ff77d3feee72240b5f7d4a96cef7fb7723489f"} Oct 11 07:53:11 crc kubenswrapper[5016]: I1011 07:53:11.286387 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tjbfc" podStartSLOduration=1.5687041750000001 podStartE2EDuration="4.286368254s" podCreationTimestamp="2025-10-11 07:53:07 +0000 UTC" firstStartedPulling="2025-10-11 07:53:08.223119679 +0000 UTC m=+776.123575625" lastFinishedPulling="2025-10-11 07:53:10.940783748 +0000 UTC m=+778.841239704" observedRunningTime="2025-10-11 07:53:11.285594126 +0000 UTC m=+779.186050072" watchObservedRunningTime="2025-10-11 07:53:11.286368254 +0000 UTC m=+779.186824200" Oct 11 07:53:15 crc kubenswrapper[5016]: I1011 07:53:15.278923 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-688d597459-gk6ql" event={"ID":"35cdb9c9-3cf9-4025-b95e-7d62879eb20a","Type":"ContainerStarted","Data":"1b1b1ea4ba7cd64eb3f58d39ffa1722299389b5871ffc33fdbd43f2cfe7fc831"} Oct 11 07:53:17 crc kubenswrapper[5016]: I1011 07:53:17.510113 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tjbfc" Oct 11 07:53:17 crc kubenswrapper[5016]: I1011 07:53:17.510447 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tjbfc" Oct 11 07:53:17 crc kubenswrapper[5016]: I1011 07:53:17.549580 5016 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tjbfc" Oct 11 07:53:18 crc kubenswrapper[5016]: I1011 07:53:18.309448 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-688d597459-gk6ql" event={"ID":"35cdb9c9-3cf9-4025-b95e-7d62879eb20a","Type":"ContainerStarted","Data":"80731d8e01fa1217ec70a701b363d8985591f3769245279011085b145ccba7ce"} Oct 11 07:53:18 crc kubenswrapper[5016]: I1011 07:53:18.342727 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-688d597459-gk6ql" podStartSLOduration=2.05705175 podStartE2EDuration="9.342708638s" podCreationTimestamp="2025-10-11 07:53:09 +0000 UTC" firstStartedPulling="2025-10-11 07:53:10.280749559 +0000 UTC m=+778.181205505" lastFinishedPulling="2025-10-11 07:53:17.566406447 +0000 UTC m=+785.466862393" observedRunningTime="2025-10-11 07:53:18.339197822 +0000 UTC m=+786.239653808" watchObservedRunningTime="2025-10-11 07:53:18.342708638 +0000 UTC m=+786.243164594" Oct 11 07:53:18 crc kubenswrapper[5016]: I1011 07:53:18.359026 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tjbfc" Oct 11 07:53:19 crc kubenswrapper[5016]: I1011 07:53:19.314487 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-688d597459-gk6ql" Oct 11 07:53:19 crc kubenswrapper[5016]: I1011 07:53:19.317282 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-688d597459-gk6ql" Oct 11 07:53:19 crc kubenswrapper[5016]: I1011 07:53:19.515022 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tjbfc"] Oct 11 07:53:20 crc kubenswrapper[5016]: I1011 07:53:20.319211 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tjbfc" podUID="e91b66d4-3420-498b-915f-028eb85d3afa" containerName="registry-server" containerID="cri-o://7eca5781186388c53b62b18257ff77d3feee72240b5f7d4a96cef7fb7723489f" gracePeriod=2 Oct 11 07:53:20 crc kubenswrapper[5016]: I1011 07:53:20.955235 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tjbfc" Oct 11 07:53:21 crc kubenswrapper[5016]: I1011 07:53:21.073088 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e91b66d4-3420-498b-915f-028eb85d3afa-catalog-content\") pod \"e91b66d4-3420-498b-915f-028eb85d3afa\" (UID: \"e91b66d4-3420-498b-915f-028eb85d3afa\") " Oct 11 07:53:21 crc kubenswrapper[5016]: I1011 07:53:21.073158 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thrbs\" (UniqueName: \"kubernetes.io/projected/e91b66d4-3420-498b-915f-028eb85d3afa-kube-api-access-thrbs\") pod \"e91b66d4-3420-498b-915f-028eb85d3afa\" (UID: \"e91b66d4-3420-498b-915f-028eb85d3afa\") " Oct 11 07:53:21 crc kubenswrapper[5016]: I1011 07:53:21.073238 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e91b66d4-3420-498b-915f-028eb85d3afa-utilities\") pod \"e91b66d4-3420-498b-915f-028eb85d3afa\" (UID: \"e91b66d4-3420-498b-915f-028eb85d3afa\") " Oct 11 07:53:21 crc kubenswrapper[5016]: I1011 07:53:21.074096 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e91b66d4-3420-498b-915f-028eb85d3afa-utilities" (OuterVolumeSpecName: "utilities") pod "e91b66d4-3420-498b-915f-028eb85d3afa" (UID: "e91b66d4-3420-498b-915f-028eb85d3afa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:53:21 crc kubenswrapper[5016]: I1011 07:53:21.084139 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e91b66d4-3420-498b-915f-028eb85d3afa-kube-api-access-thrbs" (OuterVolumeSpecName: "kube-api-access-thrbs") pod "e91b66d4-3420-498b-915f-028eb85d3afa" (UID: "e91b66d4-3420-498b-915f-028eb85d3afa"). InnerVolumeSpecName "kube-api-access-thrbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:53:21 crc kubenswrapper[5016]: I1011 07:53:21.175396 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thrbs\" (UniqueName: \"kubernetes.io/projected/e91b66d4-3420-498b-915f-028eb85d3afa-kube-api-access-thrbs\") on node \"crc\" DevicePath \"\"" Oct 11 07:53:21 crc kubenswrapper[5016]: I1011 07:53:21.175428 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e91b66d4-3420-498b-915f-028eb85d3afa-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 07:53:21 crc kubenswrapper[5016]: I1011 07:53:21.181474 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e91b66d4-3420-498b-915f-028eb85d3afa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e91b66d4-3420-498b-915f-028eb85d3afa" (UID: "e91b66d4-3420-498b-915f-028eb85d3afa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 07:53:21 crc kubenswrapper[5016]: I1011 07:53:21.277131 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e91b66d4-3420-498b-915f-028eb85d3afa-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 11 07:53:21 crc kubenswrapper[5016]: I1011 07:53:21.329487 5016 generic.go:334] "Generic (PLEG): container finished" podID="e91b66d4-3420-498b-915f-028eb85d3afa" containerID="7eca5781186388c53b62b18257ff77d3feee72240b5f7d4a96cef7fb7723489f" exitCode=0
Oct 11 07:53:21 crc kubenswrapper[5016]: I1011 07:53:21.329577 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tjbfc" event={"ID":"e91b66d4-3420-498b-915f-028eb85d3afa","Type":"ContainerDied","Data":"7eca5781186388c53b62b18257ff77d3feee72240b5f7d4a96cef7fb7723489f"}
Oct 11 07:53:21 crc kubenswrapper[5016]: I1011 07:53:21.329597 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tjbfc"
Oct 11 07:53:21 crc kubenswrapper[5016]: I1011 07:53:21.329641 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tjbfc" event={"ID":"e91b66d4-3420-498b-915f-028eb85d3afa","Type":"ContainerDied","Data":"f08e9b3aeee3055db9dc29f3781d062b8d283319ac404b886244520ac460e8f5"}
Oct 11 07:53:21 crc kubenswrapper[5016]: I1011 07:53:21.329689 5016 scope.go:117] "RemoveContainer" containerID="7eca5781186388c53b62b18257ff77d3feee72240b5f7d4a96cef7fb7723489f"
Oct 11 07:53:21 crc kubenswrapper[5016]: I1011 07:53:21.353363 5016 scope.go:117] "RemoveContainer" containerID="51372acb9a37fc309106c08e1a596948c2acc71a727014b00f69997ad0ef896b"
Oct 11 07:53:21 crc kubenswrapper[5016]: I1011 07:53:21.366490 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tjbfc"]
Oct 11 07:53:21 crc kubenswrapper[5016]: I1011 07:53:21.370035 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tjbfc"]
Oct 11 07:53:21 crc kubenswrapper[5016]: I1011 07:53:21.395258 5016 scope.go:117] "RemoveContainer" containerID="a76190263a8a1d653b8fc80718a50c1b5d73ef65cc9e2a7763ab1206f8a26d10"
Oct 11 07:53:21 crc kubenswrapper[5016]: I1011 07:53:21.419184 5016 scope.go:117] "RemoveContainer" containerID="7eca5781186388c53b62b18257ff77d3feee72240b5f7d4a96cef7fb7723489f"
Oct 11 07:53:21 crc kubenswrapper[5016]: E1011 07:53:21.419752 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7eca5781186388c53b62b18257ff77d3feee72240b5f7d4a96cef7fb7723489f\": container with ID starting with 7eca5781186388c53b62b18257ff77d3feee72240b5f7d4a96cef7fb7723489f not found: ID does not exist" containerID="7eca5781186388c53b62b18257ff77d3feee72240b5f7d4a96cef7fb7723489f"
Oct 11 07:53:21 crc kubenswrapper[5016]: I1011 07:53:21.419799 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eca5781186388c53b62b18257ff77d3feee72240b5f7d4a96cef7fb7723489f"} err="failed to get container status \"7eca5781186388c53b62b18257ff77d3feee72240b5f7d4a96cef7fb7723489f\": rpc error: code = NotFound desc = could not find container \"7eca5781186388c53b62b18257ff77d3feee72240b5f7d4a96cef7fb7723489f\": container with ID starting with 7eca5781186388c53b62b18257ff77d3feee72240b5f7d4a96cef7fb7723489f not found: ID does not exist"
Oct 11 07:53:21 crc kubenswrapper[5016]: I1011 07:53:21.419831 5016 scope.go:117] "RemoveContainer" containerID="51372acb9a37fc309106c08e1a596948c2acc71a727014b00f69997ad0ef896b"
Oct 11 07:53:21 crc kubenswrapper[5016]: E1011 07:53:21.420152 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51372acb9a37fc309106c08e1a596948c2acc71a727014b00f69997ad0ef896b\": container with ID starting with 51372acb9a37fc309106c08e1a596948c2acc71a727014b00f69997ad0ef896b not found: ID does not exist" containerID="51372acb9a37fc309106c08e1a596948c2acc71a727014b00f69997ad0ef896b"
Oct 11 07:53:21 crc kubenswrapper[5016]: I1011 07:53:21.420188 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51372acb9a37fc309106c08e1a596948c2acc71a727014b00f69997ad0ef896b"} err="failed to get container status \"51372acb9a37fc309106c08e1a596948c2acc71a727014b00f69997ad0ef896b\": rpc error: code = NotFound desc = could not find container \"51372acb9a37fc309106c08e1a596948c2acc71a727014b00f69997ad0ef896b\": container with ID starting with 51372acb9a37fc309106c08e1a596948c2acc71a727014b00f69997ad0ef896b not found: ID does not exist"
Oct 11 07:53:21 crc kubenswrapper[5016]: I1011 07:53:21.420216 5016 scope.go:117] "RemoveContainer" containerID="a76190263a8a1d653b8fc80718a50c1b5d73ef65cc9e2a7763ab1206f8a26d10"
Oct 11 07:53:21 crc kubenswrapper[5016]: E1011 07:53:21.420727 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a76190263a8a1d653b8fc80718a50c1b5d73ef65cc9e2a7763ab1206f8a26d10\": container with ID starting with a76190263a8a1d653b8fc80718a50c1b5d73ef65cc9e2a7763ab1206f8a26d10 not found: ID does not exist" containerID="a76190263a8a1d653b8fc80718a50c1b5d73ef65cc9e2a7763ab1206f8a26d10"
Oct 11 07:53:21 crc kubenswrapper[5016]: I1011 07:53:21.420761 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a76190263a8a1d653b8fc80718a50c1b5d73ef65cc9e2a7763ab1206f8a26d10"} err="failed to get container status \"a76190263a8a1d653b8fc80718a50c1b5d73ef65cc9e2a7763ab1206f8a26d10\": rpc error: code = NotFound desc = could not find container \"a76190263a8a1d653b8fc80718a50c1b5d73ef65cc9e2a7763ab1206f8a26d10\": container with ID starting with a76190263a8a1d653b8fc80718a50c1b5d73ef65cc9e2a7763ab1206f8a26d10 not found: ID does not exist"
Oct 11 07:53:23 crc kubenswrapper[5016]: I1011 07:53:23.160556 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e91b66d4-3420-498b-915f-028eb85d3afa" path="/var/lib/kubelet/pods/e91b66d4-3420-498b-915f-028eb85d3afa/volumes"
Oct 11 07:53:23 crc kubenswrapper[5016]: I1011 07:53:23.926285 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9j9jm"]
Oct 11 07:53:23 crc kubenswrapper[5016]: E1011 07:53:23.926569 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e91b66d4-3420-498b-915f-028eb85d3afa" containerName="registry-server"
Oct 11 07:53:23 crc kubenswrapper[5016]: I1011 07:53:23.926590 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="e91b66d4-3420-498b-915f-028eb85d3afa" containerName="registry-server"
Oct 11 07:53:23 crc kubenswrapper[5016]: E1011 07:53:23.926607 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e91b66d4-3420-498b-915f-028eb85d3afa" containerName="extract-utilities"
Oct 11 07:53:23 crc kubenswrapper[5016]: I1011 07:53:23.926614 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="e91b66d4-3420-498b-915f-028eb85d3afa" containerName="extract-utilities"
Oct 11 07:53:23 crc kubenswrapper[5016]: E1011 07:53:23.926639 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e91b66d4-3420-498b-915f-028eb85d3afa" containerName="extract-content"
Oct 11 07:53:23 crc kubenswrapper[5016]: I1011 07:53:23.926646 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="e91b66d4-3420-498b-915f-028eb85d3afa" containerName="extract-content"
Oct 11 07:53:23 crc kubenswrapper[5016]: I1011 07:53:23.926807 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="e91b66d4-3420-498b-915f-028eb85d3afa" containerName="registry-server"
Oct 11 07:53:23 crc kubenswrapper[5016]: I1011 07:53:23.927813 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9j9jm"
Oct 11 07:53:23 crc kubenswrapper[5016]: I1011 07:53:23.951385 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9j9jm"]
Oct 11 07:53:24 crc kubenswrapper[5016]: I1011 07:53:24.114705 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll2hz\" (UniqueName: \"kubernetes.io/projected/bf574e0c-536b-468f-a927-08fdab5f36ff-kube-api-access-ll2hz\") pod \"community-operators-9j9jm\" (UID: \"bf574e0c-536b-468f-a927-08fdab5f36ff\") " pod="openshift-marketplace/community-operators-9j9jm"
Oct 11 07:53:24 crc kubenswrapper[5016]: I1011 07:53:24.114746 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf574e0c-536b-468f-a927-08fdab5f36ff-catalog-content\") pod \"community-operators-9j9jm\" (UID: \"bf574e0c-536b-468f-a927-08fdab5f36ff\") " pod="openshift-marketplace/community-operators-9j9jm"
Oct 11 07:53:24 crc kubenswrapper[5016]: I1011 07:53:24.114868 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf574e0c-536b-468f-a927-08fdab5f36ff-utilities\") pod \"community-operators-9j9jm\" (UID: \"bf574e0c-536b-468f-a927-08fdab5f36ff\") " pod="openshift-marketplace/community-operators-9j9jm"
Oct 11 07:53:24 crc kubenswrapper[5016]: I1011 07:53:24.216857 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf574e0c-536b-468f-a927-08fdab5f36ff-catalog-content\") pod \"community-operators-9j9jm\" (UID: \"bf574e0c-536b-468f-a927-08fdab5f36ff\") " pod="openshift-marketplace/community-operators-9j9jm"
Oct 11 07:53:24 crc kubenswrapper[5016]: I1011 07:53:24.217527 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll2hz\" (UniqueName: \"kubernetes.io/projected/bf574e0c-536b-468f-a927-08fdab5f36ff-kube-api-access-ll2hz\") pod \"community-operators-9j9jm\" (UID: \"bf574e0c-536b-468f-a927-08fdab5f36ff\") " pod="openshift-marketplace/community-operators-9j9jm"
Oct 11 07:53:24 crc kubenswrapper[5016]: I1011 07:53:24.218036 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf574e0c-536b-468f-a927-08fdab5f36ff-utilities\") pod \"community-operators-9j9jm\" (UID: \"bf574e0c-536b-468f-a927-08fdab5f36ff\") " pod="openshift-marketplace/community-operators-9j9jm"
Oct 11 07:53:24 crc kubenswrapper[5016]: I1011 07:53:24.217439 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf574e0c-536b-468f-a927-08fdab5f36ff-catalog-content\") pod \"community-operators-9j9jm\" (UID: \"bf574e0c-536b-468f-a927-08fdab5f36ff\") " pod="openshift-marketplace/community-operators-9j9jm"
Oct 11 07:53:24 crc kubenswrapper[5016]: I1011 07:53:24.218507 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf574e0c-536b-468f-a927-08fdab5f36ff-utilities\") pod \"community-operators-9j9jm\" (UID: \"bf574e0c-536b-468f-a927-08fdab5f36ff\") " pod="openshift-marketplace/community-operators-9j9jm"
Oct 11 07:53:24 crc kubenswrapper[5016]: I1011 07:53:24.255402 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll2hz\" (UniqueName: \"kubernetes.io/projected/bf574e0c-536b-468f-a927-08fdab5f36ff-kube-api-access-ll2hz\") pod \"community-operators-9j9jm\" (UID: \"bf574e0c-536b-468f-a927-08fdab5f36ff\") " pod="openshift-marketplace/community-operators-9j9jm"
Oct 11 07:53:24 crc kubenswrapper[5016]: I1011 07:53:24.545969 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9j9jm"
Oct 11 07:53:25 crc kubenswrapper[5016]: I1011 07:53:25.057139 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9j9jm"]
Oct 11 07:53:25 crc kubenswrapper[5016]: I1011 07:53:25.352905 5016 generic.go:334] "Generic (PLEG): container finished" podID="bf574e0c-536b-468f-a927-08fdab5f36ff" containerID="11a0806c48cafe5701a3adc6b788b03215ea3735f80b0c57c044e6d710506ea3" exitCode=0
Oct 11 07:53:25 crc kubenswrapper[5016]: I1011 07:53:25.353008 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9j9jm" event={"ID":"bf574e0c-536b-468f-a927-08fdab5f36ff","Type":"ContainerDied","Data":"11a0806c48cafe5701a3adc6b788b03215ea3735f80b0c57c044e6d710506ea3"}
Oct 11 07:53:25 crc kubenswrapper[5016]: I1011 07:53:25.354217 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9j9jm" event={"ID":"bf574e0c-536b-468f-a927-08fdab5f36ff","Type":"ContainerStarted","Data":"526590aede1ce43ee201c4647e5fd9fcab717e3427cc46d2e9abb8bdc72d70a3"}
Oct 11 07:53:26 crc kubenswrapper[5016]: I1011 07:53:26.361178 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9j9jm" event={"ID":"bf574e0c-536b-468f-a927-08fdab5f36ff","Type":"ContainerStarted","Data":"46806825913e803a203d7e5a5397d4b1b102f10b9cd49c909de6815e9f1fb9a7"}
Oct 11 07:53:26 crc kubenswrapper[5016]: I1011 07:53:26.726007 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-scgsv"]
Oct 11 07:53:26 crc kubenswrapper[5016]: I1011 07:53:26.727720 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-scgsv"
Oct 11 07:53:26 crc kubenswrapper[5016]: I1011 07:53:26.741247 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-scgsv"]
Oct 11 07:53:26 crc kubenswrapper[5016]: I1011 07:53:26.748485 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1861edc2-2e2d-4ff1-b991-48723e355c31-utilities\") pod \"certified-operators-scgsv\" (UID: \"1861edc2-2e2d-4ff1-b991-48723e355c31\") " pod="openshift-marketplace/certified-operators-scgsv"
Oct 11 07:53:26 crc kubenswrapper[5016]: I1011 07:53:26.748538 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqkpc\" (UniqueName: \"kubernetes.io/projected/1861edc2-2e2d-4ff1-b991-48723e355c31-kube-api-access-fqkpc\") pod \"certified-operators-scgsv\" (UID: \"1861edc2-2e2d-4ff1-b991-48723e355c31\") " pod="openshift-marketplace/certified-operators-scgsv"
Oct 11 07:53:26 crc kubenswrapper[5016]: I1011 07:53:26.748579 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1861edc2-2e2d-4ff1-b991-48723e355c31-catalog-content\") pod \"certified-operators-scgsv\" (UID: \"1861edc2-2e2d-4ff1-b991-48723e355c31\") " pod="openshift-marketplace/certified-operators-scgsv"
Oct 11 07:53:26 crc kubenswrapper[5016]: I1011 07:53:26.849353 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1861edc2-2e2d-4ff1-b991-48723e355c31-catalog-content\") pod \"certified-operators-scgsv\" (UID: \"1861edc2-2e2d-4ff1-b991-48723e355c31\") " pod="openshift-marketplace/certified-operators-scgsv"
Oct 11 07:53:26 crc kubenswrapper[5016]: I1011 07:53:26.849485 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1861edc2-2e2d-4ff1-b991-48723e355c31-utilities\") pod \"certified-operators-scgsv\" (UID: \"1861edc2-2e2d-4ff1-b991-48723e355c31\") " pod="openshift-marketplace/certified-operators-scgsv"
Oct 11 07:53:26 crc kubenswrapper[5016]: I1011 07:53:26.849520 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqkpc\" (UniqueName: \"kubernetes.io/projected/1861edc2-2e2d-4ff1-b991-48723e355c31-kube-api-access-fqkpc\") pod \"certified-operators-scgsv\" (UID: \"1861edc2-2e2d-4ff1-b991-48723e355c31\") " pod="openshift-marketplace/certified-operators-scgsv"
Oct 11 07:53:26 crc kubenswrapper[5016]: I1011 07:53:26.850089 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1861edc2-2e2d-4ff1-b991-48723e355c31-utilities\") pod \"certified-operators-scgsv\" (UID: \"1861edc2-2e2d-4ff1-b991-48723e355c31\") " pod="openshift-marketplace/certified-operators-scgsv"
Oct 11 07:53:26 crc kubenswrapper[5016]: I1011 07:53:26.850772 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1861edc2-2e2d-4ff1-b991-48723e355c31-catalog-content\") pod \"certified-operators-scgsv\" (UID: \"1861edc2-2e2d-4ff1-b991-48723e355c31\") " pod="openshift-marketplace/certified-operators-scgsv"
Oct 11 07:53:26 crc kubenswrapper[5016]: I1011 07:53:26.870518 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqkpc\" (UniqueName: \"kubernetes.io/projected/1861edc2-2e2d-4ff1-b991-48723e355c31-kube-api-access-fqkpc\") pod \"certified-operators-scgsv\" (UID: \"1861edc2-2e2d-4ff1-b991-48723e355c31\") " pod="openshift-marketplace/certified-operators-scgsv"
Oct 11 07:53:27 crc kubenswrapper[5016]: I1011 07:53:27.061183 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-scgsv"
Oct 11 07:53:27 crc kubenswrapper[5016]: I1011 07:53:27.394861 5016 generic.go:334] "Generic (PLEG): container finished" podID="bf574e0c-536b-468f-a927-08fdab5f36ff" containerID="46806825913e803a203d7e5a5397d4b1b102f10b9cd49c909de6815e9f1fb9a7" exitCode=0
Oct 11 07:53:27 crc kubenswrapper[5016]: I1011 07:53:27.394909 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9j9jm" event={"ID":"bf574e0c-536b-468f-a927-08fdab5f36ff","Type":"ContainerDied","Data":"46806825913e803a203d7e5a5397d4b1b102f10b9cd49c909de6815e9f1fb9a7"}
Oct 11 07:53:27 crc kubenswrapper[5016]: I1011 07:53:27.434349 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-scgsv"]
Oct 11 07:53:28 crc kubenswrapper[5016]: I1011 07:53:28.402594 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9j9jm" event={"ID":"bf574e0c-536b-468f-a927-08fdab5f36ff","Type":"ContainerStarted","Data":"23198209a14dc35787c01768eb9093feeb61d2833e0e1b04693404e5278dd52e"}
Oct 11 07:53:28 crc kubenswrapper[5016]: I1011 07:53:28.404039 5016 generic.go:334] "Generic (PLEG): container finished" podID="1861edc2-2e2d-4ff1-b991-48723e355c31" containerID="2ddb25a1f89801ca42dfd44bf7f1ce8e454bf7a6bf9e14ae9330c0fa0fdbc320" exitCode=0
Oct 11 07:53:28 crc kubenswrapper[5016]: I1011 07:53:28.404068 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-scgsv" event={"ID":"1861edc2-2e2d-4ff1-b991-48723e355c31","Type":"ContainerDied","Data":"2ddb25a1f89801ca42dfd44bf7f1ce8e454bf7a6bf9e14ae9330c0fa0fdbc320"}
Oct 11 07:53:28 crc kubenswrapper[5016]: I1011 07:53:28.404085 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-scgsv" event={"ID":"1861edc2-2e2d-4ff1-b991-48723e355c31","Type":"ContainerStarted","Data":"18b3763d90a410b78551a8147eaabb8471119b02dec312e124bfd589090aa25b"}
Oct 11 07:53:28 crc kubenswrapper[5016]: I1011 07:53:28.421790 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9j9jm" podStartSLOduration=2.995129843 podStartE2EDuration="5.421770697s" podCreationTimestamp="2025-10-11 07:53:23 +0000 UTC" firstStartedPulling="2025-10-11 07:53:25.354268558 +0000 UTC m=+793.254724504" lastFinishedPulling="2025-10-11 07:53:27.780909412 +0000 UTC m=+795.681365358" observedRunningTime="2025-10-11 07:53:28.420784622 +0000 UTC m=+796.321240568" watchObservedRunningTime="2025-10-11 07:53:28.421770697 +0000 UTC m=+796.322226643"
Oct 11 07:53:29 crc kubenswrapper[5016]: I1011 07:53:29.410140 5016 generic.go:334] "Generic (PLEG): container finished" podID="1861edc2-2e2d-4ff1-b991-48723e355c31" containerID="b50c2258777b8bf20a2c26b725953d2648a1d0a7d0a13cacaacfe6a31b3ed247" exitCode=0
Oct 11 07:53:29 crc kubenswrapper[5016]: I1011 07:53:29.410224 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-scgsv" event={"ID":"1861edc2-2e2d-4ff1-b991-48723e355c31","Type":"ContainerDied","Data":"b50c2258777b8bf20a2c26b725953d2648a1d0a7d0a13cacaacfe6a31b3ed247"}
Oct 11 07:53:30 crc kubenswrapper[5016]: I1011 07:53:30.419292 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-scgsv" event={"ID":"1861edc2-2e2d-4ff1-b991-48723e355c31","Type":"ContainerStarted","Data":"48afc95b8daffa63d82a3b2dedc366a0031b999a9dc3cb3421c597b2639f5024"}
Oct 11 07:53:30 crc kubenswrapper[5016]: I1011 07:53:30.477624 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-scgsv" podStartSLOduration=2.921036741 podStartE2EDuration="4.477605132s" podCreationTimestamp="2025-10-11 07:53:26 +0000 UTC" firstStartedPulling="2025-10-11 07:53:28.405881265 +0000 UTC m=+796.306337211" lastFinishedPulling="2025-10-11 07:53:29.962449636 +0000 UTC m=+797.862905602" observedRunningTime="2025-10-11 07:53:30.461720441 +0000 UTC m=+798.362176387" watchObservedRunningTime="2025-10-11 07:53:30.477605132 +0000 UTC m=+798.378061078"
Oct 11 07:53:34 crc kubenswrapper[5016]: I1011 07:53:34.550405 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9j9jm"
Oct 11 07:53:34 crc kubenswrapper[5016]: I1011 07:53:34.550935 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9j9jm"
Oct 11 07:53:34 crc kubenswrapper[5016]: I1011 07:53:34.585974 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9j9jm"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.487544 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9j9jm"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.540928 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9j9jm"]
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.773141 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-658bdf4b74-5k87v"]
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.774219 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-5k87v"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.785426 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7b7fb68549-g5rms"]
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.791991 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-g5rms"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.793353 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-5cdz2"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.798578 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-l7mk9"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.816939 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7b7fb68549-g5rms"]
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.826003 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-85d5d9dd78-8cjvz"]
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.827132 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-8cjvz"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.829671 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-ffkxw"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.829874 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-84b9b84486-jvtkl"]
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.831135 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-jvtkl"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.832225 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-85d5d9dd78-8cjvz"]
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.841415 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-5td2c"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.849952 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-658bdf4b74-5k87v"]
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.854760 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-84b9b84486-jvtkl"]
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.877061 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-858f76bbdd-zbqbd"]
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.878517 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-zbqbd"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.883067 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-gtjct"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.889992 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7ffbcb7588-hdcq6"]
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.890739 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d9wl\" (UniqueName: \"kubernetes.io/projected/642e4a4e-69f3-4bb7-aa0d-55bb7809203a-kube-api-access-7d9wl\") pod \"cinder-operator-controller-manager-7b7fb68549-g5rms\" (UID: \"642e4a4e-69f3-4bb7-aa0d-55bb7809203a\") " pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-g5rms"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.890783 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnq7r\" (UniqueName: \"kubernetes.io/projected/ce574485-559e-47ce-82d5-df9228ee47e9-kube-api-access-cnq7r\") pod \"barbican-operator-controller-manager-658bdf4b74-5k87v\" (UID: \"ce574485-559e-47ce-82d5-df9228ee47e9\") " pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-5k87v"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.891003 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-hdcq6"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.896077 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-5rtqn"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.897223 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7ffbcb7588-hdcq6"]
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.911031 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-858f76bbdd-zbqbd"]
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.934765 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-656bcbd775-q4rcq"]
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.936236 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-q4rcq"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.940510 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-9c5c78d49-d5p22"]
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.941437 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.941522 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-7qs89"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.942794 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-d5p22"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.945527 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-lcm96"]
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.947006 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-lcm96"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.951621 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-vz2ts"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.953966 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-5vsgp"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.967387 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-656bcbd775-q4rcq"]
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.978339 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5f67fbc655-4t8kd"]
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.979373 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-4t8kd"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.980854 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-nz4s7"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.993802 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-lcm96"]
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.996422 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flk7w\" (UniqueName: \"kubernetes.io/projected/683143f6-ebe0-47fb-b6c3-96680e673ff7-kube-api-access-flk7w\") pod \"glance-operator-controller-manager-84b9b84486-jvtkl\" (UID: \"683143f6-ebe0-47fb-b6c3-96680e673ff7\") " pod="openstack-operators/glance-operator-controller-manager-84b9b84486-jvtkl"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.996497 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sfxl\" (UniqueName: \"kubernetes.io/projected/53845a5f-9403-4fc4-80b0-56a724bf5405-kube-api-access-6sfxl\") pod \"heat-operator-controller-manager-858f76bbdd-zbqbd\" (UID: \"53845a5f-9403-4fc4-80b0-56a724bf5405\") " pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-zbqbd"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.996526 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d9wl\" (UniqueName: \"kubernetes.io/projected/642e4a4e-69f3-4bb7-aa0d-55bb7809203a-kube-api-access-7d9wl\") pod \"cinder-operator-controller-manager-7b7fb68549-g5rms\" (UID: \"642e4a4e-69f3-4bb7-aa0d-55bb7809203a\") " pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-g5rms"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.996556 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9rnb\" (UniqueName: \"kubernetes.io/projected/6347c5af-b7ed-4498-be85-e9a818a0e0d4-kube-api-access-p9rnb\") pod \"designate-operator-controller-manager-85d5d9dd78-8cjvz\" (UID: \"6347c5af-b7ed-4498-be85-e9a818a0e0d4\") " pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-8cjvz"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.996574 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnq7r\" (UniqueName: \"kubernetes.io/projected/ce574485-559e-47ce-82d5-df9228ee47e9-kube-api-access-cnq7r\") pod \"barbican-operator-controller-manager-658bdf4b74-5k87v\" (UID: \"ce574485-559e-47ce-82d5-df9228ee47e9\") " pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-5k87v"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.996597 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdd8l\" (UniqueName: \"kubernetes.io/projected/1877ae13-d74b-4a7c-9f26-10757d256474-kube-api-access-sdd8l\") pod \"horizon-operator-controller-manager-7ffbcb7588-hdcq6\" (UID: \"1877ae13-d74b-4a7c-9f26-10757d256474\") " pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-hdcq6"
Oct 11 07:53:35 crc kubenswrapper[5016]: I1011 07:53:35.999045 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-9c5c78d49-d5p22"]
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.028523 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnq7r\" (UniqueName: \"kubernetes.io/projected/ce574485-559e-47ce-82d5-df9228ee47e9-kube-api-access-cnq7r\") pod \"barbican-operator-controller-manager-658bdf4b74-5k87v\" (UID: \"ce574485-559e-47ce-82d5-df9228ee47e9\") " pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-5k87v"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.029671 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5f67fbc655-4t8kd"]
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.032971 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d9wl\" (UniqueName: \"kubernetes.io/projected/642e4a4e-69f3-4bb7-aa0d-55bb7809203a-kube-api-access-7d9wl\") pod \"cinder-operator-controller-manager-7b7fb68549-g5rms\" (UID: \"642e4a4e-69f3-4bb7-aa0d-55bb7809203a\") " pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-g5rms"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.038781 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-txvhv"]
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.040061 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-txvhv"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.059322 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-79d585cb66-27vlc"]
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.059387 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-5tggq"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.060323 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-27vlc"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.061724 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-txvhv"]
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.064108 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-ftwk9"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.077355 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5df598886f-2cwzg"]
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.078565 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5df598886f-2cwzg"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.086080 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-lw7wx"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.094150 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-79d585cb66-27vlc"]
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.097330 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sfxl\" (UniqueName: \"kubernetes.io/projected/53845a5f-9403-4fc4-80b0-56a724bf5405-kube-api-access-6sfxl\") pod \"heat-operator-controller-manager-858f76bbdd-zbqbd\" (UID: \"53845a5f-9403-4fc4-80b0-56a724bf5405\") " pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-zbqbd"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.097497 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clxw5\" (UniqueName: \"kubernetes.io/projected/e2954d6f-57ba-49c4-ac53-7aa4600cf1b2-kube-api-access-clxw5\") pod \"keystone-operator-controller-manager-55b6b7c7b8-lcm96\" (UID: \"e2954d6f-57ba-49c4-ac53-7aa4600cf1b2\") " pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-lcm96"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.097579 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9rnb\" (UniqueName: \"kubernetes.io/projected/6347c5af-b7ed-4498-be85-e9a818a0e0d4-kube-api-access-p9rnb\") pod \"designate-operator-controller-manager-85d5d9dd78-8cjvz\" (UID: \"6347c5af-b7ed-4498-be85-e9a818a0e0d4\") " pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-8cjvz"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.097704 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82vzq\" (UniqueName: \"kubernetes.io/projected/f36d2ba0-eaa2-48d4-8367-3b718a86b54a-kube-api-access-82vzq\") pod \"infra-operator-controller-manager-656bcbd775-q4rcq\" (UID: \"f36d2ba0-eaa2-48d4-8367-3b718a86b54a\") " pod="openstack-operators/infra-operator-controller-manager-656bcbd775-q4rcq"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.097795 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdd8l\" (UniqueName: \"kubernetes.io/projected/1877ae13-d74b-4a7c-9f26-10757d256474-kube-api-access-sdd8l\") pod \"horizon-operator-controller-manager-7ffbcb7588-hdcq6\" (UID: \"1877ae13-d74b-4a7c-9f26-10757d256474\") " pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-hdcq6"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.097879 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f36d2ba0-eaa2-48d4-8367-3b718a86b54a-cert\") pod \"infra-operator-controller-manager-656bcbd775-q4rcq\" (UID: \"f36d2ba0-eaa2-48d4-8367-3b718a86b54a\") " pod="openstack-operators/infra-operator-controller-manager-656bcbd775-q4rcq"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.097957 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flk7w\" (UniqueName: \"kubernetes.io/projected/683143f6-ebe0-47fb-b6c3-96680e673ff7-kube-api-access-flk7w\") pod \"glance-operator-controller-manager-84b9b84486-jvtkl\" (UID: \"683143f6-ebe0-47fb-b6c3-96680e673ff7\") " pod="openstack-operators/glance-operator-controller-manager-84b9b84486-jvtkl"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.098036 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knm6s\" (UniqueName: \"kubernetes.io/projected/28386d6d-81d2-4b50-8e61-f82bffe1cec5-kube-api-access-knm6s\") pod \"manila-operator-controller-manager-5f67fbc655-4t8kd\" (UID: \"28386d6d-81d2-4b50-8e61-f82bffe1cec5\") " pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-4t8kd"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.098163 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4khrn\" (UniqueName: \"kubernetes.io/projected/b53501aa-b72c-457d-ad20-1f57abd81645-kube-api-access-4khrn\") pod \"ironic-operator-controller-manager-9c5c78d49-d5p22\" (UID: \"b53501aa-b72c-457d-ad20-1f57abd81645\") " pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-d5p22"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.104870 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-5k87v"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.108718 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5df598886f-2cwzg"]
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.120697 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-g5rms"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.146584 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9rnb\" (UniqueName: \"kubernetes.io/projected/6347c5af-b7ed-4498-be85-e9a818a0e0d4-kube-api-access-p9rnb\") pod \"designate-operator-controller-manager-85d5d9dd78-8cjvz\" (UID: \"6347c5af-b7ed-4498-be85-e9a818a0e0d4\") " pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-8cjvz"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.150546 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sfxl\" (UniqueName: \"kubernetes.io/projected/53845a5f-9403-4fc4-80b0-56a724bf5405-kube-api-access-6sfxl\") pod \"heat-operator-controller-manager-858f76bbdd-zbqbd\" (UID: \"53845a5f-9403-4fc4-80b0-56a724bf5405\") " pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-zbqbd"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.153287 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-8cjvz"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.154992 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-2mll5"]
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.156170 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-2mll5"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.160729 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-6699s"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.183702 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flk7w\" (UniqueName: \"kubernetes.io/projected/683143f6-ebe0-47fb-b6c3-96680e673ff7-kube-api-access-flk7w\") pod \"glance-operator-controller-manager-84b9b84486-jvtkl\" (UID: \"683143f6-ebe0-47fb-b6c3-96680e673ff7\") " pod="openstack-operators/glance-operator-controller-manager-84b9b84486-jvtkl"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.184814 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdd8l\" (UniqueName: \"kubernetes.io/projected/1877ae13-d74b-4a7c-9f26-10757d256474-kube-api-access-sdd8l\") pod \"horizon-operator-controller-manager-7ffbcb7588-hdcq6\" (UID: \"1877ae13-d74b-4a7c-9f26-10757d256474\") " pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-hdcq6"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.201017 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8ql9\" (UniqueName: \"kubernetes.io/projected/81fc8139-b3e6-4aa4-a2a3-3488428fdd67-kube-api-access-n8ql9\") pod \"nova-operator-controller-manager-5df598886f-2cwzg\" (UID: \"81fc8139-b3e6-4aa4-a2a3-3488428fdd67\") " pod="openstack-operators/nova-operator-controller-manager-5df598886f-2cwzg"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.201087 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clxw5\" (UniqueName: \"kubernetes.io/projected/e2954d6f-57ba-49c4-ac53-7aa4600cf1b2-kube-api-access-clxw5\") pod \"keystone-operator-controller-manager-55b6b7c7b8-lcm96\" (UID: \"e2954d6f-57ba-49c4-ac53-7aa4600cf1b2\") " pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-lcm96"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.201118 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggqqm\" (UniqueName: \"kubernetes.io/projected/c146e268-2093-47e5-aaa1-824de389d97a-kube-api-access-ggqqm\") pod \"mariadb-operator-controller-manager-f9fb45f8f-txvhv\" (UID: \"c146e268-2093-47e5-aaa1-824de389d97a\") " pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-txvhv"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.201155 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82vzq\" (UniqueName: \"kubernetes.io/projected/f36d2ba0-eaa2-48d4-8367-3b718a86b54a-kube-api-access-82vzq\") pod \"infra-operator-controller-manager-656bcbd775-q4rcq\" (UID: \"f36d2ba0-eaa2-48d4-8367-3b718a86b54a\") " pod="openstack-operators/infra-operator-controller-manager-656bcbd775-q4rcq"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.201196 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f36d2ba0-eaa2-48d4-8367-3b718a86b54a-cert\") pod \"infra-operator-controller-manager-656bcbd775-q4rcq\" (UID: \"f36d2ba0-eaa2-48d4-8367-3b718a86b54a\") " pod="openstack-operators/infra-operator-controller-manager-656bcbd775-q4rcq"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.201235 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knm6s\" (UniqueName: \"kubernetes.io/projected/28386d6d-81d2-4b50-8e61-f82bffe1cec5-kube-api-access-knm6s\") pod \"manila-operator-controller-manager-5f67fbc655-4t8kd\" (UID: \"28386d6d-81d2-4b50-8e61-f82bffe1cec5\") " pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-4t8kd"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.201260 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4khrn\" (UniqueName: \"kubernetes.io/projected/b53501aa-b72c-457d-ad20-1f57abd81645-kube-api-access-4khrn\") pod \"ironic-operator-controller-manager-9c5c78d49-d5p22\" (UID: \"b53501aa-b72c-457d-ad20-1f57abd81645\") " pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-d5p22"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.201289 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cnnv\" (UniqueName: \"kubernetes.io/projected/3da7b7ce-1358-4f29-851c-a1a95f1d5a6f-kube-api-access-8cnnv\") pod \"neutron-operator-controller-manager-79d585cb66-27vlc\" (UID: \"3da7b7ce-1358-4f29-851c-a1a95f1d5a6f\") " pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-27vlc"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.209035 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f36d2ba0-eaa2-48d4-8367-3b718a86b54a-cert\") pod \"infra-operator-controller-manager-656bcbd775-q4rcq\" (UID: \"f36d2ba0-eaa2-48d4-8367-3b718a86b54a\") " pod="openstack-operators/infra-operator-controller-manager-656bcbd775-q4rcq"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.209376 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-zbqbd"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.226917 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-hdcq6"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.229556 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-79df5fb58c-pgsds"]
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.239265 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-pgsds"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.252610 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7br27k2"]
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.254485 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7br27k2"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.257609 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.257849 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-rn6mn"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.268024 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-65zs2"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.291631 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clxw5\" (UniqueName: \"kubernetes.io/projected/e2954d6f-57ba-49c4-ac53-7aa4600cf1b2-kube-api-access-clxw5\") pod \"keystone-operator-controller-manager-55b6b7c7b8-lcm96\" (UID: \"e2954d6f-57ba-49c4-ac53-7aa4600cf1b2\") " pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-lcm96"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.292244 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knm6s\" (UniqueName: \"kubernetes.io/projected/28386d6d-81d2-4b50-8e61-f82bffe1cec5-kube-api-access-knm6s\") pod \"manila-operator-controller-manager-5f67fbc655-4t8kd\" (UID: \"28386d6d-81d2-4b50-8e61-f82bffe1cec5\") " pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-4t8kd"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.292358 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4khrn\" (UniqueName: \"kubernetes.io/projected/b53501aa-b72c-457d-ad20-1f57abd81645-kube-api-access-4khrn\") pod \"ironic-operator-controller-manager-9c5c78d49-d5p22\" (UID: \"b53501aa-b72c-457d-ad20-1f57abd81645\") " pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-d5p22"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.296093 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82vzq\" (UniqueName: \"kubernetes.io/projected/f36d2ba0-eaa2-48d4-8367-3b718a86b54a-kube-api-access-82vzq\") pod \"infra-operator-controller-manager-656bcbd775-q4rcq\" (UID: \"f36d2ba0-eaa2-48d4-8367-3b718a86b54a\") " pod="openstack-operators/infra-operator-controller-manager-656bcbd775-q4rcq"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.351493 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-4t8kd"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.352332 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cnnv\" (UniqueName: \"kubernetes.io/projected/3da7b7ce-1358-4f29-851c-a1a95f1d5a6f-kube-api-access-8cnnv\") pod \"neutron-operator-controller-manager-79d585cb66-27vlc\" (UID: \"3da7b7ce-1358-4f29-851c-a1a95f1d5a6f\") " pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-27vlc"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.352358 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzcvp\" (UniqueName: \"kubernetes.io/projected/a4fb3be7-4bcb-4f6a-a1e8-619b47cbc411-kube-api-access-fzcvp\") pod \"octavia-operator-controller-manager-69fdcfc5f5-2mll5\" (UID: \"a4fb3be7-4bcb-4f6a-a1e8-619b47cbc411\") " pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-2mll5"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.352440 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8ql9\" (UniqueName: \"kubernetes.io/projected/81fc8139-b3e6-4aa4-a2a3-3488428fdd67-kube-api-access-n8ql9\") pod \"nova-operator-controller-manager-5df598886f-2cwzg\" (UID: \"81fc8139-b3e6-4aa4-a2a3-3488428fdd67\") " pod="openstack-operators/nova-operator-controller-manager-5df598886f-2cwzg"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.352465 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggqqm\" (UniqueName: \"kubernetes.io/projected/c146e268-2093-47e5-aaa1-824de389d97a-kube-api-access-ggqqm\") pod \"mariadb-operator-controller-manager-f9fb45f8f-txvhv\" (UID: \"c146e268-2093-47e5-aaa1-824de389d97a\") " pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-txvhv"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.357989 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-2mll5"]
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.358085 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-68b6c87b68-7nrt2"]
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.359682 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-7nrt2"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.360773 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-79df5fb58c-pgsds"]
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.365614 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-kkw8j"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.373462 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7br27k2"]
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.393793 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-db6d7f97b-4h6v7"]
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.395016 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-4h6v7"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.401551 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-lpbzn"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.404175 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggqqm\" (UniqueName: \"kubernetes.io/projected/c146e268-2093-47e5-aaa1-824de389d97a-kube-api-access-ggqqm\") pod \"mariadb-operator-controller-manager-f9fb45f8f-txvhv\" (UID: \"c146e268-2093-47e5-aaa1-824de389d97a\") " pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-txvhv"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.405974 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8ql9\" (UniqueName: \"kubernetes.io/projected/81fc8139-b3e6-4aa4-a2a3-3488428fdd67-kube-api-access-n8ql9\") pod \"nova-operator-controller-manager-5df598886f-2cwzg\" (UID: \"81fc8139-b3e6-4aa4-a2a3-3488428fdd67\") " pod="openstack-operators/nova-operator-controller-manager-5df598886f-2cwzg"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.412828 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-68b6c87b68-7nrt2"]
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.413091 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5df598886f-2cwzg"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.418169 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cnnv\" (UniqueName: \"kubernetes.io/projected/3da7b7ce-1358-4f29-851c-a1a95f1d5a6f-kube-api-access-8cnnv\") pod \"neutron-operator-controller-manager-79d585cb66-27vlc\" (UID: \"3da7b7ce-1358-4f29-851c-a1a95f1d5a6f\") " pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-27vlc"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.439834 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-db6d7f97b-4h6v7"]
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.453630 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-67cfc6749b-rdhd7"]
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.455059 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzcvp\" (UniqueName: \"kubernetes.io/projected/a4fb3be7-4bcb-4f6a-a1e8-619b47cbc411-kube-api-access-fzcvp\") pod \"octavia-operator-controller-manager-69fdcfc5f5-2mll5\" (UID: \"a4fb3be7-4bcb-4f6a-a1e8-619b47cbc411\") " pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-2mll5"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.459374 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl588\" (UniqueName: \"kubernetes.io/projected/28800e92-f7fd-4764-ab21-b7ea8bd13c48-kube-api-access-fl588\") pod \"openstack-baremetal-operator-controller-manager-5956dffb7br27k2\" (UID: \"28800e92-f7fd-4764-ab21-b7ea8bd13c48\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7br27k2"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.459772 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdrsr\" (UniqueName: \"kubernetes.io/projected/314a9915-c5b2-45c6-ad73-17bcf42d80cc-kube-api-access-tdrsr\") pod \"swift-operator-controller-manager-db6d7f97b-4h6v7\" (UID: \"314a9915-c5b2-45c6-ad73-17bcf42d80cc\") " pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-4h6v7"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.459852 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzw7n\" (UniqueName: \"kubernetes.io/projected/3ad1a6fa-ff96-40e9-ba42-6173bb1639be-kube-api-access-nzw7n\") pod \"ovn-operator-controller-manager-79df5fb58c-pgsds\" (UID: \"3ad1a6fa-ff96-40e9-ba42-6173bb1639be\") " pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-pgsds"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.459928 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xsj9\" (UniqueName: \"kubernetes.io/projected/971b1fdb-ddf2-4662-b77f-e3b55ac12de7-kube-api-access-4xsj9\") pod \"placement-operator-controller-manager-68b6c87b68-7nrt2\" (UID: \"971b1fdb-ddf2-4662-b77f-e3b55ac12de7\") " pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-7nrt2"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.459976 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28800e92-f7fd-4764-ab21-b7ea8bd13c48-cert\") pod \"openstack-baremetal-operator-controller-manager-5956dffb7br27k2\" (UID: \"28800e92-f7fd-4764-ab21-b7ea8bd13c48\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7br27k2"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.465414 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-67cfc6749b-rdhd7"]
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.465701 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-rdhd7"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.470157 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-zhzt8"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.470546 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-jvtkl"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.492522 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5458f77c4-tdbg9"]
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.493747 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5458f77c4-tdbg9"]
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.494360 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5458f77c4-tdbg9"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.498620 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-cl75h"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.503075 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7f554bff7b-5tl82"]
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.505134 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-5tl82"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.506237 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzcvp\" (UniqueName: \"kubernetes.io/projected/a4fb3be7-4bcb-4f6a-a1e8-619b47cbc411-kube-api-access-fzcvp\") pod \"octavia-operator-controller-manager-69fdcfc5f5-2mll5\" (UID: \"a4fb3be7-4bcb-4f6a-a1e8-619b47cbc411\") " pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-2mll5"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.508163 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-dcgrh"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.515518 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7f554bff7b-5tl82"]
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.562107 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5b95c8954b-jgt6v"]
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.565774 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdrsr\" (UniqueName: \"kubernetes.io/projected/314a9915-c5b2-45c6-ad73-17bcf42d80cc-kube-api-access-tdrsr\") pod \"swift-operator-controller-manager-db6d7f97b-4h6v7\" (UID: \"314a9915-c5b2-45c6-ad73-17bcf42d80cc\") " pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-4h6v7"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.565819 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mcxl\" (UniqueName: \"kubernetes.io/projected/c2a90822-b5db-4dcd-9bb0-6e6fdc371a49-kube-api-access-9mcxl\") pod \"watcher-operator-controller-manager-7f554bff7b-5tl82\" (UID: \"c2a90822-b5db-4dcd-9bb0-6e6fdc371a49\") " pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-5tl82"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.565849 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzw7n\" (UniqueName: \"kubernetes.io/projected/3ad1a6fa-ff96-40e9-ba42-6173bb1639be-kube-api-access-nzw7n\") pod \"ovn-operator-controller-manager-79df5fb58c-pgsds\" (UID: \"3ad1a6fa-ff96-40e9-ba42-6173bb1639be\") " pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-pgsds"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.565882 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xsj9\" (UniqueName: \"kubernetes.io/projected/971b1fdb-ddf2-4662-b77f-e3b55ac12de7-kube-api-access-4xsj9\") pod \"placement-operator-controller-manager-68b6c87b68-7nrt2\" (UID: \"971b1fdb-ddf2-4662-b77f-e3b55ac12de7\") " pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-7nrt2"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.565901 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhpxk\" (UniqueName: \"kubernetes.io/projected/2f220420-4c7f-4f2b-a295-940d7e2f22da-kube-api-access-xhpxk\") pod \"telemetry-operator-controller-manager-67cfc6749b-rdhd7\" (UID: \"2f220420-4c7f-4f2b-a295-940d7e2f22da\") " pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-rdhd7"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.565931 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28800e92-f7fd-4764-ab21-b7ea8bd13c48-cert\") pod \"openstack-baremetal-operator-controller-manager-5956dffb7br27k2\" (UID: \"28800e92-f7fd-4764-ab21-b7ea8bd13c48\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7br27k2"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.565947 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnx7l\" (UniqueName: \"kubernetes.io/projected/cfabcb8e-bad0-4179-81d3-0d6c2a874793-kube-api-access-wnx7l\") pod \"test-operator-controller-manager-5458f77c4-tdbg9\" (UID: \"cfabcb8e-bad0-4179-81d3-0d6c2a874793\") " pod="openstack-operators/test-operator-controller-manager-5458f77c4-tdbg9"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.565981 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl588\" (UniqueName: \"kubernetes.io/projected/28800e92-f7fd-4764-ab21-b7ea8bd13c48-kube-api-access-fl588\") pod \"openstack-baremetal-operator-controller-manager-5956dffb7br27k2\" (UID: \"28800e92-f7fd-4764-ab21-b7ea8bd13c48\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7br27k2"
Oct 11 07:53:36 crc kubenswrapper[5016]: E1011 07:53:36.566397 5016 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Oct 11 07:53:36 crc kubenswrapper[5016]: E1011 07:53:36.566622 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28800e92-f7fd-4764-ab21-b7ea8bd13c48-cert podName:28800e92-f7fd-4764-ab21-b7ea8bd13c48 nodeName:}" failed. No retries permitted until 2025-10-11 07:53:37.066539084 +0000 UTC m=+804.966995030 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/28800e92-f7fd-4764-ab21-b7ea8bd13c48-cert") pod "openstack-baremetal-operator-controller-manager-5956dffb7br27k2" (UID: "28800e92-f7fd-4764-ab21-b7ea8bd13c48") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.569217 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-q4rcq"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.577619 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5b95c8954b-jgt6v"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.580923 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-2mll5"
Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.581369 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-d5p22" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.583212 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-n4jg6" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.583828 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.586510 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-lcm96" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.587875 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzw7n\" (UniqueName: \"kubernetes.io/projected/3ad1a6fa-ff96-40e9-ba42-6173bb1639be-kube-api-access-nzw7n\") pod \"ovn-operator-controller-manager-79df5fb58c-pgsds\" (UID: \"3ad1a6fa-ff96-40e9-ba42-6173bb1639be\") " pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-pgsds" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.591717 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl588\" (UniqueName: \"kubernetes.io/projected/28800e92-f7fd-4764-ab21-b7ea8bd13c48-kube-api-access-fl588\") pod \"openstack-baremetal-operator-controller-manager-5956dffb7br27k2\" (UID: \"28800e92-f7fd-4764-ab21-b7ea8bd13c48\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7br27k2" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.593253 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdrsr\" (UniqueName: \"kubernetes.io/projected/314a9915-c5b2-45c6-ad73-17bcf42d80cc-kube-api-access-tdrsr\") pod \"swift-operator-controller-manager-db6d7f97b-4h6v7\" (UID: \"314a9915-c5b2-45c6-ad73-17bcf42d80cc\") " pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-4h6v7" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.599422 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5b95c8954b-jgt6v"] Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.599719 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-pgsds" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.604277 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xsj9\" (UniqueName: \"kubernetes.io/projected/971b1fdb-ddf2-4662-b77f-e3b55ac12de7-kube-api-access-4xsj9\") pod \"placement-operator-controller-manager-68b6c87b68-7nrt2\" (UID: \"971b1fdb-ddf2-4662-b77f-e3b55ac12de7\") " pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-7nrt2" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.634847 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-pwb65"] Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.635837 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-pwb65" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.645571 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-d4ddw" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.649462 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-pwb65"] Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.669662 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mcxl\" (UniqueName: \"kubernetes.io/projected/c2a90822-b5db-4dcd-9bb0-6e6fdc371a49-kube-api-access-9mcxl\") pod \"watcher-operator-controller-manager-7f554bff7b-5tl82\" (UID: \"c2a90822-b5db-4dcd-9bb0-6e6fdc371a49\") " pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-5tl82" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.669718 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1415832f-40ef-48e7-ab66-b556c5110bd0-cert\") pod \"openstack-operator-controller-manager-5b95c8954b-jgt6v\" (UID: \"1415832f-40ef-48e7-ab66-b556c5110bd0\") " pod="openstack-operators/openstack-operator-controller-manager-5b95c8954b-jgt6v" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.669754 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhpxk\" (UniqueName: \"kubernetes.io/projected/2f220420-4c7f-4f2b-a295-940d7e2f22da-kube-api-access-xhpxk\") pod \"telemetry-operator-controller-manager-67cfc6749b-rdhd7\" (UID: \"2f220420-4c7f-4f2b-a295-940d7e2f22da\") " pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-rdhd7" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.669795 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnx7l\" (UniqueName: \"kubernetes.io/projected/cfabcb8e-bad0-4179-81d3-0d6c2a874793-kube-api-access-wnx7l\") pod \"test-operator-controller-manager-5458f77c4-tdbg9\" (UID: \"cfabcb8e-bad0-4179-81d3-0d6c2a874793\") " pod="openstack-operators/test-operator-controller-manager-5458f77c4-tdbg9" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.669829 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvzp9\" (UniqueName: \"kubernetes.io/projected/1415832f-40ef-48e7-ab66-b556c5110bd0-kube-api-access-pvzp9\") pod \"openstack-operator-controller-manager-5b95c8954b-jgt6v\" (UID: \"1415832f-40ef-48e7-ab66-b556c5110bd0\") " pod="openstack-operators/openstack-operator-controller-manager-5b95c8954b-jgt6v" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.669861 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx9pv\" (UniqueName: \"kubernetes.io/projected/465a2dcd-76d5-4af4-a791-e98a5dfbd2d4-kube-api-access-hx9pv\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-pwb65\" (UID: \"465a2dcd-76d5-4af4-a791-e98a5dfbd2d4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-pwb65" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.685558 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-txvhv" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.688289 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-85d5d9dd78-8cjvz"] Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.690451 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-27vlc" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.694645 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mcxl\" (UniqueName: \"kubernetes.io/projected/c2a90822-b5db-4dcd-9bb0-6e6fdc371a49-kube-api-access-9mcxl\") pod \"watcher-operator-controller-manager-7f554bff7b-5tl82\" (UID: \"c2a90822-b5db-4dcd-9bb0-6e6fdc371a49\") " pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-5tl82" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.700490 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhpxk\" (UniqueName: \"kubernetes.io/projected/2f220420-4c7f-4f2b-a295-940d7e2f22da-kube-api-access-xhpxk\") pod \"telemetry-operator-controller-manager-67cfc6749b-rdhd7\" (UID: \"2f220420-4c7f-4f2b-a295-940d7e2f22da\") " pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-rdhd7" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.701055 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnx7l\" (UniqueName: \"kubernetes.io/projected/cfabcb8e-bad0-4179-81d3-0d6c2a874793-kube-api-access-wnx7l\") pod \"test-operator-controller-manager-5458f77c4-tdbg9\" (UID: \"cfabcb8e-bad0-4179-81d3-0d6c2a874793\") " pod="openstack-operators/test-operator-controller-manager-5458f77c4-tdbg9" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.765946 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-4h6v7" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.772535 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-7nrt2" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.772702 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1415832f-40ef-48e7-ab66-b556c5110bd0-cert\") pod \"openstack-operator-controller-manager-5b95c8954b-jgt6v\" (UID: \"1415832f-40ef-48e7-ab66-b556c5110bd0\") " pod="openstack-operators/openstack-operator-controller-manager-5b95c8954b-jgt6v" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.772788 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvzp9\" (UniqueName: \"kubernetes.io/projected/1415832f-40ef-48e7-ab66-b556c5110bd0-kube-api-access-pvzp9\") pod \"openstack-operator-controller-manager-5b95c8954b-jgt6v\" (UID: \"1415832f-40ef-48e7-ab66-b556c5110bd0\") " pod="openstack-operators/openstack-operator-controller-manager-5b95c8954b-jgt6v" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.772823 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx9pv\" (UniqueName: \"kubernetes.io/projected/465a2dcd-76d5-4af4-a791-e98a5dfbd2d4-kube-api-access-hx9pv\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-pwb65\" (UID: \"465a2dcd-76d5-4af4-a791-e98a5dfbd2d4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-pwb65" Oct 11 07:53:36 crc kubenswrapper[5016]: E1011 07:53:36.773138 5016 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Oct 11 07:53:36 crc kubenswrapper[5016]: E1011 07:53:36.773198 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1415832f-40ef-48e7-ab66-b556c5110bd0-cert podName:1415832f-40ef-48e7-ab66-b556c5110bd0 nodeName:}" failed. No retries permitted until 2025-10-11 07:53:37.273177377 +0000 UTC m=+805.173633433 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1415832f-40ef-48e7-ab66-b556c5110bd0-cert") pod "openstack-operator-controller-manager-5b95c8954b-jgt6v" (UID: "1415832f-40ef-48e7-ab66-b556c5110bd0") : secret "webhook-server-cert" not found Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.798290 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvzp9\" (UniqueName: \"kubernetes.io/projected/1415832f-40ef-48e7-ab66-b556c5110bd0-kube-api-access-pvzp9\") pod \"openstack-operator-controller-manager-5b95c8954b-jgt6v\" (UID: \"1415832f-40ef-48e7-ab66-b556c5110bd0\") " pod="openstack-operators/openstack-operator-controller-manager-5b95c8954b-jgt6v" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.799943 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx9pv\" (UniqueName: \"kubernetes.io/projected/465a2dcd-76d5-4af4-a791-e98a5dfbd2d4-kube-api-access-hx9pv\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-pwb65\" (UID: \"465a2dcd-76d5-4af4-a791-e98a5dfbd2d4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-pwb65" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.811791 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-rdhd7" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.827977 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5458f77c4-tdbg9" Oct 11 07:53:36 crc kubenswrapper[5016]: I1011 07:53:36.842744 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-5tl82" Oct 11 07:53:37 crc kubenswrapper[5016]: I1011 07:53:37.030448 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-pwb65" Oct 11 07:53:37 crc kubenswrapper[5016]: I1011 07:53:37.068887 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-scgsv" Oct 11 07:53:37 crc kubenswrapper[5016]: I1011 07:53:37.070445 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-scgsv" Oct 11 07:53:37 crc kubenswrapper[5016]: I1011 07:53:37.078176 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28800e92-f7fd-4764-ab21-b7ea8bd13c48-cert\") pod \"openstack-baremetal-operator-controller-manager-5956dffb7br27k2\" (UID: \"28800e92-f7fd-4764-ab21-b7ea8bd13c48\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7br27k2" Oct 11 07:53:37 crc kubenswrapper[5016]: I1011 07:53:37.090350 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28800e92-f7fd-4764-ab21-b7ea8bd13c48-cert\") pod \"openstack-baremetal-operator-controller-manager-5956dffb7br27k2\" (UID: \"28800e92-f7fd-4764-ab21-b7ea8bd13c48\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7br27k2" Oct 11 07:53:37 crc kubenswrapper[5016]: I1011 07:53:37.170398 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-scgsv" Oct 11 07:53:37 crc kubenswrapper[5016]: I1011 07:53:37.246041 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7br27k2" Oct 11 07:53:37 crc kubenswrapper[5016]: I1011 07:53:37.287628 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1415832f-40ef-48e7-ab66-b556c5110bd0-cert\") pod \"openstack-operator-controller-manager-5b95c8954b-jgt6v\" (UID: \"1415832f-40ef-48e7-ab66-b556c5110bd0\") " pod="openstack-operators/openstack-operator-controller-manager-5b95c8954b-jgt6v" Oct 11 07:53:37 crc kubenswrapper[5016]: I1011 07:53:37.293834 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1415832f-40ef-48e7-ab66-b556c5110bd0-cert\") pod \"openstack-operator-controller-manager-5b95c8954b-jgt6v\" (UID: \"1415832f-40ef-48e7-ab66-b556c5110bd0\") " pod="openstack-operators/openstack-operator-controller-manager-5b95c8954b-jgt6v" Oct 11 07:53:37 crc kubenswrapper[5016]: I1011 07:53:37.391570 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7b7fb68549-g5rms"] Oct 11 07:53:37 crc kubenswrapper[5016]: I1011 07:53:37.408282 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5f67fbc655-4t8kd"] Oct 11 07:53:37 crc kubenswrapper[5016]: I1011 07:53:37.413085 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-658bdf4b74-5k87v"] Oct 11 07:53:37 crc kubenswrapper[5016]: W1011 07:53:37.418487 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53845a5f_9403_4fc4_80b0_56a724bf5405.slice/crio-af92ed0086fff1bf7bb19951d4fba3056b331ede54ff0bef73210343f19bab4a WatchSource:0}: Error finding container af92ed0086fff1bf7bb19951d4fba3056b331ede54ff0bef73210343f19bab4a: Status 404 returned error can't find the container with id af92ed0086fff1bf7bb19951d4fba3056b331ede54ff0bef73210343f19bab4a Oct 11 07:53:37 crc kubenswrapper[5016]: I1011 07:53:37.424388 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-858f76bbdd-zbqbd"] Oct 11 07:53:37 crc kubenswrapper[5016]: W1011 07:53:37.428341 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28386d6d_81d2_4b50_8e61_f82bffe1cec5.slice/crio-bebd459c768e7cdc5387a93aa0b9cb10f970aaae0a66dee913764c66aa070428 WatchSource:0}: Error finding container bebd459c768e7cdc5387a93aa0b9cb10f970aaae0a66dee913764c66aa070428: Status 404 returned error can't find the container with id bebd459c768e7cdc5387a93aa0b9cb10f970aaae0a66dee913764c66aa070428 Oct 11 07:53:37 crc kubenswrapper[5016]: I1011 07:53:37.468279 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-5k87v" event={"ID":"ce574485-559e-47ce-82d5-df9228ee47e9","Type":"ContainerStarted","Data":"c278aa3837018ec4e5ad23e7015e6be40e86af8f746fa1880f7395cd5d0eb3f9"} Oct 11 07:53:37 crc kubenswrapper[5016]: I1011 07:53:37.481471 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-zbqbd" event={"ID":"53845a5f-9403-4fc4-80b0-56a724bf5405","Type":"ContainerStarted","Data":"af92ed0086fff1bf7bb19951d4fba3056b331ede54ff0bef73210343f19bab4a"} Oct 11 07:53:37 crc 
kubenswrapper[5016]: I1011 07:53:37.489464 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-8cjvz" event={"ID":"6347c5af-b7ed-4498-be85-e9a818a0e0d4","Type":"ContainerStarted","Data":"aadd2c27bfae9335faa96d2c5a763d4fac2b8ed3ab10371f28e9426f83178b91"} Oct 11 07:53:37 crc kubenswrapper[5016]: I1011 07:53:37.496674 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-4t8kd" event={"ID":"28386d6d-81d2-4b50-8e61-f82bffe1cec5","Type":"ContainerStarted","Data":"bebd459c768e7cdc5387a93aa0b9cb10f970aaae0a66dee913764c66aa070428"} Oct 11 07:53:37 crc kubenswrapper[5016]: I1011 07:53:37.500910 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-g5rms" event={"ID":"642e4a4e-69f3-4bb7-aa0d-55bb7809203a","Type":"ContainerStarted","Data":"501faf29daae4d50012667110bb5ec87bdf1ecf14337f52d005799bfbc17737c"} Oct 11 07:53:37 crc kubenswrapper[5016]: I1011 07:53:37.501064 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9j9jm" podUID="bf574e0c-536b-468f-a927-08fdab5f36ff" containerName="registry-server" containerID="cri-o://23198209a14dc35787c01768eb9093feeb61d2833e0e1b04693404e5278dd52e" gracePeriod=2 Oct 11 07:53:37 crc kubenswrapper[5016]: I1011 07:53:37.509701 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5b95c8954b-jgt6v" Oct 11 07:53:37 crc kubenswrapper[5016]: I1011 07:53:37.558173 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-scgsv" Oct 11 07:53:37 crc kubenswrapper[5016]: I1011 07:53:37.644176 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-84b9b84486-jvtkl"] Oct 11 07:53:37 crc kubenswrapper[5016]: I1011 07:53:37.653582 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5df598886f-2cwzg"] Oct 11 07:53:37 crc kubenswrapper[5016]: W1011 07:53:37.654649 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod683143f6_ebe0_47fb_b6c3_96680e673ff7.slice/crio-1efa660b2d7f429b2e91f7c6f6b496926addedd2a0a66408969aad9a19707c06 WatchSource:0}: Error finding container 1efa660b2d7f429b2e91f7c6f6b496926addedd2a0a66408969aad9a19707c06: Status 404 returned error can't find the container with id 1efa660b2d7f429b2e91f7c6f6b496926addedd2a0a66408969aad9a19707c06 Oct 11 07:53:37 crc kubenswrapper[5016]: I1011 07:53:37.662757 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7ffbcb7588-hdcq6"] Oct 11 07:53:37 crc kubenswrapper[5016]: W1011 07:53:37.665820 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1877ae13_d74b_4a7c_9f26_10757d256474.slice/crio-af47aaa10efb2329860375d4077a3927d6aced752678bea02cc887f81edc111c WatchSource:0}: Error finding container af47aaa10efb2329860375d4077a3927d6aced752678bea02cc887f81edc111c: Status 404 returned error can't find the container with id af47aaa10efb2329860375d4077a3927d6aced752678bea02cc887f81edc111c Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.004287 5016 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9j9jm" Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.129381 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-pwb65"] Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.166997 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7br27k2"] Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.171216 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7f554bff7b-5tl82"] Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.215613 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-68b6c87b68-7nrt2"] Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.217052 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf574e0c-536b-468f-a927-08fdab5f36ff-catalog-content\") pod \"bf574e0c-536b-468f-a927-08fdab5f36ff\" (UID: \"bf574e0c-536b-468f-a927-08fdab5f36ff\") " Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.217090 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf574e0c-536b-468f-a927-08fdab5f36ff-utilities\") pod \"bf574e0c-536b-468f-a927-08fdab5f36ff\" (UID: \"bf574e0c-536b-468f-a927-08fdab5f36ff\") " Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.230283 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll2hz\" (UniqueName: \"kubernetes.io/projected/bf574e0c-536b-468f-a927-08fdab5f36ff-kube-api-access-ll2hz\") pod \"bf574e0c-536b-468f-a927-08fdab5f36ff\" (UID: \"bf574e0c-536b-468f-a927-08fdab5f36ff\") " Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.222561 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf574e0c-536b-468f-a927-08fdab5f36ff-utilities" (OuterVolumeSpecName: "utilities") pod "bf574e0c-536b-468f-a927-08fdab5f36ff" (UID: "bf574e0c-536b-468f-a927-08fdab5f36ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.234009 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf574e0c-536b-468f-a927-08fdab5f36ff-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.244030 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-txvhv"] Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.249800 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-79df5fb58c-pgsds"] Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.255711 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf574e0c-536b-468f-a927-08fdab5f36ff-kube-api-access-ll2hz" (OuterVolumeSpecName: "kube-api-access-ll2hz") pod "bf574e0c-536b-468f-a927-08fdab5f36ff" (UID: "bf574e0c-536b-468f-a927-08fdab5f36ff"). InnerVolumeSpecName "kube-api-access-ll2hz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.260961 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5458f77c4-tdbg9"] Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.265726 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-79d585cb66-27vlc"] Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.268521 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-67cfc6749b-rdhd7"] Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.276426 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-lcm96"] Oct 11 07:53:38 crc kubenswrapper[5016]: W1011 07:53:38.277338 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ad1a6fa_ff96_40e9_ba42_6173bb1639be.slice/crio-4ada976cead971ce5a8ca7856c45828aeed50f48c26a31f8c71e76a03c54c2b1 WatchSource:0}: Error finding container 4ada976cead971ce5a8ca7856c45828aeed50f48c26a31f8c71e76a03c54c2b1: Status 404 returned error can't find the container with id 4ada976cead971ce5a8ca7856c45828aeed50f48c26a31f8c71e76a03c54c2b1 Oct 11 07:53:38 crc kubenswrapper[5016]: W1011 07:53:38.277678 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3da7b7ce_1358_4f29_851c_a1a95f1d5a6f.slice/crio-5df65a2f1c5baa22686ba045ddc3c45c4e83ec7697a13084f8f5fb497f55f37e WatchSource:0}: Error finding container 5df65a2f1c5baa22686ba045ddc3c45c4e83ec7697a13084f8f5fb497f55f37e: Status 404 returned error can't find the container with id 5df65a2f1c5baa22686ba045ddc3c45c4e83ec7697a13084f8f5fb497f55f37e Oct 11 07:53:38 crc kubenswrapper[5016]: W1011 07:53:38.279080 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f220420_4c7f_4f2b_a295_940d7e2f22da.slice/crio-3e181016bd27915181f3f86793ad0b7cf4bb4cd7711b2a6ff2c5e7549ff310f1 WatchSource:0}: Error finding container 3e181016bd27915181f3f86793ad0b7cf4bb4cd7711b2a6ff2c5e7549ff310f1: Status 404 returned error can't find the container with id 3e181016bd27915181f3f86793ad0b7cf4bb4cd7711b2a6ff2c5e7549ff310f1 Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.280591 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-scgsv"] Oct 11 07:53:38 crc kubenswrapper[5016]: W1011 07:53:38.282396 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4fb3be7_4bcb_4f6a_a1e8_619b47cbc411.slice/crio-52573f88c4b054f94b02c8e5022f9cb80964b1a7ec1913467f8a65a7c7d9d824 WatchSource:0}: Error finding container 52573f88c4b054f94b02c8e5022f9cb80964b1a7ec1913467f8a65a7c7d9d824: Status 404 returned error can't find the container with id 52573f88c4b054f94b02c8e5022f9cb80964b1a7ec1913467f8a65a7c7d9d824 Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.289117 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-db6d7f97b-4h6v7"] Oct 11 07:53:38 crc kubenswrapper[5016]: W1011 07:53:38.294574 5016 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod971b1fdb_ddf2_4662_b77f_e3b55ac12de7.slice/crio-feb0eb00b8ab504f3a6b4f6095fbd82ec97f3d804b96e650054c7d0e76db7eb0 WatchSource:0}: Error finding container feb0eb00b8ab504f3a6b4f6095fbd82ec97f3d804b96e650054c7d0e76db7eb0: Status 404 returned error can't find the container with id feb0eb00b8ab504f3a6b4f6095fbd82ec97f3d804b96e650054c7d0e76db7eb0 Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.300860 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-656bcbd775-q4rcq"] Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.301124 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf574e0c-536b-468f-a927-08fdab5f36ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf574e0c-536b-468f-a927-08fdab5f36ff" (UID: "bf574e0c-536b-468f-a927-08fdab5f36ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:53:38 crc kubenswrapper[5016]: W1011 07:53:38.310490 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc146e268_2093_47e5_aaa1_824de389d97a.slice/crio-a0df8b381bbc0310c186f4e3d21cf98a7da9f20f2a0714aea505162ddd6185e2 WatchSource:0}: Error finding container a0df8b381bbc0310c186f4e3d21cf98a7da9f20f2a0714aea505162ddd6185e2: Status 404 returned error can't find the container with id a0df8b381bbc0310c186f4e3d21cf98a7da9f20f2a0714aea505162ddd6185e2 Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.310779 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5b95c8954b-jgt6v"] Oct 11 07:53:38 crc kubenswrapper[5016]: E1011 07:53:38.310869 5016 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d33c1f507e1f5b9a4bf226ad98917e92101ac66b36e19d35cbe04ae7014f6bff,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4xsj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-68b6c87b68-7nrt2_openstack-operators(971b1fdb-ddf2-4662-b77f-e3b55ac12de7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 11 07:53:38 crc kubenswrapper[5016]: E1011 07:53:38.310876 5016 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:4b4a17fe08ce00e375afaaec6a28835f5c1784f03d11c4558376ac04130f3a9e,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tdrsr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-db6d7f97b-4h6v7_openstack-operators(314a9915-c5b2-45c6-ad73-17bcf42d80cc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 11 07:53:38 crc kubenswrapper[5016]: E1011 07:53:38.312125 5016 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:7e584b1c430441c8b6591dadeff32e065de8a185ad37ef90d2e08d37e59aab4a,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wnx7l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5458f77c4-tdbg9_openstack-operators(cfabcb8e-bad0-4179-81d3-0d6c2a874793): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.313272 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-2mll5"] Oct 11 07:53:38 crc kubenswrapper[5016]: E1011 07:53:38.316360 5016 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:47278ed28e02df00892f941763aa0d69547327318e8a983e07f4577acd288167,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ggqqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-f9fb45f8f-txvhv_openstack-operators(c146e268-2093-47e5-aaa1-824de389d97a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 11 07:53:38 crc kubenswrapper[5016]: E1011 07:53:38.317708 5016 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:79b43a69884631c635d2164b95a2d4ec68f5cb33f96da14764f1c710880f3997,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-clxw5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-55b6b7c7b8-lcm96_openstack-operators(e2954d6f-57ba-49c4-ac53-7aa4600cf1b2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.320084 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-9c5c78d49-d5p22"] Oct 11 07:53:38 crc kubenswrapper[5016]: E1011 07:53:38.330135 5016 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:ee05f2b06405240a8fcdbd430a9e8983b4667f372548334307b68c154e389960,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4khrn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ironic-operator-controller-manager-9c5c78d49-d5p22_openstack-operators(b53501aa-b72c-457d-ad20-1f57abd81645): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 11 07:53:38 crc kubenswrapper[5016]: E1011 07:53:38.330206 5016 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:5cfb2ae1092445950b39dd59caa9a8c9367f42fb8353a8c3848d3bc729f24492,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82vzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-656bcbd775-q4rcq_openstack-operators(f36d2ba0-eaa2-48d4-8367-3b718a86b54a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.334623 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ll2hz\" (UniqueName: \"kubernetes.io/projected/bf574e0c-536b-468f-a927-08fdab5f36ff-kube-api-access-ll2hz\") on node \"crc\" DevicePath \"\"" Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.334663 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf574e0c-536b-468f-a927-08fdab5f36ff-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.506813 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-lcm96" event={"ID":"e2954d6f-57ba-49c4-ac53-7aa4600cf1b2","Type":"ContainerStarted","Data":"6d73ff65fee20f9741d7366bb40599d8ca4e64e38375ed3d20779805bd5df4a1"} Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.508146 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-pwb65" event={"ID":"465a2dcd-76d5-4af4-a791-e98a5dfbd2d4","Type":"ContainerStarted","Data":"8dbc10428e1d4749ea10c46269f922c16e0fd248f6c80904a2943b36c2bee7a1"} Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.509237 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-27vlc" event={"ID":"3da7b7ce-1358-4f29-851c-a1a95f1d5a6f","Type":"ContainerStarted","Data":"5df65a2f1c5baa22686ba045ddc3c45c4e83ec7697a13084f8f5fb497f55f37e"} Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.510224 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-5tl82" event={"ID":"c2a90822-b5db-4dcd-9bb0-6e6fdc371a49","Type":"ContainerStarted","Data":"9ee6c778e2352ec9dcd5a93942a39e6bded294d570f89816500f77965582a43c"} Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.511890 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-d5p22" event={"ID":"b53501aa-b72c-457d-ad20-1f57abd81645","Type":"ContainerStarted","Data":"e8ac484d634dd40a84fb7c58c6089e10be947f20c3eeb0ba055c9f1bfca334dd"} Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.512674 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-2mll5" event={"ID":"a4fb3be7-4bcb-4f6a-a1e8-619b47cbc411","Type":"ContainerStarted","Data":"52573f88c4b054f94b02c8e5022f9cb80964b1a7ec1913467f8a65a7c7d9d824"} Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.513882 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-jvtkl" event={"ID":"683143f6-ebe0-47fb-b6c3-96680e673ff7","Type":"ContainerStarted","Data":"1efa660b2d7f429b2e91f7c6f6b496926addedd2a0a66408969aad9a19707c06"} Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.514797 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-rdhd7" event={"ID":"2f220420-4c7f-4f2b-a295-940d7e2f22da","Type":"ContainerStarted","Data":"3e181016bd27915181f3f86793ad0b7cf4bb4cd7711b2a6ff2c5e7549ff310f1"} Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.517731 5016 generic.go:334] "Generic (PLEG): container finished" podID="bf574e0c-536b-468f-a927-08fdab5f36ff" containerID="23198209a14dc35787c01768eb9093feeb61d2833e0e1b04693404e5278dd52e" exitCode=0 Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.517756 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9j9jm" event={"ID":"bf574e0c-536b-468f-a927-08fdab5f36ff","Type":"ContainerDied","Data":"23198209a14dc35787c01768eb9093feeb61d2833e0e1b04693404e5278dd52e"} Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.517783 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9j9jm" 
event={"ID":"bf574e0c-536b-468f-a927-08fdab5f36ff","Type":"ContainerDied","Data":"526590aede1ce43ee201c4647e5fd9fcab717e3427cc46d2e9abb8bdc72d70a3"} Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.517803 5016 scope.go:117] "RemoveContainer" containerID="23198209a14dc35787c01768eb9093feeb61d2833e0e1b04693404e5278dd52e" Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.517837 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9j9jm" Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.520682 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-7nrt2" event={"ID":"971b1fdb-ddf2-4662-b77f-e3b55ac12de7","Type":"ContainerStarted","Data":"feb0eb00b8ab504f3a6b4f6095fbd82ec97f3d804b96e650054c7d0e76db7eb0"} Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.522257 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-hdcq6" event={"ID":"1877ae13-d74b-4a7c-9f26-10757d256474","Type":"ContainerStarted","Data":"af47aaa10efb2329860375d4077a3927d6aced752678bea02cc887f81edc111c"} Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.523098 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5df598886f-2cwzg" event={"ID":"81fc8139-b3e6-4aa4-a2a3-3488428fdd67","Type":"ContainerStarted","Data":"21891e3edbf2092489b0078e770f080ec8743bfba678290d9d19dbaf4fa60364"} Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.524064 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-4h6v7" event={"ID":"314a9915-c5b2-45c6-ad73-17bcf42d80cc","Type":"ContainerStarted","Data":"62c833f3e3529d4df616f27d92940a8f845296c21ae84965cab11b5a940c282c"} Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.525079 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5458f77c4-tdbg9" event={"ID":"cfabcb8e-bad0-4179-81d3-0d6c2a874793","Type":"ContainerStarted","Data":"b5f048fe6a6f08ac809c49ba01831c9421492ebd20f77561bb15b3ecae68b2b6"} Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.525904 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5b95c8954b-jgt6v" event={"ID":"1415832f-40ef-48e7-ab66-b556c5110bd0","Type":"ContainerStarted","Data":"920ecdb17b67285765d6c3d09445d9d51140c64350659ee0e32bfc0fdc3d668a"} Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.527225 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-pgsds" event={"ID":"3ad1a6fa-ff96-40e9-ba42-6173bb1639be","Type":"ContainerStarted","Data":"4ada976cead971ce5a8ca7856c45828aeed50f48c26a31f8c71e76a03c54c2b1"} Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.528397 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-q4rcq" event={"ID":"f36d2ba0-eaa2-48d4-8367-3b718a86b54a","Type":"ContainerStarted","Data":"f6545d70a2b039c713a145e386174b3b0244d48dab33d6d16426c3a930550331"} Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.530508 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-txvhv" 
event={"ID":"c146e268-2093-47e5-aaa1-824de389d97a","Type":"ContainerStarted","Data":"a0df8b381bbc0310c186f4e3d21cf98a7da9f20f2a0714aea505162ddd6185e2"} Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.535104 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7br27k2" event={"ID":"28800e92-f7fd-4764-ab21-b7ea8bd13c48","Type":"ContainerStarted","Data":"63f0fc6d9249a3f7853b6ba55c2c17b86e0da14e610b8f447ec43971cce9f7ba"} Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.557301 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9j9jm"] Oct 11 07:53:38 crc kubenswrapper[5016]: I1011 07:53:38.565346 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9j9jm"] Oct 11 07:53:39 crc kubenswrapper[5016]: I1011 07:53:39.140717 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf574e0c-536b-468f-a927-08fdab5f36ff" path="/var/lib/kubelet/pods/bf574e0c-536b-468f-a927-08fdab5f36ff/volumes" Oct 11 07:53:39 crc kubenswrapper[5016]: E1011 07:53:39.212739 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-5458f77c4-tdbg9" podUID="cfabcb8e-bad0-4179-81d3-0d6c2a874793" Oct 11 07:53:39 crc kubenswrapper[5016]: I1011 07:53:39.545785 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5b95c8954b-jgt6v" event={"ID":"1415832f-40ef-48e7-ab66-b556c5110bd0","Type":"ContainerStarted","Data":"7fb276cf8dce05a87e0a12a20f5d60d78733885ca1121c80b4dce34572958305"} Oct 11 07:53:39 crc kubenswrapper[5016]: I1011 07:53:39.547358 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-scgsv" podUID="1861edc2-2e2d-4ff1-b991-48723e355c31" containerName="registry-server" containerID="cri-o://48afc95b8daffa63d82a3b2dedc366a0031b999a9dc3cb3421c597b2639f5024" gracePeriod=2 Oct 11 07:53:39 crc kubenswrapper[5016]: I1011 07:53:39.548132 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5458f77c4-tdbg9" event={"ID":"cfabcb8e-bad0-4179-81d3-0d6c2a874793","Type":"ContainerStarted","Data":"98c1dafee349c934d6182f95d753d362f344d90eb8adc53933a4bc38fbeb223d"} Oct 11 07:53:39 crc kubenswrapper[5016]: E1011 07:53:39.550599 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:7e584b1c430441c8b6591dadeff32e065de8a185ad37ef90d2e08d37e59aab4a\\\"\"" pod="openstack-operators/test-operator-controller-manager-5458f77c4-tdbg9" podUID="cfabcb8e-bad0-4179-81d3-0d6c2a874793" Oct 11 07:53:40 crc kubenswrapper[5016]: E1011 07:53:40.431204 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-4h6v7" podUID="314a9915-c5b2-45c6-ad73-17bcf42d80cc" Oct 11 07:53:40 crc kubenswrapper[5016]: I1011 07:53:40.556566 5016 generic.go:334] "Generic (PLEG): container finished" podID="1861edc2-2e2d-4ff1-b991-48723e355c31" containerID="48afc95b8daffa63d82a3b2dedc366a0031b999a9dc3cb3421c597b2639f5024" 
exitCode=0 Oct 11 07:53:40 crc kubenswrapper[5016]: I1011 07:53:40.556669 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-scgsv" event={"ID":"1861edc2-2e2d-4ff1-b991-48723e355c31","Type":"ContainerDied","Data":"48afc95b8daffa63d82a3b2dedc366a0031b999a9dc3cb3421c597b2639f5024"} Oct 11 07:53:40 crc kubenswrapper[5016]: I1011 07:53:40.558041 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-4h6v7" event={"ID":"314a9915-c5b2-45c6-ad73-17bcf42d80cc","Type":"ContainerStarted","Data":"e6962d3bfd4a67e509d2372a288b46552ba2518226d7bd1ccd3c5def74f8181e"} Oct 11 07:53:40 crc kubenswrapper[5016]: E1011 07:53:40.559937 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:4b4a17fe08ce00e375afaaec6a28835f5c1784f03d11c4558376ac04130f3a9e\\\"\"" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-4h6v7" podUID="314a9915-c5b2-45c6-ad73-17bcf42d80cc" Oct 11 07:53:40 crc kubenswrapper[5016]: E1011 07:53:40.561550 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:7e584b1c430441c8b6591dadeff32e065de8a185ad37ef90d2e08d37e59aab4a\\\"\"" pod="openstack-operators/test-operator-controller-manager-5458f77c4-tdbg9" podUID="cfabcb8e-bad0-4179-81d3-0d6c2a874793" Oct 11 07:53:40 crc kubenswrapper[5016]: I1011 07:53:40.793004 5016 scope.go:117] "RemoveContainer" containerID="46806825913e803a203d7e5a5397d4b1b102f10b9cd49c909de6815e9f1fb9a7" Oct 11 07:53:41 crc kubenswrapper[5016]: E1011 07:53:41.566241 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:4b4a17fe08ce00e375afaaec6a28835f5c1784f03d11c4558376ac04130f3a9e\\\"\"" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-4h6v7" podUID="314a9915-c5b2-45c6-ad73-17bcf42d80cc" Oct 11 07:53:46 crc kubenswrapper[5016]: E1011 07:53:46.753039 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-q4rcq" podUID="f36d2ba0-eaa2-48d4-8367-3b718a86b54a" Oct 11 07:53:47 crc kubenswrapper[5016]: E1011 07:53:47.062303 5016 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 48afc95b8daffa63d82a3b2dedc366a0031b999a9dc3cb3421c597b2639f5024 is running failed: container process not found" containerID="48afc95b8daffa63d82a3b2dedc366a0031b999a9dc3cb3421c597b2639f5024" cmd=["grpc_health_probe","-addr=:50051"] Oct 11 07:53:47 crc kubenswrapper[5016]: E1011 07:53:47.062903 5016 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 48afc95b8daffa63d82a3b2dedc366a0031b999a9dc3cb3421c597b2639f5024 is running failed: container process not found" containerID="48afc95b8daffa63d82a3b2dedc366a0031b999a9dc3cb3421c597b2639f5024" cmd=["grpc_health_probe","-addr=:50051"] Oct 11 07:53:47 crc kubenswrapper[5016]: E1011 
07:53:47.063581 5016 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 48afc95b8daffa63d82a3b2dedc366a0031b999a9dc3cb3421c597b2639f5024 is running failed: container process not found" containerID="48afc95b8daffa63d82a3b2dedc366a0031b999a9dc3cb3421c597b2639f5024" cmd=["grpc_health_probe","-addr=:50051"] Oct 11 07:53:47 crc kubenswrapper[5016]: E1011 07:53:47.063675 5016 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 48afc95b8daffa63d82a3b2dedc366a0031b999a9dc3cb3421c597b2639f5024 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-scgsv" podUID="1861edc2-2e2d-4ff1-b991-48723e355c31" containerName="registry-server" Oct 11 07:53:47 crc kubenswrapper[5016]: I1011 07:53:47.612299 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-q4rcq" event={"ID":"f36d2ba0-eaa2-48d4-8367-3b718a86b54a","Type":"ContainerStarted","Data":"78ac38eb1511464fdc1de135fab332a898b9b009302ed4f4cc1adce7d0c706d7"} Oct 11 07:53:50 crc kubenswrapper[5016]: I1011 07:53:50.270749 5016 scope.go:117] "RemoveContainer" containerID="11a0806c48cafe5701a3adc6b788b03215ea3735f80b0c57c044e6d710506ea3" Oct 11 07:53:50 crc kubenswrapper[5016]: I1011 07:53:50.358607 5016 scope.go:117] "RemoveContainer" containerID="23198209a14dc35787c01768eb9093feeb61d2833e0e1b04693404e5278dd52e" Oct 11 07:53:50 crc kubenswrapper[5016]: E1011 07:53:50.359139 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23198209a14dc35787c01768eb9093feeb61d2833e0e1b04693404e5278dd52e\": container with ID starting with 23198209a14dc35787c01768eb9093feeb61d2833e0e1b04693404e5278dd52e not found: ID does not exist" containerID="23198209a14dc35787c01768eb9093feeb61d2833e0e1b04693404e5278dd52e" Oct 11 07:53:50 crc kubenswrapper[5016]: I1011 07:53:50.359206 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23198209a14dc35787c01768eb9093feeb61d2833e0e1b04693404e5278dd52e"} err="failed to get container status \"23198209a14dc35787c01768eb9093feeb61d2833e0e1b04693404e5278dd52e\": rpc error: code = NotFound desc = could not find container \"23198209a14dc35787c01768eb9093feeb61d2833e0e1b04693404e5278dd52e\": container with ID starting with 23198209a14dc35787c01768eb9093feeb61d2833e0e1b04693404e5278dd52e not found: ID does not exist" Oct 11 07:53:50 crc kubenswrapper[5016]: I1011 07:53:50.359247 5016 scope.go:117] "RemoveContainer" containerID="46806825913e803a203d7e5a5397d4b1b102f10b9cd49c909de6815e9f1fb9a7" Oct 11 07:53:50 crc kubenswrapper[5016]: E1011 07:53:50.360941 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46806825913e803a203d7e5a5397d4b1b102f10b9cd49c909de6815e9f1fb9a7\": container with ID starting with 46806825913e803a203d7e5a5397d4b1b102f10b9cd49c909de6815e9f1fb9a7 not found: ID does not exist" containerID="46806825913e803a203d7e5a5397d4b1b102f10b9cd49c909de6815e9f1fb9a7" Oct 11 07:53:50 crc kubenswrapper[5016]: I1011 07:53:50.360981 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46806825913e803a203d7e5a5397d4b1b102f10b9cd49c909de6815e9f1fb9a7"} err="failed to get container status 
\"46806825913e803a203d7e5a5397d4b1b102f10b9cd49c909de6815e9f1fb9a7\": rpc error: code = NotFound desc = could not find container \"46806825913e803a203d7e5a5397d4b1b102f10b9cd49c909de6815e9f1fb9a7\": container with ID starting with 46806825913e803a203d7e5a5397d4b1b102f10b9cd49c909de6815e9f1fb9a7 not found: ID does not exist" Oct 11 07:53:50 crc kubenswrapper[5016]: I1011 07:53:50.361002 5016 scope.go:117] "RemoveContainer" containerID="11a0806c48cafe5701a3adc6b788b03215ea3735f80b0c57c044e6d710506ea3" Oct 11 07:53:50 crc kubenswrapper[5016]: E1011 07:53:50.362164 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11a0806c48cafe5701a3adc6b788b03215ea3735f80b0c57c044e6d710506ea3\": container with ID starting with 11a0806c48cafe5701a3adc6b788b03215ea3735f80b0c57c044e6d710506ea3 not found: ID does not exist" containerID="11a0806c48cafe5701a3adc6b788b03215ea3735f80b0c57c044e6d710506ea3" Oct 11 07:53:50 crc kubenswrapper[5016]: I1011 07:53:50.362216 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11a0806c48cafe5701a3adc6b788b03215ea3735f80b0c57c044e6d710506ea3"} err="failed to get container status \"11a0806c48cafe5701a3adc6b788b03215ea3735f80b0c57c044e6d710506ea3\": rpc error: code = NotFound desc = could not find container \"11a0806c48cafe5701a3adc6b788b03215ea3735f80b0c57c044e6d710506ea3\": container with ID starting with 11a0806c48cafe5701a3adc6b788b03215ea3735f80b0c57c044e6d710506ea3 not found: ID does not exist" Oct 11 07:53:50 crc kubenswrapper[5016]: I1011 07:53:50.439926 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-scgsv" Oct 11 07:53:50 crc kubenswrapper[5016]: I1011 07:53:50.526134 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1861edc2-2e2d-4ff1-b991-48723e355c31-utilities\") pod \"1861edc2-2e2d-4ff1-b991-48723e355c31\" (UID: \"1861edc2-2e2d-4ff1-b991-48723e355c31\") " Oct 11 07:53:50 crc kubenswrapper[5016]: I1011 07:53:50.526224 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqkpc\" (UniqueName: \"kubernetes.io/projected/1861edc2-2e2d-4ff1-b991-48723e355c31-kube-api-access-fqkpc\") pod \"1861edc2-2e2d-4ff1-b991-48723e355c31\" (UID: \"1861edc2-2e2d-4ff1-b991-48723e355c31\") " Oct 11 07:53:50 crc kubenswrapper[5016]: I1011 07:53:50.526322 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1861edc2-2e2d-4ff1-b991-48723e355c31-catalog-content\") pod \"1861edc2-2e2d-4ff1-b991-48723e355c31\" (UID: \"1861edc2-2e2d-4ff1-b991-48723e355c31\") " Oct 11 07:53:50 crc kubenswrapper[5016]: I1011 07:53:50.527837 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1861edc2-2e2d-4ff1-b991-48723e355c31-utilities" (OuterVolumeSpecName: "utilities") pod "1861edc2-2e2d-4ff1-b991-48723e355c31" (UID: "1861edc2-2e2d-4ff1-b991-48723e355c31"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:53:50 crc kubenswrapper[5016]: I1011 07:53:50.535533 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1861edc2-2e2d-4ff1-b991-48723e355c31-kube-api-access-fqkpc" (OuterVolumeSpecName: "kube-api-access-fqkpc") pod "1861edc2-2e2d-4ff1-b991-48723e355c31" (UID: "1861edc2-2e2d-4ff1-b991-48723e355c31"). InnerVolumeSpecName "kube-api-access-fqkpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:53:50 crc kubenswrapper[5016]: I1011 07:53:50.569410 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1861edc2-2e2d-4ff1-b991-48723e355c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1861edc2-2e2d-4ff1-b991-48723e355c31" (UID: "1861edc2-2e2d-4ff1-b991-48723e355c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:53:50 crc kubenswrapper[5016]: E1011 07:53:50.619816 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-d5p22" podUID="b53501aa-b72c-457d-ad20-1f57abd81645" Oct 11 07:53:50 crc kubenswrapper[5016]: I1011 07:53:50.627573 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1861edc2-2e2d-4ff1-b991-48723e355c31-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 07:53:50 crc kubenswrapper[5016]: I1011 07:53:50.627616 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1861edc2-2e2d-4ff1-b991-48723e355c31-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 07:53:50 crc kubenswrapper[5016]: I1011 07:53:50.627625 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqkpc\" (UniqueName: \"kubernetes.io/projected/1861edc2-2e2d-4ff1-b991-48723e355c31-kube-api-access-fqkpc\") on node \"crc\" DevicePath \"\"" Oct 11 07:53:50 crc kubenswrapper[5016]: I1011 07:53:50.651748 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-d5p22" event={"ID":"b53501aa-b72c-457d-ad20-1f57abd81645","Type":"ContainerStarted","Data":"596c6438a91708cf1d55e1a959a0cf1f97c3d85dd2e0f7ba916c54fbb7bec84e"} Oct 11 07:53:50 crc kubenswrapper[5016]: I1011 07:53:50.657901 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-scgsv" event={"ID":"1861edc2-2e2d-4ff1-b991-48723e355c31","Type":"ContainerDied","Data":"18b3763d90a410b78551a8147eaabb8471119b02dec312e124bfd589090aa25b"} Oct 11 07:53:50 crc kubenswrapper[5016]: I1011 07:53:50.657942 5016 scope.go:117] "RemoveContainer" containerID="48afc95b8daffa63d82a3b2dedc366a0031b999a9dc3cb3421c597b2639f5024" Oct 11 07:53:50 crc kubenswrapper[5016]: I1011 07:53:50.657934 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-scgsv" Oct 11 07:53:50 crc kubenswrapper[5016]: I1011 07:53:50.684629 5016 scope.go:117] "RemoveContainer" containerID="b50c2258777b8bf20a2c26b725953d2648a1d0a7d0a13cacaacfe6a31b3ed247" Oct 11 07:53:50 crc kubenswrapper[5016]: I1011 07:53:50.753549 5016 scope.go:117] "RemoveContainer" containerID="2ddb25a1f89801ca42dfd44bf7f1ce8e454bf7a6bf9e14ae9330c0fa0fdbc320" Oct 11 07:53:50 crc kubenswrapper[5016]: I1011 07:53:50.762698 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-scgsv"] Oct 11 07:53:50 crc kubenswrapper[5016]: I1011 07:53:50.771889 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-scgsv"] Oct 11 07:53:50 crc kubenswrapper[5016]: E1011 07:53:50.814279 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-lcm96" podUID="e2954d6f-57ba-49c4-ac53-7aa4600cf1b2" Oct 11 07:53:50 crc kubenswrapper[5016]: E1011 07:53:50.832732 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-7nrt2" podUID="971b1fdb-ddf2-4662-b77f-e3b55ac12de7" Oct 11 07:53:50 crc kubenswrapper[5016]: E1011 07:53:50.881804 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-txvhv" podUID="c146e268-2093-47e5-aaa1-824de389d97a" Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.174116 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1861edc2-2e2d-4ff1-b991-48723e355c31" path="/var/lib/kubelet/pods/1861edc2-2e2d-4ff1-b991-48723e355c31/volumes" Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.716741 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-zbqbd" event={"ID":"53845a5f-9403-4fc4-80b0-56a724bf5405","Type":"ContainerStarted","Data":"9afa1da64211b75211af847f2964309d9d1bb75f59563bf8670f4125456b3ba9"} Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.728010 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-pwb65" event={"ID":"465a2dcd-76d5-4af4-a791-e98a5dfbd2d4","Type":"ContainerStarted","Data":"9d66194fcda86cd71c198fc8986d8b8490b6dcf4b0f3c36d7fcaf7b1a8f1792e"} Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.730075 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-2mll5" event={"ID":"a4fb3be7-4bcb-4f6a-a1e8-619b47cbc411","Type":"ContainerStarted","Data":"65980eca0a4dc2b4599633a305b6aba6767c8e646e3a9fd3b5e3e5c5c691d60f"} Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.736741 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5df598886f-2cwzg" event={"ID":"81fc8139-b3e6-4aa4-a2a3-3488428fdd67","Type":"ContainerStarted","Data":"83a90297b977e30a992b1c82420b40bc0057bb715ce6cc64838ed6c2f72041f9"} Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.758970 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-4t8kd" event={"ID":"28386d6d-81d2-4b50-8e61-f82bffe1cec5","Type":"ContainerStarted","Data":"b8ebeb9672681d83528d97bc51fcb24d17426f9f1d43eaf40a5ac1a5f2662425"} Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.759018 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-4t8kd" event={"ID":"28386d6d-81d2-4b50-8e61-f82bffe1cec5","Type":"ContainerStarted","Data":"1381044e271d9de1f498452710332d4989cf202bd156ac7c3fc6bff0d39db9eb"} Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.759676 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-4t8kd" Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.764342 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-pwb65" podStartSLOduration=3.58646895 podStartE2EDuration="15.764322853s" podCreationTimestamp="2025-10-11 07:53:36 +0000 UTC" firstStartedPulling="2025-10-11 07:53:38.180950401 +0000 UTC m=+806.081406337" lastFinishedPulling="2025-10-11 07:53:50.358804284 +0000 UTC m=+818.259260240" observedRunningTime="2025-10-11 07:53:51.759871683 +0000 UTC m=+819.660327629" watchObservedRunningTime="2025-10-11 07:53:51.764322853 +0000 UTC m=+819.664778799" Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.767423 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7br27k2" event={"ID":"28800e92-f7fd-4764-ab21-b7ea8bd13c48","Type":"ContainerStarted","Data":"c237c1d68e2e8a3730b06207633808329883e846ae94b6bf599fb93c61521136"} Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.767470 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7br27k2" event={"ID":"28800e92-f7fd-4764-ab21-b7ea8bd13c48","Type":"ContainerStarted","Data":"c71ee11f2b47b79a2a27d73df6459cddc902120dd5ab4b03f7681cb9a3a6b3f1"} Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.768122 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7br27k2" Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.786351 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-8cjvz" event={"ID":"6347c5af-b7ed-4498-be85-e9a818a0e0d4","Type":"ContainerStarted","Data":"c9516310337fb155354e762cbe47281c15f5ed5b07f4a8432d35f1f2e6e86f0b"} Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.786399 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-8cjvz" event={"ID":"6347c5af-b7ed-4498-be85-e9a818a0e0d4","Type":"ContainerStarted","Data":"175003fccec015a7f9a710e74620655401ae608b2757b161d8fffeaf6a0aed89"} Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.787323 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-8cjvz" Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.798663 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-pgsds" 
event={"ID":"3ad1a6fa-ff96-40e9-ba42-6173bb1639be","Type":"ContainerStarted","Data":"7e061aee7361e4be8e2a72dbbb40dd735e271b47f30acee197d6295ea1f37dd1"} Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.798707 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-pgsds" event={"ID":"3ad1a6fa-ff96-40e9-ba42-6173bb1639be","Type":"ContainerStarted","Data":"b9373295c23083f7114adee0d8b2e1a45d9bdfdfa0a366b9a25f08f878642998"} Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.799434 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-pgsds" Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.802370 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-27vlc" event={"ID":"3da7b7ce-1358-4f29-851c-a1a95f1d5a6f","Type":"ContainerStarted","Data":"adf6bdaf12029dafcda98a6a30b5d0bab0f0ad91ca336ce3ffcb7cd325a8b9ec"} Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.806831 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-4t8kd" podStartSLOduration=3.933406036 podStartE2EDuration="16.80681286s" podCreationTimestamp="2025-10-11 07:53:35 +0000 UTC" firstStartedPulling="2025-10-11 07:53:37.430395194 +0000 UTC m=+805.330851150" lastFinishedPulling="2025-10-11 07:53:50.303802038 +0000 UTC m=+818.204257974" observedRunningTime="2025-10-11 07:53:51.798319251 +0000 UTC m=+819.698775197" watchObservedRunningTime="2025-10-11 07:53:51.80681286 +0000 UTC m=+819.707268806" Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.825104 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-hdcq6" event={"ID":"1877ae13-d74b-4a7c-9f26-10757d256474","Type":"ContainerStarted","Data":"dd6cc6e01876a827162bcf18df4c3960e4388d725400cf8fc08f16d404a88ebf"} Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.833874 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-g5rms" event={"ID":"642e4a4e-69f3-4bb7-aa0d-55bb7809203a","Type":"ContainerStarted","Data":"d2dfc501c3721f616ad0ffa6f50a99d510afd9a8103dd16ed21dc4d85f763b9a"} Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.835184 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-5k87v" event={"ID":"ce574485-559e-47ce-82d5-df9228ee47e9","Type":"ContainerStarted","Data":"59f705b191f92912dc5638ab43bf83a6ea5d299ef5bb9f3d0d3cb262b9e4c0a3"} Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.836711 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-txvhv" event={"ID":"c146e268-2093-47e5-aaa1-824de389d97a","Type":"ContainerStarted","Data":"946ce38441bd56b0d8977da2a2089b803e43e054878be314382bbefd8aecc932"} Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.842333 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-7nrt2" event={"ID":"971b1fdb-ddf2-4662-b77f-e3b55ac12de7","Type":"ContainerStarted","Data":"44c1672b9510c4f9403bc51519bd2a1f314b8eaab743187a6e8c08810d905813"} Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.850936 5016 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7br27k2" podStartSLOduration=3.767449101 podStartE2EDuration="15.850916217s" podCreationTimestamp="2025-10-11 07:53:36 +0000 UTC" firstStartedPulling="2025-10-11 07:53:38.221360137 +0000 UTC m=+806.121816083" lastFinishedPulling="2025-10-11 07:53:50.304827253 +0000 UTC m=+818.205283199" observedRunningTime="2025-10-11 07:53:51.831636061 +0000 UTC m=+819.732092007" watchObservedRunningTime="2025-10-11 07:53:51.850916217 +0000 UTC m=+819.751372163" Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.865030 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-8cjvz" podStartSLOduration=3.467883013 podStartE2EDuration="16.865009594s" podCreationTimestamp="2025-10-11 07:53:35 +0000 UTC" firstStartedPulling="2025-10-11 07:53:36.810201569 +0000 UTC m=+804.710657515" lastFinishedPulling="2025-10-11 07:53:50.20732815 +0000 UTC m=+818.107784096" observedRunningTime="2025-10-11 07:53:51.852015414 +0000 UTC m=+819.752471360" watchObservedRunningTime="2025-10-11 07:53:51.865009594 +0000 UTC m=+819.765465540" Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.873675 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-pgsds" podStartSLOduration=3.796856335 podStartE2EDuration="15.873645477s" podCreationTimestamp="2025-10-11 07:53:36 +0000 UTC" firstStartedPulling="2025-10-11 07:53:38.282172666 +0000 UTC m=+806.182628612" lastFinishedPulling="2025-10-11 07:53:50.358961808 +0000 UTC m=+818.259417754" observedRunningTime="2025-10-11 07:53:51.871606027 +0000 UTC m=+819.772061973" watchObservedRunningTime="2025-10-11 07:53:51.873645477 +0000 UTC m=+819.774101423" Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.875338 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5b95c8954b-jgt6v" event={"ID":"1415832f-40ef-48e7-ab66-b556c5110bd0","Type":"ContainerStarted","Data":"a48340a1b6e672afc29fefcf5c520f26f8a21c78acfdf6664e9beeb909367679"} Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.876107 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5b95c8954b-jgt6v" Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.882198 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5b95c8954b-jgt6v" Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.882225 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-rdhd7" event={"ID":"2f220420-4c7f-4f2b-a295-940d7e2f22da","Type":"ContainerStarted","Data":"e97e0d712012302fb83d787778dafe7688c8f2b287b87e06d34e02a82b215f81"} Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.883861 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-jvtkl" event={"ID":"683143f6-ebe0-47fb-b6c3-96680e673ff7","Type":"ContainerStarted","Data":"c342c6e6417cf9bb1d93c1631f259946a795e36ad20a09d83565274fd66636f0"} Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.885775 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-5tl82" event={"ID":"c2a90822-b5db-4dcd-9bb0-6e6fdc371a49","Type":"ContainerStarted","Data":"2499961cf367dfa6c78dede7368a274169a184c7054f5345f2f2a80143f67139"} Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.892190 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-lcm96" event={"ID":"e2954d6f-57ba-49c4-ac53-7aa4600cf1b2","Type":"ContainerStarted","Data":"61562a8a7035d52ec2f121b161cd12b89dd973751c9d2d89191deb66323d5138"} Oct 11 07:53:51 crc kubenswrapper[5016]: I1011 07:53:51.952717 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5b95c8954b-jgt6v" podStartSLOduration=15.952698496 podStartE2EDuration="15.952698496s" podCreationTimestamp="2025-10-11 07:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:53:51.951634629 +0000 UTC m=+819.852090575" watchObservedRunningTime="2025-10-11 07:53:51.952698496 +0000 UTC m=+819.853154442" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.722800 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p4kwr"] Oct 11 07:53:52 crc kubenswrapper[5016]: E1011 07:53:52.723207 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1861edc2-2e2d-4ff1-b991-48723e355c31" containerName="extract-utilities" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.723224 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="1861edc2-2e2d-4ff1-b991-48723e355c31" containerName="extract-utilities" Oct 11 07:53:52 crc kubenswrapper[5016]: E1011 07:53:52.723247 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1861edc2-2e2d-4ff1-b991-48723e355c31" containerName="extract-content" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.723254 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="1861edc2-2e2d-4ff1-b991-48723e355c31" containerName="extract-content" Oct 11 07:53:52 crc kubenswrapper[5016]: E1011 07:53:52.723292 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf574e0c-536b-468f-a927-08fdab5f36ff" containerName="registry-server" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.723301 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf574e0c-536b-468f-a927-08fdab5f36ff" containerName="registry-server" Oct 11 07:53:52 crc kubenswrapper[5016]: E1011 07:53:52.723332 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf574e0c-536b-468f-a927-08fdab5f36ff" containerName="extract-content" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.723340 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf574e0c-536b-468f-a927-08fdab5f36ff" containerName="extract-content" Oct 11 07:53:52 crc kubenswrapper[5016]: E1011 07:53:52.723355 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf574e0c-536b-468f-a927-08fdab5f36ff" containerName="extract-utilities" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.723362 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf574e0c-536b-468f-a927-08fdab5f36ff" containerName="extract-utilities" Oct 11 07:53:52 crc kubenswrapper[5016]: E1011 07:53:52.723377 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1861edc2-2e2d-4ff1-b991-48723e355c31" containerName="registry-server" 
Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.723386 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="1861edc2-2e2d-4ff1-b991-48723e355c31" containerName="registry-server" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.723581 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="1861edc2-2e2d-4ff1-b991-48723e355c31" containerName="registry-server" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.723623 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf574e0c-536b-468f-a927-08fdab5f36ff" containerName="registry-server" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.724738 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4kwr" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.741584 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4kwr"] Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.764213 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/138ff19f-4af1-4d14-955a-6356b93cb6dd-utilities\") pod \"redhat-marketplace-p4kwr\" (UID: \"138ff19f-4af1-4d14-955a-6356b93cb6dd\") " pod="openshift-marketplace/redhat-marketplace-p4kwr" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.764264 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/138ff19f-4af1-4d14-955a-6356b93cb6dd-catalog-content\") pod \"redhat-marketplace-p4kwr\" (UID: \"138ff19f-4af1-4d14-955a-6356b93cb6dd\") " pod="openshift-marketplace/redhat-marketplace-p4kwr" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.764290 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkc5p\" (UniqueName: \"kubernetes.io/projected/138ff19f-4af1-4d14-955a-6356b93cb6dd-kube-api-access-dkc5p\") pod \"redhat-marketplace-p4kwr\" (UID: \"138ff19f-4af1-4d14-955a-6356b93cb6dd\") " pod="openshift-marketplace/redhat-marketplace-p4kwr" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.865199 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/138ff19f-4af1-4d14-955a-6356b93cb6dd-utilities\") pod \"redhat-marketplace-p4kwr\" (UID: \"138ff19f-4af1-4d14-955a-6356b93cb6dd\") " pod="openshift-marketplace/redhat-marketplace-p4kwr" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.865554 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/138ff19f-4af1-4d14-955a-6356b93cb6dd-catalog-content\") pod \"redhat-marketplace-p4kwr\" (UID: \"138ff19f-4af1-4d14-955a-6356b93cb6dd\") " pod="openshift-marketplace/redhat-marketplace-p4kwr" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.865579 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkc5p\" (UniqueName: \"kubernetes.io/projected/138ff19f-4af1-4d14-955a-6356b93cb6dd-kube-api-access-dkc5p\") pod \"redhat-marketplace-p4kwr\" (UID: \"138ff19f-4af1-4d14-955a-6356b93cb6dd\") " pod="openshift-marketplace/redhat-marketplace-p4kwr" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.865785 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/138ff19f-4af1-4d14-955a-6356b93cb6dd-utilities\") pod \"redhat-marketplace-p4kwr\" (UID: \"138ff19f-4af1-4d14-955a-6356b93cb6dd\") " pod="openshift-marketplace/redhat-marketplace-p4kwr" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.866029 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/138ff19f-4af1-4d14-955a-6356b93cb6dd-catalog-content\") pod \"redhat-marketplace-p4kwr\" (UID: \"138ff19f-4af1-4d14-955a-6356b93cb6dd\") " pod="openshift-marketplace/redhat-marketplace-p4kwr" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.887893 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkc5p\" (UniqueName: \"kubernetes.io/projected/138ff19f-4af1-4d14-955a-6356b93cb6dd-kube-api-access-dkc5p\") pod \"redhat-marketplace-p4kwr\" (UID: \"138ff19f-4af1-4d14-955a-6356b93cb6dd\") " pod="openshift-marketplace/redhat-marketplace-p4kwr" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.922707 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-hdcq6" event={"ID":"1877ae13-d74b-4a7c-9f26-10757d256474","Type":"ContainerStarted","Data":"64f29aea43a5c08925869b70fdfe40297976877c2a5d026012e8f8cb873f1c40"} Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.923864 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-hdcq6" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.939690 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-2mll5" event={"ID":"a4fb3be7-4bcb-4f6a-a1e8-619b47cbc411","Type":"ContainerStarted","Data":"996464df8b80abd05f68646be993438e05e3d670e0a61a57987d1977f4ba149b"} Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.940036 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-2mll5" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.943064 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-hdcq6" podStartSLOduration=5.337096089 podStartE2EDuration="17.943047372s" podCreationTimestamp="2025-10-11 07:53:35 +0000 UTC" firstStartedPulling="2025-10-11 07:53:37.669061346 +0000 UTC m=+805.569517282" lastFinishedPulling="2025-10-11 07:53:50.275012619 +0000 UTC m=+818.175468565" observedRunningTime="2025-10-11 07:53:52.939842604 +0000 UTC m=+820.840298550" watchObservedRunningTime="2025-10-11 07:53:52.943047372 +0000 UTC m=+820.843503318" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.960846 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-g5rms" event={"ID":"642e4a4e-69f3-4bb7-aa0d-55bb7809203a","Type":"ContainerStarted","Data":"b2ba59c4222a215e72f74c196c61605cbef0b8a5e33917b11192616121eca395"} Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.961349 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-g5rms" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.966913 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-2mll5" 
podStartSLOduration=4.9621536840000005 podStartE2EDuration="16.96689687s" podCreationTimestamp="2025-10-11 07:53:36 +0000 UTC" firstStartedPulling="2025-10-11 07:53:38.300476087 +0000 UTC m=+806.200932033" lastFinishedPulling="2025-10-11 07:53:50.305219263 +0000 UTC m=+818.205675219" observedRunningTime="2025-10-11 07:53:52.963750633 +0000 UTC m=+820.864206579" watchObservedRunningTime="2025-10-11 07:53:52.96689687 +0000 UTC m=+820.867352816" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.968733 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-zbqbd" event={"ID":"53845a5f-9403-4fc4-80b0-56a724bf5405","Type":"ContainerStarted","Data":"f5f648da07d07bc841f04d3df3c6823ae3f8ad3e8f375b1072a76f51ee70c726"} Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.968847 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-zbqbd" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.973408 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-rdhd7" event={"ID":"2f220420-4c7f-4f2b-a295-940d7e2f22da","Type":"ContainerStarted","Data":"44d0b4821f545d467eda37aac2c5506289239af2826bec4936e8a31864a434fa"} Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.973540 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-rdhd7" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.976317 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-27vlc" event={"ID":"3da7b7ce-1358-4f29-851c-a1a95f1d5a6f","Type":"ContainerStarted","Data":"77a9b117f0382c41db14e290ca6b62a3e35e9008edafd666a0e263e08770f46f"} Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.976472 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-27vlc" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.981262 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-g5rms" podStartSLOduration=5.116673018 podStartE2EDuration="17.981251425s" podCreationTimestamp="2025-10-11 07:53:35 +0000 UTC" firstStartedPulling="2025-10-11 07:53:37.413341444 +0000 UTC m=+805.313797390" lastFinishedPulling="2025-10-11 07:53:50.277919821 +0000 UTC m=+818.178375797" observedRunningTime="2025-10-11 07:53:52.979104481 +0000 UTC m=+820.879560427" watchObservedRunningTime="2025-10-11 07:53:52.981251425 +0000 UTC m=+820.881707371" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.983293 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-5tl82" event={"ID":"c2a90822-b5db-4dcd-9bb0-6e6fdc371a49","Type":"ContainerStarted","Data":"6857e77d7101cdb58dd4437f0c189234d62b237dbacdd352388c783e627fb061"} Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.983477 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-5tl82" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.987452 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5df598886f-2cwzg" 
event={"ID":"81fc8139-b3e6-4aa4-a2a3-3488428fdd67","Type":"ContainerStarted","Data":"b0d7c60d8b20c35ed00b431eedb94e8ac91383798036fdaf30eb43b539a4ff19"} Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.987910 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5df598886f-2cwzg" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.989919 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-5k87v" event={"ID":"ce574485-559e-47ce-82d5-df9228ee47e9","Type":"ContainerStarted","Data":"7352b81090512e5812b0e0f0e12a014c5cd54f19a3a1d43d3e2450c7315f2fbf"} Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.990287 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-5k87v" Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.998255 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-jvtkl" event={"ID":"683143f6-ebe0-47fb-b6c3-96680e673ff7","Type":"ContainerStarted","Data":"fec0c90eb08867295c4a8acf87110f0c704530092fb748ce5b19fe85510ab9c5"} Oct 11 07:53:52 crc kubenswrapper[5016]: I1011 07:53:52.998472 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-zbqbd" podStartSLOduration=5.147579069 podStartE2EDuration="17.998459008s" podCreationTimestamp="2025-10-11 07:53:35 +0000 UTC" firstStartedPulling="2025-10-11 07:53:37.424217382 +0000 UTC m=+805.324673328" lastFinishedPulling="2025-10-11 07:53:50.275097281 +0000 UTC m=+818.175553267" observedRunningTime="2025-10-11 07:53:52.994749237 +0000 UTC m=+820.895205183" watchObservedRunningTime="2025-10-11 07:53:52.998459008 +0000 UTC m=+820.898914954" Oct 11 07:53:53 crc kubenswrapper[5016]: I1011 07:53:53.020285 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-27vlc" podStartSLOduration=5.988211496 podStartE2EDuration="18.020267945s" podCreationTimestamp="2025-10-11 07:53:35 +0000 UTC" firstStartedPulling="2025-10-11 07:53:38.284971125 +0000 UTC m=+806.185427071" lastFinishedPulling="2025-10-11 07:53:50.317027554 +0000 UTC m=+818.217483520" observedRunningTime="2025-10-11 07:53:53.014662317 +0000 UTC m=+820.915118263" watchObservedRunningTime="2025-10-11 07:53:53.020267945 +0000 UTC m=+820.920723891" Oct 11 07:53:53 crc kubenswrapper[5016]: I1011 07:53:53.037672 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-rdhd7" podStartSLOduration=4.963339703 podStartE2EDuration="17.037638084s" podCreationTimestamp="2025-10-11 07:53:36 +0000 UTC" firstStartedPulling="2025-10-11 07:53:38.284525114 +0000 UTC m=+806.184981060" lastFinishedPulling="2025-10-11 07:53:50.358823475 +0000 UTC m=+818.259279441" observedRunningTime="2025-10-11 07:53:53.035604853 +0000 UTC m=+820.936060799" watchObservedRunningTime="2025-10-11 07:53:53.037638084 +0000 UTC m=+820.938094030" Oct 11 07:53:53 crc kubenswrapper[5016]: I1011 07:53:53.040978 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4kwr"
Oct 11 07:53:53 crc kubenswrapper[5016]: I1011 07:53:53.056934 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5df598886f-2cwzg" podStartSLOduration=5.349168608 podStartE2EDuration="18.056913329s" podCreationTimestamp="2025-10-11 07:53:35 +0000 UTC" firstStartedPulling="2025-10-11 07:53:37.663000617 +0000 UTC m=+805.563456553" lastFinishedPulling="2025-10-11 07:53:50.370745318 +0000 UTC m=+818.271201274" observedRunningTime="2025-10-11 07:53:53.055236977 +0000 UTC m=+820.955692923" watchObservedRunningTime="2025-10-11 07:53:53.056913329 +0000 UTC m=+820.957369275"
Oct 11 07:53:53 crc kubenswrapper[5016]: I1011 07:53:53.072604 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-jvtkl" podStartSLOduration=5.416281641 podStartE2EDuration="18.072588545s" podCreationTimestamp="2025-10-11 07:53:35 +0000 UTC" firstStartedPulling="2025-10-11 07:53:37.659297045 +0000 UTC m=+805.559752991" lastFinishedPulling="2025-10-11 07:53:50.315603939 +0000 UTC m=+818.216059895" observedRunningTime="2025-10-11 07:53:53.071896378 +0000 UTC m=+820.972352324" watchObservedRunningTime="2025-10-11 07:53:53.072588545 +0000 UTC m=+820.973044491"
Oct 11 07:53:53 crc kubenswrapper[5016]: I1011 07:53:53.089174 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-5tl82" podStartSLOduration=4.93071705 podStartE2EDuration="17.089155814s" podCreationTimestamp="2025-10-11 07:53:36 +0000 UTC" firstStartedPulling="2025-10-11 07:53:38.200317499 +0000 UTC m=+806.100773435" lastFinishedPulling="2025-10-11 07:53:50.358756253 +0000 UTC m=+818.259212199" observedRunningTime="2025-10-11 07:53:53.085458462 +0000 UTC m=+820.985914408" watchObservedRunningTime="2025-10-11 07:53:53.089155814 +0000 UTC m=+820.989611750"
Oct 11 07:53:53 crc kubenswrapper[5016]: I1011 07:53:53.105246 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-5k87v" podStartSLOduration=5.243969485 podStartE2EDuration="18.105207529s" podCreationTimestamp="2025-10-11 07:53:35 +0000 UTC" firstStartedPulling="2025-10-11 07:53:37.416586104 +0000 UTC m=+805.317042050" lastFinishedPulling="2025-10-11 07:53:50.277824148 +0000 UTC m=+818.178280094" observedRunningTime="2025-10-11 07:53:53.104914142 +0000 UTC m=+821.005370088" watchObservedRunningTime="2025-10-11 07:53:53.105207529 +0000 UTC m=+821.005663475"
Oct 11 07:53:53 crc kubenswrapper[5016]: I1011 07:53:53.543499 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4kwr"]
Oct 11 07:53:54 crc kubenswrapper[5016]: I1011 07:53:54.007090 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-jvtkl"
Oct 11 07:53:55 crc kubenswrapper[5016]: I1011 07:53:55.018069 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4kwr" event={"ID":"138ff19f-4af1-4d14-955a-6356b93cb6dd","Type":"ContainerStarted","Data":"f698e9e7e45fcb72a164e72bc5ad5d6c51a3d799bdc9c6b43e5d8f7752928970"}
Oct 11 07:53:55 crc kubenswrapper[5016]: I1011 07:53:55.021104 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-658bdf4b74-5k87v"
Oct 11 07:53:55 crc kubenswrapper[5016]: I1011 07:53:55.025843 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-84b9b84486-jvtkl"
Oct 11 07:53:55 crc kubenswrapper[5016]: I1011 07:53:55.028548 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-7ffbcb7588-hdcq6"
Oct 11 07:53:55 crc kubenswrapper[5016]: I1011 07:53:55.028603 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7b7fb68549-g5rms"
Oct 11 07:53:56 crc kubenswrapper[5016]: I1011 07:53:56.157809 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-85d5d9dd78-8cjvz"
Oct 11 07:53:56 crc kubenswrapper[5016]: I1011 07:53:56.226012 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-858f76bbdd-zbqbd"
Oct 11 07:53:56 crc kubenswrapper[5016]: I1011 07:53:56.354958 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5f67fbc655-4t8kd"
Oct 11 07:53:56 crc kubenswrapper[5016]: I1011 07:53:56.418220 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5df598886f-2cwzg"
Oct 11 07:53:56 crc kubenswrapper[5016]: I1011 07:53:56.581340 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69fdcfc5f5-2mll5"
Oct 11 07:53:56 crc kubenswrapper[5016]: I1011 07:53:56.606430 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-79df5fb58c-pgsds"
Oct 11 07:53:56 crc kubenswrapper[5016]: I1011 07:53:56.693637 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-79d585cb66-27vlc"
Oct 11 07:53:56 crc kubenswrapper[5016]: I1011 07:53:56.815136 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-67cfc6749b-rdhd7"
Oct 11 07:53:56 crc kubenswrapper[5016]: I1011 07:53:56.850132 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-7f554bff7b-5tl82"
Oct 11 07:53:57 crc kubenswrapper[5016]: I1011 07:53:57.253434 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5956dffb7br27k2"
Oct 11 07:53:58 crc kubenswrapper[5016]: I1011 07:53:58.044993 5016 generic.go:334] "Generic (PLEG): container finished" podID="138ff19f-4af1-4d14-955a-6356b93cb6dd" containerID="77f10bb24aa9923a60c854b15499768a7281247f2211811579a5b72c3f0b579b" exitCode=0
Oct 11 07:53:58 crc kubenswrapper[5016]: I1011 07:53:58.045127 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4kwr" event={"ID":"138ff19f-4af1-4d14-955a-6356b93cb6dd","Type":"ContainerDied","Data":"77f10bb24aa9923a60c854b15499768a7281247f2211811579a5b72c3f0b579b"}
Oct 11 07:53:58 crc kubenswrapper[5016]: I1011 07:53:58.048576 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-7nrt2" event={"ID":"971b1fdb-ddf2-4662-b77f-e3b55ac12de7","Type":"ContainerStarted","Data":"1d1669f693f0c2d6f71658061ae6d21edfaa36f6624733b39baa94795bdf39e9"}
Oct 11 07:53:58 crc kubenswrapper[5016]: I1011 07:53:58.049221 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-7nrt2"
Oct 11 07:53:58 crc kubenswrapper[5016]: I1011 07:53:58.053765 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-d5p22" event={"ID":"b53501aa-b72c-457d-ad20-1f57abd81645","Type":"ContainerStarted","Data":"900407fd12ae4610e2e9a918506aca40f468f67d801885d857404d8cbff0d3e5"}
Oct 11 07:53:58 crc kubenswrapper[5016]: I1011 07:53:58.053977 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-d5p22"
Oct 11 07:53:58 crc kubenswrapper[5016]: I1011 07:53:58.056567 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-4h6v7" event={"ID":"314a9915-c5b2-45c6-ad73-17bcf42d80cc","Type":"ContainerStarted","Data":"95d21fe5d4845ded55e0db1436fc7425a8bfe3ff448ca79928f6fcd6dee2e1ee"}
Oct 11 07:53:58 crc kubenswrapper[5016]: I1011 07:53:58.057076 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-4h6v7"
Oct 11 07:53:58 crc kubenswrapper[5016]: I1011 07:53:58.060242 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-lcm96" event={"ID":"e2954d6f-57ba-49c4-ac53-7aa4600cf1b2","Type":"ContainerStarted","Data":"b078ed1851412dcbf954fe07ee553471fccba8375e04a8950ada5ee69f499365"}
Oct 11 07:53:58 crc kubenswrapper[5016]: I1011 07:53:58.060407 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-lcm96"
Oct 11 07:53:58 crc kubenswrapper[5016]: I1011 07:53:58.063573 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5458f77c4-tdbg9" event={"ID":"cfabcb8e-bad0-4179-81d3-0d6c2a874793","Type":"ContainerStarted","Data":"51109f4886466033ee08e292dfe36c936c3eec6948175c8eea6d009444df85e4"}
Oct 11 07:53:58 crc kubenswrapper[5016]: I1011 07:53:58.064122 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5458f77c4-tdbg9"
Oct 11 07:53:58 crc kubenswrapper[5016]: I1011 07:53:58.066336 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-q4rcq" event={"ID":"f36d2ba0-eaa2-48d4-8367-3b718a86b54a","Type":"ContainerStarted","Data":"39830542c90f20cba0429f64605fdb3f21bc932639b3e0e14c9d846bce60d080"}
Oct 11 07:53:58 crc kubenswrapper[5016]: I1011 07:53:58.066441 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-q4rcq"
Oct 11 07:53:58 crc kubenswrapper[5016]: I1011 07:53:58.073137 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-txvhv" event={"ID":"c146e268-2093-47e5-aaa1-824de389d97a","Type":"ContainerStarted","Data":"e30c4935c946ab0131db8d2b17319f4ba7eb1dd154c2be63e97325e3f0020789"}
Oct 11 07:53:58 crc kubenswrapper[5016]: I1011 07:53:58.073635 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-txvhv"
Oct 11 07:53:58 crc kubenswrapper[5016]: I1011 07:53:58.098823 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-4h6v7" podStartSLOduration=3.625194795 podStartE2EDuration="22.098799596s" podCreationTimestamp="2025-10-11 07:53:36 +0000 UTC" firstStartedPulling="2025-10-11 07:53:38.310555196 +0000 UTC m=+806.211011142" lastFinishedPulling="2025-10-11 07:53:56.784159997 +0000 UTC m=+824.684615943" observedRunningTime="2025-10-11 07:53:58.090475931 +0000 UTC m=+825.990931907" watchObservedRunningTime="2025-10-11 07:53:58.098799596 +0000 UTC m=+825.999255582"
Oct 11 07:53:58 crc kubenswrapper[5016]: I1011 07:53:58.114866 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-d5p22" podStartSLOduration=4.687769017 podStartE2EDuration="23.114840091s" podCreationTimestamp="2025-10-11 07:53:35 +0000 UTC" firstStartedPulling="2025-10-11 07:53:38.329994964 +0000 UTC m=+806.230450910" lastFinishedPulling="2025-10-11 07:53:56.757066028 +0000 UTC m=+824.657521984" observedRunningTime="2025-10-11 07:53:58.104694601 +0000 UTC m=+826.005150587" watchObservedRunningTime="2025-10-11 07:53:58.114840091 +0000 UTC m=+826.015296067"
Oct 11 07:53:58 crc kubenswrapper[5016]: I1011 07:53:58.123917 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-lcm96" podStartSLOduration=4.684124077 podStartE2EDuration="23.123902555s" podCreationTimestamp="2025-10-11 07:53:35 +0000 UTC" firstStartedPulling="2025-10-11 07:53:38.317291221 +0000 UTC m=+806.217747167" lastFinishedPulling="2025-10-11 07:53:56.757069699 +0000 UTC m=+824.657525645" observedRunningTime="2025-10-11 07:53:58.119975978 +0000 UTC m=+826.020431934" watchObservedRunningTime="2025-10-11 07:53:58.123902555 +0000 UTC m=+826.024358501"
Oct 11 07:53:58 crc kubenswrapper[5016]: I1011 07:53:58.142995 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-q4rcq" podStartSLOduration=4.717173681 podStartE2EDuration="23.142968954s" podCreationTimestamp="2025-10-11 07:53:35 +0000 UTC" firstStartedPulling="2025-10-11 07:53:38.330085496 +0000 UTC m=+806.230541442" lastFinishedPulling="2025-10-11 07:53:56.755880749 +0000 UTC m=+824.656336715" observedRunningTime="2025-10-11 07:53:58.140052832 +0000 UTC m=+826.040508798" watchObservedRunningTime="2025-10-11 07:53:58.142968954 +0000 UTC m=+826.043424910"
Oct 11 07:53:58 crc kubenswrapper[5016]: I1011 07:53:58.164174 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-txvhv" podStartSLOduration=4.724325577 podStartE2EDuration="23.164152566s" podCreationTimestamp="2025-10-11 07:53:35 +0000 UTC" firstStartedPulling="2025-10-11 07:53:38.316163604 +0000 UTC m=+806.216619550" lastFinishedPulling="2025-10-11 07:53:56.755990593 +0000 UTC m=+824.656446539" observedRunningTime="2025-10-11 07:53:58.157127763 +0000 UTC m=+826.057583739" watchObservedRunningTime="2025-10-11 07:53:58.164152566 +0000 UTC m=+826.064608522"
Oct 11 07:53:58 crc kubenswrapper[5016]: I1011 07:53:58.181325 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5458f77c4-tdbg9" podStartSLOduration=3.738315242 podStartE2EDuration="22.181306179s" podCreationTimestamp="2025-10-11 07:53:36 +0000 UTC" firstStartedPulling="2025-10-11 07:53:38.312035252 +0000 UTC m=+806.212491198" lastFinishedPulling="2025-10-11 07:53:56.755026189 +0000 UTC m=+824.655482135" observedRunningTime="2025-10-11 07:53:58.175437004 +0000 UTC m=+826.075892980" watchObservedRunningTime="2025-10-11 07:53:58.181306179 +0000 UTC m=+826.081762135"
Oct 11 07:53:58 crc kubenswrapper[5016]: I1011 07:53:58.193345 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-7nrt2" podStartSLOduration=3.769170033 podStartE2EDuration="22.193324775s" podCreationTimestamp="2025-10-11 07:53:36 +0000 UTC" firstStartedPulling="2025-10-11 07:53:38.310686929 +0000 UTC m=+806.211142875" lastFinishedPulling="2025-10-11 07:53:56.734841681 +0000 UTC m=+824.635297617" observedRunningTime="2025-10-11 07:53:58.189017409 +0000 UTC m=+826.089473375" watchObservedRunningTime="2025-10-11 07:53:58.193324775 +0000 UTC m=+826.093780721"
Oct 11 07:53:59 crc kubenswrapper[5016]: I1011 07:53:59.088354 5016 generic.go:334] "Generic (PLEG): container finished" podID="138ff19f-4af1-4d14-955a-6356b93cb6dd" containerID="b92ba18da7a516d45fbc372f04a0e98e1636dc207b25c45ecc9fd7b3249a3f82" exitCode=0
Oct 11 07:53:59 crc kubenswrapper[5016]: I1011 07:53:59.088533 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4kwr" event={"ID":"138ff19f-4af1-4d14-955a-6356b93cb6dd","Type":"ContainerDied","Data":"b92ba18da7a516d45fbc372f04a0e98e1636dc207b25c45ecc9fd7b3249a3f82"}
Oct 11 07:54:00 crc kubenswrapper[5016]: I1011 07:54:00.106572 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4kwr" event={"ID":"138ff19f-4af1-4d14-955a-6356b93cb6dd","Type":"ContainerStarted","Data":"22a656add73c40b3a3cd843db9c1658e4d0ac2cc59940b37346e668d96438b33"}
Oct 11 07:54:00 crc kubenswrapper[5016]: I1011 07:54:00.124385 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p4kwr" podStartSLOduration=6.569109265 podStartE2EDuration="8.124367025s" podCreationTimestamp="2025-10-11 07:53:52 +0000 UTC" firstStartedPulling="2025-10-11 07:53:58.047739697 +0000 UTC m=+825.948195653" lastFinishedPulling="2025-10-11 07:53:59.602997427 +0000 UTC m=+827.503453413" observedRunningTime="2025-10-11 07:54:00.123608137 +0000 UTC m=+828.024064083" watchObservedRunningTime="2025-10-11 07:54:00.124367025 +0000 UTC m=+828.024822981"
Oct 11 07:54:03 crc kubenswrapper[5016]: I1011 07:54:03.041250 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p4kwr"
Oct 11 07:54:03 crc kubenswrapper[5016]: I1011 07:54:03.041781 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p4kwr"
Oct 11 07:54:03 crc kubenswrapper[5016]: I1011 07:54:03.109014 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p4kwr"
Oct 11 07:54:06 crc kubenswrapper[5016]: I1011 07:54:06.579997 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-656bcbd775-q4rcq"
Oct 11 07:54:06 crc kubenswrapper[5016]: I1011 07:54:06.592281 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-9c5c78d49-d5p22"
Oct 11 07:54:06 crc kubenswrapper[5016]: I1011 07:54:06.595240 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-55b6b7c7b8-lcm96"
Oct 11 07:54:06 crc kubenswrapper[5016]: I1011 07:54:06.688787 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-f9fb45f8f-txvhv"
Oct 11 07:54:06 crc kubenswrapper[5016]: I1011 07:54:06.769603 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-db6d7f97b-4h6v7"
Oct 11 07:54:06 crc kubenswrapper[5016]: I1011 07:54:06.779357 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-68b6c87b68-7nrt2"
Oct 11 07:54:06 crc kubenswrapper[5016]: I1011 07:54:06.831791 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5458f77c4-tdbg9"
Oct 11 07:54:13 crc kubenswrapper[5016]: I1011 07:54:13.097926 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p4kwr"
Oct 11 07:54:13 crc kubenswrapper[5016]: I1011 07:54:13.152026 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4kwr"]
Oct 11 07:54:13 crc kubenswrapper[5016]: I1011 07:54:13.216697 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p4kwr" podUID="138ff19f-4af1-4d14-955a-6356b93cb6dd" containerName="registry-server" containerID="cri-o://22a656add73c40b3a3cd843db9c1658e4d0ac2cc59940b37346e668d96438b33" gracePeriod=2
Oct 11 07:54:14 crc kubenswrapper[5016]: I1011 07:54:14.227204 5016 generic.go:334] "Generic (PLEG): container finished" podID="138ff19f-4af1-4d14-955a-6356b93cb6dd" containerID="22a656add73c40b3a3cd843db9c1658e4d0ac2cc59940b37346e668d96438b33" exitCode=0
Oct 11 07:54:14 crc kubenswrapper[5016]: I1011 07:54:14.227251 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4kwr" event={"ID":"138ff19f-4af1-4d14-955a-6356b93cb6dd","Type":"ContainerDied","Data":"22a656add73c40b3a3cd843db9c1658e4d0ac2cc59940b37346e668d96438b33"}
Oct 11 07:54:17 crc kubenswrapper[5016]: I1011 07:54:17.946708 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4kwr"
Oct 11 07:54:18 crc kubenswrapper[5016]: I1011 07:54:18.103817 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkc5p\" (UniqueName: \"kubernetes.io/projected/138ff19f-4af1-4d14-955a-6356b93cb6dd-kube-api-access-dkc5p\") pod \"138ff19f-4af1-4d14-955a-6356b93cb6dd\" (UID: \"138ff19f-4af1-4d14-955a-6356b93cb6dd\") "
Oct 11 07:54:18 crc kubenswrapper[5016]: I1011 07:54:18.103893 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/138ff19f-4af1-4d14-955a-6356b93cb6dd-utilities\") pod \"138ff19f-4af1-4d14-955a-6356b93cb6dd\" (UID: \"138ff19f-4af1-4d14-955a-6356b93cb6dd\") "
Oct 11 07:54:18 crc kubenswrapper[5016]: I1011 07:54:18.103949 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/138ff19f-4af1-4d14-955a-6356b93cb6dd-catalog-content\") pod \"138ff19f-4af1-4d14-955a-6356b93cb6dd\" (UID: \"138ff19f-4af1-4d14-955a-6356b93cb6dd\") "
Oct 11 07:54:18 crc kubenswrapper[5016]: I1011 07:54:18.104680 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/138ff19f-4af1-4d14-955a-6356b93cb6dd-utilities" (OuterVolumeSpecName: "utilities") pod "138ff19f-4af1-4d14-955a-6356b93cb6dd" (UID: "138ff19f-4af1-4d14-955a-6356b93cb6dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 07:54:18 crc kubenswrapper[5016]: I1011 07:54:18.109911 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/138ff19f-4af1-4d14-955a-6356b93cb6dd-kube-api-access-dkc5p" (OuterVolumeSpecName: "kube-api-access-dkc5p") pod "138ff19f-4af1-4d14-955a-6356b93cb6dd" (UID: "138ff19f-4af1-4d14-955a-6356b93cb6dd"). InnerVolumeSpecName "kube-api-access-dkc5p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 07:54:18 crc kubenswrapper[5016]: I1011 07:54:18.128882 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/138ff19f-4af1-4d14-955a-6356b93cb6dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "138ff19f-4af1-4d14-955a-6356b93cb6dd" (UID: "138ff19f-4af1-4d14-955a-6356b93cb6dd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 07:54:18 crc kubenswrapper[5016]: I1011 07:54:18.205137 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/138ff19f-4af1-4d14-955a-6356b93cb6dd-utilities\") on node \"crc\" DevicePath \"\""
Oct 11 07:54:18 crc kubenswrapper[5016]: I1011 07:54:18.205391 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/138ff19f-4af1-4d14-955a-6356b93cb6dd-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 11 07:54:18 crc kubenswrapper[5016]: I1011 07:54:18.205404 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkc5p\" (UniqueName: \"kubernetes.io/projected/138ff19f-4af1-4d14-955a-6356b93cb6dd-kube-api-access-dkc5p\") on node \"crc\" DevicePath \"\""
Oct 11 07:54:18 crc kubenswrapper[5016]: I1011 07:54:18.257023 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4kwr" event={"ID":"138ff19f-4af1-4d14-955a-6356b93cb6dd","Type":"ContainerDied","Data":"f698e9e7e45fcb72a164e72bc5ad5d6c51a3d799bdc9c6b43e5d8f7752928970"}
Oct 11 07:54:18 crc kubenswrapper[5016]: I1011 07:54:18.257092 5016 scope.go:117] "RemoveContainer" containerID="22a656add73c40b3a3cd843db9c1658e4d0ac2cc59940b37346e668d96438b33"
Oct 11 07:54:18 crc kubenswrapper[5016]: I1011 07:54:18.257139 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4kwr"
Oct 11 07:54:18 crc kubenswrapper[5016]: I1011 07:54:18.275634 5016 scope.go:117] "RemoveContainer" containerID="b92ba18da7a516d45fbc372f04a0e98e1636dc207b25c45ecc9fd7b3249a3f82"
Oct 11 07:54:18 crc kubenswrapper[5016]: I1011 07:54:18.296026 5016 scope.go:117] "RemoveContainer" containerID="77f10bb24aa9923a60c854b15499768a7281247f2211811579a5b72c3f0b579b"
Oct 11 07:54:18 crc kubenswrapper[5016]: I1011 07:54:18.296916 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4kwr"]
Oct 11 07:54:18 crc kubenswrapper[5016]: I1011 07:54:18.302519 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4kwr"]
Oct 11 07:54:19 crc kubenswrapper[5016]: I1011 07:54:19.147147 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="138ff19f-4af1-4d14-955a-6356b93cb6dd" path="/var/lib/kubelet/pods/138ff19f-4af1-4d14-955a-6356b93cb6dd/volumes"
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.522256 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bfcb9d745-9nf4f"]
Oct 11 07:54:22 crc kubenswrapper[5016]: E1011 07:54:22.522896 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="138ff19f-4af1-4d14-955a-6356b93cb6dd" containerName="extract-content"
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.522912 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="138ff19f-4af1-4d14-955a-6356b93cb6dd" containerName="extract-content"
Oct 11 07:54:22 crc kubenswrapper[5016]: E1011 07:54:22.522928 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="138ff19f-4af1-4d14-955a-6356b93cb6dd" containerName="extract-utilities"
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.522935 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="138ff19f-4af1-4d14-955a-6356b93cb6dd" containerName="extract-utilities"
Oct 11 07:54:22 crc kubenswrapper[5016]: E1011 07:54:22.522956 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="138ff19f-4af1-4d14-955a-6356b93cb6dd" containerName="registry-server"
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.522963 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="138ff19f-4af1-4d14-955a-6356b93cb6dd" containerName="registry-server"
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.523117 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="138ff19f-4af1-4d14-955a-6356b93cb6dd" containerName="registry-server"
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.523965 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bfcb9d745-9nf4f"
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.526807 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.527136 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.527418 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.527515 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-5dpxq"
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.533544 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bfcb9d745-9nf4f"]
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.581796 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45kq2\" (UniqueName: \"kubernetes.io/projected/94aa76b7-b531-4632-b315-ce40f9a54e06-kube-api-access-45kq2\") pod \"dnsmasq-dns-7bfcb9d745-9nf4f\" (UID: \"94aa76b7-b531-4632-b315-ce40f9a54e06\") " pod="openstack/dnsmasq-dns-7bfcb9d745-9nf4f"
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.581886 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94aa76b7-b531-4632-b315-ce40f9a54e06-config\") pod \"dnsmasq-dns-7bfcb9d745-9nf4f\" (UID: \"94aa76b7-b531-4632-b315-ce40f9a54e06\") " pod="openstack/dnsmasq-dns-7bfcb9d745-9nf4f"
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.613872 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-758b79db4c-9b5sb"]
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.615479 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-758b79db4c-9b5sb"
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.618839 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.621504 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-758b79db4c-9b5sb"]
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.682928 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/089d898a-11ca-4986-8dee-2efa6b4dd050-dns-svc\") pod \"dnsmasq-dns-758b79db4c-9b5sb\" (UID: \"089d898a-11ca-4986-8dee-2efa6b4dd050\") " pod="openstack/dnsmasq-dns-758b79db4c-9b5sb"
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.682993 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45kq2\" (UniqueName: \"kubernetes.io/projected/94aa76b7-b531-4632-b315-ce40f9a54e06-kube-api-access-45kq2\") pod \"dnsmasq-dns-7bfcb9d745-9nf4f\" (UID: \"94aa76b7-b531-4632-b315-ce40f9a54e06\") " pod="openstack/dnsmasq-dns-7bfcb9d745-9nf4f"
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.683044 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/089d898a-11ca-4986-8dee-2efa6b4dd050-config\") pod \"dnsmasq-dns-758b79db4c-9b5sb\" (UID: \"089d898a-11ca-4986-8dee-2efa6b4dd050\") " pod="openstack/dnsmasq-dns-758b79db4c-9b5sb"
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.683065 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gszlm\" (UniqueName: \"kubernetes.io/projected/089d898a-11ca-4986-8dee-2efa6b4dd050-kube-api-access-gszlm\") pod \"dnsmasq-dns-758b79db4c-9b5sb\" (UID: \"089d898a-11ca-4986-8dee-2efa6b4dd050\") " pod="openstack/dnsmasq-dns-758b79db4c-9b5sb"
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.683087 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94aa76b7-b531-4632-b315-ce40f9a54e06-config\") pod \"dnsmasq-dns-7bfcb9d745-9nf4f\" (UID: \"94aa76b7-b531-4632-b315-ce40f9a54e06\") " pod="openstack/dnsmasq-dns-7bfcb9d745-9nf4f"
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.683951 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94aa76b7-b531-4632-b315-ce40f9a54e06-config\") pod \"dnsmasq-dns-7bfcb9d745-9nf4f\" (UID: \"94aa76b7-b531-4632-b315-ce40f9a54e06\") " pod="openstack/dnsmasq-dns-7bfcb9d745-9nf4f"
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.706713 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45kq2\" (UniqueName: \"kubernetes.io/projected/94aa76b7-b531-4632-b315-ce40f9a54e06-kube-api-access-45kq2\") pod \"dnsmasq-dns-7bfcb9d745-9nf4f\" (UID: \"94aa76b7-b531-4632-b315-ce40f9a54e06\") " pod="openstack/dnsmasq-dns-7bfcb9d745-9nf4f"
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.785150 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/089d898a-11ca-4986-8dee-2efa6b4dd050-dns-svc\") pod \"dnsmasq-dns-758b79db4c-9b5sb\" (UID: \"089d898a-11ca-4986-8dee-2efa6b4dd050\") " pod="openstack/dnsmasq-dns-758b79db4c-9b5sb"
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.786044 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/089d898a-11ca-4986-8dee-2efa6b4dd050-dns-svc\") pod \"dnsmasq-dns-758b79db4c-9b5sb\" (UID: \"089d898a-11ca-4986-8dee-2efa6b4dd050\") " pod="openstack/dnsmasq-dns-758b79db4c-9b5sb"
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.786338 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/089d898a-11ca-4986-8dee-2efa6b4dd050-config\") pod \"dnsmasq-dns-758b79db4c-9b5sb\" (UID: \"089d898a-11ca-4986-8dee-2efa6b4dd050\") " pod="openstack/dnsmasq-dns-758b79db4c-9b5sb"
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.786450 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gszlm\" (UniqueName: \"kubernetes.io/projected/089d898a-11ca-4986-8dee-2efa6b4dd050-kube-api-access-gszlm\") pod \"dnsmasq-dns-758b79db4c-9b5sb\" (UID: \"089d898a-11ca-4986-8dee-2efa6b4dd050\") " pod="openstack/dnsmasq-dns-758b79db4c-9b5sb"
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.787052 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/089d898a-11ca-4986-8dee-2efa6b4dd050-config\") pod \"dnsmasq-dns-758b79db4c-9b5sb\" (UID: \"089d898a-11ca-4986-8dee-2efa6b4dd050\") " pod="openstack/dnsmasq-dns-758b79db4c-9b5sb"
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.801594 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gszlm\" (UniqueName: \"kubernetes.io/projected/089d898a-11ca-4986-8dee-2efa6b4dd050-kube-api-access-gszlm\") pod \"dnsmasq-dns-758b79db4c-9b5sb\" (UID: \"089d898a-11ca-4986-8dee-2efa6b4dd050\") " pod="openstack/dnsmasq-dns-758b79db4c-9b5sb"
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.904823 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bfcb9d745-9nf4f"
Oct 11 07:54:22 crc kubenswrapper[5016]: I1011 07:54:22.932757 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-758b79db4c-9b5sb"
Oct 11 07:54:23 crc kubenswrapper[5016]: I1011 07:54:23.333195 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bfcb9d745-9nf4f"]
Oct 11 07:54:23 crc kubenswrapper[5016]: W1011 07:54:23.338685 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94aa76b7_b531_4632_b315_ce40f9a54e06.slice/crio-89d629322a42900dc258666ed4a88a69e5548ff3a58b698606af795694b7cbea WatchSource:0}: Error finding container 89d629322a42900dc258666ed4a88a69e5548ff3a58b698606af795694b7cbea: Status 404 returned error can't find the container with id 89d629322a42900dc258666ed4a88a69e5548ff3a58b698606af795694b7cbea
Oct 11 07:54:23 crc kubenswrapper[5016]: I1011 07:54:23.434636 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-758b79db4c-9b5sb"]
Oct 11 07:54:23 crc kubenswrapper[5016]: W1011 07:54:23.438965 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod089d898a_11ca_4986_8dee_2efa6b4dd050.slice/crio-19505dc4dfd53ce1bf0331c26fb66e5de3d2c33ccb599d1dcd55a94d68f6482c WatchSource:0}: Error finding container 19505dc4dfd53ce1bf0331c26fb66e5de3d2c33ccb599d1dcd55a94d68f6482c: Status 404 returned error can't find the container with id 19505dc4dfd53ce1bf0331c26fb66e5de3d2c33ccb599d1dcd55a94d68f6482c
Oct 11 07:54:24 crc kubenswrapper[5016]: I1011 07:54:24.313133 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-758b79db4c-9b5sb" event={"ID":"089d898a-11ca-4986-8dee-2efa6b4dd050","Type":"ContainerStarted","Data":"19505dc4dfd53ce1bf0331c26fb66e5de3d2c33ccb599d1dcd55a94d68f6482c"}
Oct 11 07:54:24 crc kubenswrapper[5016]: I1011 07:54:24.317806 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bfcb9d745-9nf4f" event={"ID":"94aa76b7-b531-4632-b315-ce40f9a54e06","Type":"ContainerStarted","Data":"89d629322a42900dc258666ed4a88a69e5548ff3a58b698606af795694b7cbea"}
Oct 11 07:54:25 crc kubenswrapper[5016]: I1011 07:54:25.713035 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-758b79db4c-9b5sb"]
Oct 11 07:54:25 crc kubenswrapper[5016]: I1011 07:54:25.738849 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-644597f84c-cd8jm"]
Oct 11 07:54:25 crc kubenswrapper[5016]: I1011 07:54:25.740037 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-644597f84c-cd8jm"
Oct 11 07:54:25 crc kubenswrapper[5016]: I1011 07:54:25.750086 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-644597f84c-cd8jm"]
Oct 11 07:54:25 crc kubenswrapper[5016]: I1011 07:54:25.830153 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx5mv\" (UniqueName: \"kubernetes.io/projected/8c1afbf6-5654-4f6f-b4c3-70fc041b803a-kube-api-access-jx5mv\") pod \"dnsmasq-dns-644597f84c-cd8jm\" (UID: \"8c1afbf6-5654-4f6f-b4c3-70fc041b803a\") " pod="openstack/dnsmasq-dns-644597f84c-cd8jm"
Oct 11 07:54:25 crc kubenswrapper[5016]: I1011 07:54:25.830224 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1afbf6-5654-4f6f-b4c3-70fc041b803a-config\") pod \"dnsmasq-dns-644597f84c-cd8jm\" (UID: \"8c1afbf6-5654-4f6f-b4c3-70fc041b803a\") " pod="openstack/dnsmasq-dns-644597f84c-cd8jm"
Oct 11 07:54:25 crc kubenswrapper[5016]: I1011 07:54:25.830322 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c1afbf6-5654-4f6f-b4c3-70fc041b803a-dns-svc\") pod \"dnsmasq-dns-644597f84c-cd8jm\" (UID: \"8c1afbf6-5654-4f6f-b4c3-70fc041b803a\") " pod="openstack/dnsmasq-dns-644597f84c-cd8jm"
Oct 11 07:54:25 crc kubenswrapper[5016]: I1011 07:54:25.931259 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx5mv\" (UniqueName: \"kubernetes.io/projected/8c1afbf6-5654-4f6f-b4c3-70fc041b803a-kube-api-access-jx5mv\") pod \"dnsmasq-dns-644597f84c-cd8jm\" (UID: \"8c1afbf6-5654-4f6f-b4c3-70fc041b803a\") " pod="openstack/dnsmasq-dns-644597f84c-cd8jm"
Oct 11 07:54:25 crc kubenswrapper[5016]: I1011 07:54:25.931340 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1afbf6-5654-4f6f-b4c3-70fc041b803a-config\") pod \"dnsmasq-dns-644597f84c-cd8jm\" (UID: \"8c1afbf6-5654-4f6f-b4c3-70fc041b803a\") " pod="openstack/dnsmasq-dns-644597f84c-cd8jm"
Oct 11 07:54:25 crc kubenswrapper[5016]: I1011 07:54:25.931388 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c1afbf6-5654-4f6f-b4c3-70fc041b803a-dns-svc\") pod \"dnsmasq-dns-644597f84c-cd8jm\" (UID: \"8c1afbf6-5654-4f6f-b4c3-70fc041b803a\") " pod="openstack/dnsmasq-dns-644597f84c-cd8jm"
Oct 11 07:54:25 crc kubenswrapper[5016]: I1011 07:54:25.932525 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c1afbf6-5654-4f6f-b4c3-70fc041b803a-dns-svc\") pod \"dnsmasq-dns-644597f84c-cd8jm\" (UID: \"8c1afbf6-5654-4f6f-b4c3-70fc041b803a\") " pod="openstack/dnsmasq-dns-644597f84c-cd8jm"
Oct 11 07:54:25 crc kubenswrapper[5016]: I1011 07:54:25.933133 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1afbf6-5654-4f6f-b4c3-70fc041b803a-config\") pod \"dnsmasq-dns-644597f84c-cd8jm\" (UID: \"8c1afbf6-5654-4f6f-b4c3-70fc041b803a\") " pod="openstack/dnsmasq-dns-644597f84c-cd8jm"
Oct 11 07:54:25 crc kubenswrapper[5016]: I1011 07:54:25.963110 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx5mv\" (UniqueName: \"kubernetes.io/projected/8c1afbf6-5654-4f6f-b4c3-70fc041b803a-kube-api-access-jx5mv\") pod \"dnsmasq-dns-644597f84c-cd8jm\" (UID: \"8c1afbf6-5654-4f6f-b4c3-70fc041b803a\") " pod="openstack/dnsmasq-dns-644597f84c-cd8jm"
Oct 11 07:54:26 crc kubenswrapper[5016]: I1011 07:54:26.066094 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bfcb9d745-9nf4f"]
Oct 11 07:54:26 crc kubenswrapper[5016]: I1011 07:54:26.099368 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77597f887-m66hn"]
Oct 11 07:54:26 crc kubenswrapper[5016]: I1011 07:54:26.104358 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77597f887-m66hn"
Oct 11 07:54:26 crc kubenswrapper[5016]: I1011 07:54:26.108301 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-644597f84c-cd8jm"
Oct 11 07:54:26 crc kubenswrapper[5016]: I1011 07:54:26.119476 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77597f887-m66hn"]
Oct 11 07:54:26 crc kubenswrapper[5016]: I1011 07:54:26.137270 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7f492c7-31ee-406b-ad5e-9f6b8db63af0-config\") pod \"dnsmasq-dns-77597f887-m66hn\" (UID: \"d7f492c7-31ee-406b-ad5e-9f6b8db63af0\") " pod="openstack/dnsmasq-dns-77597f887-m66hn"
Oct 11 07:54:26 crc kubenswrapper[5016]: I1011 07:54:26.137344 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7f492c7-31ee-406b-ad5e-9f6b8db63af0-dns-svc\") pod \"dnsmasq-dns-77597f887-m66hn\" (UID: \"d7f492c7-31ee-406b-ad5e-9f6b8db63af0\") " pod="openstack/dnsmasq-dns-77597f887-m66hn"
Oct 11 07:54:26 crc kubenswrapper[5016]: I1011 07:54:26.137367 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpl7z\" (UniqueName: \"kubernetes.io/projected/d7f492c7-31ee-406b-ad5e-9f6b8db63af0-kube-api-access-dpl7z\") pod \"dnsmasq-dns-77597f887-m66hn\" (UID: \"d7f492c7-31ee-406b-ad5e-9f6b8db63af0\") " pod="openstack/dnsmasq-dns-77597f887-m66hn"
Oct 11 07:54:26 crc kubenswrapper[5016]: I1011 07:54:26.238302 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpl7z\" (UniqueName: \"kubernetes.io/projected/d7f492c7-31ee-406b-ad5e-9f6b8db63af0-kube-api-access-dpl7z\") pod \"dnsmasq-dns-77597f887-m66hn\" (UID: \"d7f492c7-31ee-406b-ad5e-9f6b8db63af0\") " pod="openstack/dnsmasq-dns-77597f887-m66hn"
Oct 11 07:54:26 crc kubenswrapper[5016]: I1011 07:54:26.238474 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7f492c7-31ee-406b-ad5e-9f6b8db63af0-config\") pod \"dnsmasq-dns-77597f887-m66hn\" (UID: \"d7f492c7-31ee-406b-ad5e-9f6b8db63af0\") " pod="openstack/dnsmasq-dns-77597f887-m66hn"
Oct 11 07:54:26 crc kubenswrapper[5016]: I1011 07:54:26.238521 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7f492c7-31ee-406b-ad5e-9f6b8db63af0-dns-svc\") pod \"dnsmasq-dns-77597f887-m66hn\" (UID: \"d7f492c7-31ee-406b-ad5e-9f6b8db63af0\") " pod="openstack/dnsmasq-dns-77597f887-m66hn"
Oct 11 07:54:26 crc kubenswrapper[5016]: I1011 07:54:26.239320 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7f492c7-31ee-406b-ad5e-9f6b8db63af0-dns-svc\") pod \"dnsmasq-dns-77597f887-m66hn\" (UID: \"d7f492c7-31ee-406b-ad5e-9f6b8db63af0\") " pod="openstack/dnsmasq-dns-77597f887-m66hn"
Oct 11 07:54:26 crc kubenswrapper[5016]: I1011 07:54:26.241824 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7f492c7-31ee-406b-ad5e-9f6b8db63af0-config\") pod \"dnsmasq-dns-77597f887-m66hn\" (UID: \"d7f492c7-31ee-406b-ad5e-9f6b8db63af0\") " pod="openstack/dnsmasq-dns-77597f887-m66hn"
Oct 11 07:54:26 crc kubenswrapper[5016]: I1011 07:54:26.278850 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpl7z\" (UniqueName: \"kubernetes.io/projected/d7f492c7-31ee-406b-ad5e-9f6b8db63af0-kube-api-access-dpl7z\") pod \"dnsmasq-dns-77597f887-m66hn\" (UID: \"d7f492c7-31ee-406b-ad5e-9f6b8db63af0\") " pod="openstack/dnsmasq-dns-77597f887-m66hn"
Oct 11 07:54:26 crc kubenswrapper[5016]: I1011 07:54:26.464105 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77597f887-m66hn"
Oct 11 07:54:26 crc kubenswrapper[5016]: I1011 07:54:26.903349 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Oct 11 07:54:26 crc kubenswrapper[5016]: I1011 07:54:26.904521 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Oct 11 07:54:26 crc kubenswrapper[5016]: I1011 07:54:26.908123 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Oct 11 07:54:26 crc kubenswrapper[5016]: I1011 07:54:26.908228 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Oct 11 07:54:26 crc kubenswrapper[5016]: I1011 07:54:26.910056 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-665lw"
Oct 11 07:54:26 crc kubenswrapper[5016]: I1011 07:54:26.910226 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Oct 11 07:54:26 crc kubenswrapper[5016]: I1011 07:54:26.910387 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Oct 11 07:54:26 crc kubenswrapper[5016]: I1011 07:54:26.910585 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Oct 11 07:54:26 crc kubenswrapper[5016]: I1011 07:54:26.911131 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Oct 11 07:54:26 crc kubenswrapper[5016]: I1011 07:54:26.916944 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.048556 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67a018eb-911e-4491-9dae-a1dfb3172e05-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.048605 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67a018eb-911e-4491-9dae-a1dfb3172e05-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.048624 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67a018eb-911e-4491-9dae-a1dfb3172e05-config-data\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.048676 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.048695 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67a018eb-911e-4491-9dae-a1dfb3172e05-pod-info\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.048762 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67a018eb-911e-4491-9dae-a1dfb3172e05-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.048780 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9tmz\" (UniqueName: \"kubernetes.io/projected/67a018eb-911e-4491-9dae-a1dfb3172e05-kube-api-access-c9tmz\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.048850 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67a018eb-911e-4491-9dae-a1dfb3172e05-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.049806 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67a018eb-911e-4491-9dae-a1dfb3172e05-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.049836 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67a018eb-911e-4491-9dae-a1dfb3172e05-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.049868 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67a018eb-911e-4491-9dae-a1dfb3172e05-server-conf\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.151120 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67a018eb-911e-4491-9dae-a1dfb3172e05-config-data\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.151162 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.151180 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67a018eb-911e-4491-9dae-a1dfb3172e05-pod-info\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.151200 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67a018eb-911e-4491-9dae-a1dfb3172e05-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.151218 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9tmz\" (UniqueName: \"kubernetes.io/projected/67a018eb-911e-4491-9dae-a1dfb3172e05-kube-api-access-c9tmz\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.151243 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67a018eb-911e-4491-9dae-a1dfb3172e05-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.151289 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67a018eb-911e-4491-9dae-a1dfb3172e05-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.151305 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67a018eb-911e-4491-9dae-a1dfb3172e05-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.151333 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67a018eb-911e-4491-9dae-a1dfb3172e05-server-conf\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.151374 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67a018eb-911e-4491-9dae-a1dfb3172e05-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.151399 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67a018eb-911e-4491-9dae-a1dfb3172e05-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.151667 5016 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.152224 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67a018eb-911e-4491-9dae-a1dfb3172e05-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.152291 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67a018eb-911e-4491-9dae-a1dfb3172e05-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.152808 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67a018eb-911e-4491-9dae-a1dfb3172e05-config-data\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.152840 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67a018eb-911e-4491-9dae-a1dfb3172e05-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.154782 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67a018eb-911e-4491-9dae-a1dfb3172e05-server-conf\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.157574 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67a018eb-911e-4491-9dae-a1dfb3172e05-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.158392 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67a018eb-911e-4491-9dae-a1dfb3172e05-pod-info\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.158599 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67a018eb-911e-4491-9dae-a1dfb3172e05-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.158612 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67a018eb-911e-4491-9dae-a1dfb3172e05-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.174190 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9tmz\" (UniqueName: \"kubernetes.io/projected/67a018eb-911e-4491-9dae-a1dfb3172e05-kube-api-access-c9tmz\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.174809 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.218318 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.220289 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.223948 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.224151 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.224209 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.224338 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.224532 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.224642 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.229128 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-5mm85"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.235834 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.271226 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.354852 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2fkv\" (UniqueName: \"kubernetes.io/projected/bae29196-1d16-4563-9e7d-0981a96a352f-kube-api-access-x2fkv\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.355112 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bae29196-1d16-4563-9e7d-0981a96a352f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.355224 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bae29196-1d16-4563-9e7d-0981a96a352f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.355313 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bae29196-1d16-4563-9e7d-0981a96a352f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.355434 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.355533 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bae29196-1d16-4563-9e7d-0981a96a352f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.355632 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bae29196-1d16-4563-9e7d-0981a96a352f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.355781 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bae29196-1d16-4563-9e7d-0981a96a352f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.355909 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bae29196-1d16-4563-9e7d-0981a96a352f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.356041 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bae29196-1d16-4563-9e7d-0981a96a352f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.356153 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bae29196-1d16-4563-9e7d-0981a96a352f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.457394 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bae29196-1d16-4563-9e7d-0981a96a352f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.457439 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bae29196-1d16-4563-9e7d-0981a96a352f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.457465 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bae29196-1d16-4563-9e7d-0981a96a352f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.457490 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bae29196-1d16-4563-9e7d-0981a96a352f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.457510 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2fkv\" (UniqueName: \"kubernetes.io/projected/bae29196-1d16-4563-9e7d-0981a96a352f-kube-api-access-x2fkv\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.457529 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bae29196-1d16-4563-9e7d-0981a96a352f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.457558 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bae29196-1d16-4563-9e7d-0981a96a352f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.457578 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bae29196-1d16-4563-9e7d-0981a96a352f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.457607 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.457622 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bae29196-1d16-4563-9e7d-0981a96a352f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.457642 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bae29196-1d16-4563-9e7d-0981a96a352f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.458959 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bae29196-1d16-4563-9e7d-0981a96a352f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.459119 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bae29196-1d16-4563-9e7d-0981a96a352f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.459264 5016 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.459523 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bae29196-1d16-4563-9e7d-0981a96a352f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.459875 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bae29196-1d16-4563-9e7d-0981a96a352f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0"
Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.460771 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bae29196-1d16-4563-9e7d-0981a96a352f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.465521 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bae29196-1d16-4563-9e7d-0981a96a352f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.465557 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bae29196-1d16-4563-9e7d-0981a96a352f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.465803 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bae29196-1d16-4563-9e7d-0981a96a352f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.466170 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bae29196-1d16-4563-9e7d-0981a96a352f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.477030 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2fkv\" (UniqueName: \"kubernetes.io/projected/bae29196-1d16-4563-9e7d-0981a96a352f-kube-api-access-x2fkv\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.483681 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:54:27 crc kubenswrapper[5016]: I1011 07:54:27.554300 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.544725 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.546192 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.553296 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.554575 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.555070 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.555245 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.555782 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-mr8lk" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.558565 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.570779 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.676119 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eee46d88-d3cf-428a-9808-f9bef1f292b7-config-data-default\") pod \"openstack-galera-0\" (UID: \"eee46d88-d3cf-428a-9808-f9bef1f292b7\") " pod="openstack/openstack-galera-0" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.676184 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/eee46d88-d3cf-428a-9808-f9bef1f292b7-config-data-generated\") pod \"openstack-galera-0\" (UID: \"eee46d88-d3cf-428a-9808-f9bef1f292b7\") " pod="openstack/openstack-galera-0" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.676207 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eee46d88-d3cf-428a-9808-f9bef1f292b7-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"eee46d88-d3cf-428a-9808-f9bef1f292b7\") " pod="openstack/openstack-galera-0" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.676234 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zkzm\" (UniqueName: \"kubernetes.io/projected/eee46d88-d3cf-428a-9808-f9bef1f292b7-kube-api-access-7zkzm\") pod \"openstack-galera-0\" (UID: \"eee46d88-d3cf-428a-9808-f9bef1f292b7\") " pod="openstack/openstack-galera-0" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.676266 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/eee46d88-d3cf-428a-9808-f9bef1f292b7-secrets\") pod \"openstack-galera-0\" (UID: \"eee46d88-d3cf-428a-9808-f9bef1f292b7\") " pod="openstack/openstack-galera-0" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.676352 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eee46d88-d3cf-428a-9808-f9bef1f292b7-kolla-config\") pod \"openstack-galera-0\" (UID: \"eee46d88-d3cf-428a-9808-f9bef1f292b7\") " 
pod="openstack/openstack-galera-0" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.676536 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eee46d88-d3cf-428a-9808-f9bef1f292b7-operator-scripts\") pod \"openstack-galera-0\" (UID: \"eee46d88-d3cf-428a-9808-f9bef1f292b7\") " pod="openstack/openstack-galera-0" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.676568 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eee46d88-d3cf-428a-9808-f9bef1f292b7-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"eee46d88-d3cf-428a-9808-f9bef1f292b7\") " pod="openstack/openstack-galera-0" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.676608 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"eee46d88-d3cf-428a-9808-f9bef1f292b7\") " pod="openstack/openstack-galera-0" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.778254 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eee46d88-d3cf-428a-9808-f9bef1f292b7-operator-scripts\") pod \"openstack-galera-0\" (UID: \"eee46d88-d3cf-428a-9808-f9bef1f292b7\") " pod="openstack/openstack-galera-0" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.778297 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eee46d88-d3cf-428a-9808-f9bef1f292b7-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"eee46d88-d3cf-428a-9808-f9bef1f292b7\") " pod="openstack/openstack-galera-0" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.778319 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"eee46d88-d3cf-428a-9808-f9bef1f292b7\") " pod="openstack/openstack-galera-0" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.778409 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eee46d88-d3cf-428a-9808-f9bef1f292b7-config-data-default\") pod \"openstack-galera-0\" (UID: \"eee46d88-d3cf-428a-9808-f9bef1f292b7\") " pod="openstack/openstack-galera-0" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.778452 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eee46d88-d3cf-428a-9808-f9bef1f292b7-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"eee46d88-d3cf-428a-9808-f9bef1f292b7\") " pod="openstack/openstack-galera-0" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.778487 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/eee46d88-d3cf-428a-9808-f9bef1f292b7-config-data-generated\") pod \"openstack-galera-0\" (UID: \"eee46d88-d3cf-428a-9808-f9bef1f292b7\") " pod="openstack/openstack-galera-0" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.778522 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7zkzm\" (UniqueName: \"kubernetes.io/projected/eee46d88-d3cf-428a-9808-f9bef1f292b7-kube-api-access-7zkzm\") pod \"openstack-galera-0\" (UID: \"eee46d88-d3cf-428a-9808-f9bef1f292b7\") " pod="openstack/openstack-galera-0" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.778548 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/eee46d88-d3cf-428a-9808-f9bef1f292b7-secrets\") pod \"openstack-galera-0\" (UID: \"eee46d88-d3cf-428a-9808-f9bef1f292b7\") " pod="openstack/openstack-galera-0" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.778594 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eee46d88-d3cf-428a-9808-f9bef1f292b7-kolla-config\") pod \"openstack-galera-0\" (UID: \"eee46d88-d3cf-428a-9808-f9bef1f292b7\") " pod="openstack/openstack-galera-0" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.779120 5016 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"eee46d88-d3cf-428a-9808-f9bef1f292b7\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.779774 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/eee46d88-d3cf-428a-9808-f9bef1f292b7-kolla-config\") pod \"openstack-galera-0\" (UID: \"eee46d88-d3cf-428a-9808-f9bef1f292b7\") " pod="openstack/openstack-galera-0" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.779852 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/eee46d88-d3cf-428a-9808-f9bef1f292b7-config-data-default\") pod \"openstack-galera-0\" (UID: \"eee46d88-d3cf-428a-9808-f9bef1f292b7\") " pod="openstack/openstack-galera-0" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.780024 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/eee46d88-d3cf-428a-9808-f9bef1f292b7-config-data-generated\") pod \"openstack-galera-0\" (UID: \"eee46d88-d3cf-428a-9808-f9bef1f292b7\") " pod="openstack/openstack-galera-0" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.781125 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eee46d88-d3cf-428a-9808-f9bef1f292b7-operator-scripts\") pod \"openstack-galera-0\" (UID: \"eee46d88-d3cf-428a-9808-f9bef1f292b7\") " pod="openstack/openstack-galera-0" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.783937 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eee46d88-d3cf-428a-9808-f9bef1f292b7-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"eee46d88-d3cf-428a-9808-f9bef1f292b7\") " pod="openstack/openstack-galera-0" Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.798260 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/eee46d88-d3cf-428a-9808-f9bef1f292b7-secrets\") pod \"openstack-galera-0\" (UID: \"eee46d88-d3cf-428a-9808-f9bef1f292b7\") " pod="openstack/openstack-galera-0" Oct 11 07:54:28 crc 
Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.798539 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/eee46d88-d3cf-428a-9808-f9bef1f292b7-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"eee46d88-d3cf-428a-9808-f9bef1f292b7\") " pod="openstack/openstack-galera-0"
Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.803522 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zkzm\" (UniqueName: \"kubernetes.io/projected/eee46d88-d3cf-428a-9808-f9bef1f292b7-kube-api-access-7zkzm\") pod \"openstack-galera-0\" (UID: \"eee46d88-d3cf-428a-9808-f9bef1f292b7\") " pod="openstack/openstack-galera-0"
Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.806324 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"eee46d88-d3cf-428a-9808-f9bef1f292b7\") " pod="openstack/openstack-galera-0"
Oct 11 07:54:28 crc kubenswrapper[5016]: I1011 07:54:28.892301 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Oct 11 07:54:29 crc kubenswrapper[5016]: I1011 07:54:29.887409 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Oct 11 07:54:29 crc kubenswrapper[5016]: I1011 07:54:29.892734 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Oct 11 07:54:29 crc kubenswrapper[5016]: I1011 07:54:29.897216 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Oct 11 07:54:29 crc kubenswrapper[5016]: I1011 07:54:29.898350 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Oct 11 07:54:29 crc kubenswrapper[5016]: I1011 07:54:29.898806 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-4hkx4"
Oct 11 07:54:29 crc kubenswrapper[5016]: I1011 07:54:29.898859 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Oct 11 07:54:29 crc kubenswrapper[5016]: I1011 07:54:29.898758 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Oct 11 07:54:29 crc kubenswrapper[5016]: I1011 07:54:29.995288 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9d275b2c-beec-4696-a60f-6a31245767bb\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 07:54:29 crc kubenswrapper[5016]: I1011 07:54:29.995374 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9d275b2c-beec-4696-a60f-6a31245767bb-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9d275b2c-beec-4696-a60f-6a31245767bb\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 07:54:29 crc kubenswrapper[5016]: I1011 07:54:29.995420 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d275b2c-beec-4696-a60f-6a31245767bb-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9d275b2c-beec-4696-a60f-6a31245767bb\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 07:54:29 crc kubenswrapper[5016]: I1011 07:54:29.995442 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9d275b2c-beec-4696-a60f-6a31245767bb-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9d275b2c-beec-4696-a60f-6a31245767bb\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 07:54:29 crc kubenswrapper[5016]: I1011 07:54:29.995458 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9d275b2c-beec-4696-a60f-6a31245767bb-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9d275b2c-beec-4696-a60f-6a31245767bb\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 07:54:29 crc kubenswrapper[5016]: I1011 07:54:29.995497 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d275b2c-beec-4696-a60f-6a31245767bb-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9d275b2c-beec-4696-a60f-6a31245767bb\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 07:54:29 crc kubenswrapper[5016]: I1011 07:54:29.995540 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44z9c\" (UniqueName: \"kubernetes.io/projected/9d275b2c-beec-4696-a60f-6a31245767bb-kube-api-access-44z9c\") pod \"openstack-cell1-galera-0\" (UID: \"9d275b2c-beec-4696-a60f-6a31245767bb\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 07:54:29 crc kubenswrapper[5016]: I1011 07:54:29.997167 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/9d275b2c-beec-4696-a60f-6a31245767bb-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"9d275b2c-beec-4696-a60f-6a31245767bb\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 07:54:29 crc kubenswrapper[5016]: I1011 07:54:29.997233 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d275b2c-beec-4696-a60f-6a31245767bb-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9d275b2c-beec-4696-a60f-6a31245767bb\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.100546 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/9d275b2c-beec-4696-a60f-6a31245767bb-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"9d275b2c-beec-4696-a60f-6a31245767bb\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.100919 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d275b2c-beec-4696-a60f-6a31245767bb-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9d275b2c-beec-4696-a60f-6a31245767bb\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.100963 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9d275b2c-beec-4696-a60f-6a31245767bb\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.101007 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9d275b2c-beec-4696-a60f-6a31245767bb-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9d275b2c-beec-4696-a60f-6a31245767bb\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.101025 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d275b2c-beec-4696-a60f-6a31245767bb-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9d275b2c-beec-4696-a60f-6a31245767bb\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.101047 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9d275b2c-beec-4696-a60f-6a31245767bb-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9d275b2c-beec-4696-a60f-6a31245767bb\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.101065 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9d275b2c-beec-4696-a60f-6a31245767bb-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9d275b2c-beec-4696-a60f-6a31245767bb\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.101083 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d275b2c-beec-4696-a60f-6a31245767bb-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9d275b2c-beec-4696-a60f-6a31245767bb\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.101128 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44z9c\" (UniqueName: \"kubernetes.io/projected/9d275b2c-beec-4696-a60f-6a31245767bb-kube-api-access-44z9c\") pod \"openstack-cell1-galera-0\" (UID: \"9d275b2c-beec-4696-a60f-6a31245767bb\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.101366 5016 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9d275b2c-beec-4696-a60f-6a31245767bb\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0"
Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.101714 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9d275b2c-beec-4696-a60f-6a31245767bb-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9d275b2c-beec-4696-a60f-6a31245767bb\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.102369 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9d275b2c-beec-4696-a60f-6a31245767bb-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9d275b2c-beec-4696-a60f-6a31245767bb\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.102826 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9d275b2c-beec-4696-a60f-6a31245767bb-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9d275b2c-beec-4696-a60f-6a31245767bb\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.102875 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d275b2c-beec-4696-a60f-6a31245767bb-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9d275b2c-beec-4696-a60f-6a31245767bb\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.105508 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d275b2c-beec-4696-a60f-6a31245767bb-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9d275b2c-beec-4696-a60f-6a31245767bb\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.105786 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d275b2c-beec-4696-a60f-6a31245767bb-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9d275b2c-beec-4696-a60f-6a31245767bb\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.118177 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/9d275b2c-beec-4696-a60f-6a31245767bb-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"9d275b2c-beec-4696-a60f-6a31245767bb\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.119060 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44z9c\" (UniqueName: \"kubernetes.io/projected/9d275b2c-beec-4696-a60f-6a31245767bb-kube-api-access-44z9c\") pod \"openstack-cell1-galera-0\" (UID: \"9d275b2c-beec-4696-a60f-6a31245767bb\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.124942 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9d275b2c-beec-4696-a60f-6a31245767bb\") " pod="openstack/openstack-cell1-galera-0"
Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.188576 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.189576 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Need to start a new one" pod="openstack/memcached-0" Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.191810 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.192101 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-245cp" Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.192283 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.206165 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.229099 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.303790 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4gf7\" (UniqueName: \"kubernetes.io/projected/6811d5d2-c174-41f6-a397-0bc4133297e9-kube-api-access-d4gf7\") pod \"memcached-0\" (UID: \"6811d5d2-c174-41f6-a397-0bc4133297e9\") " pod="openstack/memcached-0" Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.303863 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6811d5d2-c174-41f6-a397-0bc4133297e9-config-data\") pod \"memcached-0\" (UID: \"6811d5d2-c174-41f6-a397-0bc4133297e9\") " pod="openstack/memcached-0" Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.304116 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6811d5d2-c174-41f6-a397-0bc4133297e9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6811d5d2-c174-41f6-a397-0bc4133297e9\") " pod="openstack/memcached-0" Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.304316 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6811d5d2-c174-41f6-a397-0bc4133297e9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6811d5d2-c174-41f6-a397-0bc4133297e9\") " pod="openstack/memcached-0" Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.304430 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6811d5d2-c174-41f6-a397-0bc4133297e9-kolla-config\") pod \"memcached-0\" (UID: \"6811d5d2-c174-41f6-a397-0bc4133297e9\") " pod="openstack/memcached-0" Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.405332 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6811d5d2-c174-41f6-a397-0bc4133297e9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6811d5d2-c174-41f6-a397-0bc4133297e9\") " pod="openstack/memcached-0" Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.405394 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6811d5d2-c174-41f6-a397-0bc4133297e9-kolla-config\") pod \"memcached-0\" (UID: \"6811d5d2-c174-41f6-a397-0bc4133297e9\") " pod="openstack/memcached-0" Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.405433 
5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4gf7\" (UniqueName: \"kubernetes.io/projected/6811d5d2-c174-41f6-a397-0bc4133297e9-kube-api-access-d4gf7\") pod \"memcached-0\" (UID: \"6811d5d2-c174-41f6-a397-0bc4133297e9\") " pod="openstack/memcached-0" Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.405460 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6811d5d2-c174-41f6-a397-0bc4133297e9-config-data\") pod \"memcached-0\" (UID: \"6811d5d2-c174-41f6-a397-0bc4133297e9\") " pod="openstack/memcached-0" Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.405533 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6811d5d2-c174-41f6-a397-0bc4133297e9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6811d5d2-c174-41f6-a397-0bc4133297e9\") " pod="openstack/memcached-0" Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.406361 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6811d5d2-c174-41f6-a397-0bc4133297e9-config-data\") pod \"memcached-0\" (UID: \"6811d5d2-c174-41f6-a397-0bc4133297e9\") " pod="openstack/memcached-0" Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.407517 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6811d5d2-c174-41f6-a397-0bc4133297e9-kolla-config\") pod \"memcached-0\" (UID: \"6811d5d2-c174-41f6-a397-0bc4133297e9\") " pod="openstack/memcached-0" Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.408726 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6811d5d2-c174-41f6-a397-0bc4133297e9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6811d5d2-c174-41f6-a397-0bc4133297e9\") " pod="openstack/memcached-0" Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.411645 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6811d5d2-c174-41f6-a397-0bc4133297e9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6811d5d2-c174-41f6-a397-0bc4133297e9\") " pod="openstack/memcached-0" Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.423928 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4gf7\" (UniqueName: \"kubernetes.io/projected/6811d5d2-c174-41f6-a397-0bc4133297e9-kube-api-access-d4gf7\") pod \"memcached-0\" (UID: \"6811d5d2-c174-41f6-a397-0bc4133297e9\") " pod="openstack/memcached-0" Oct 11 07:54:30 crc kubenswrapper[5016]: I1011 07:54:30.507117 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Oct 11 07:54:32 crc kubenswrapper[5016]: I1011 07:54:32.087771 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 11 07:54:32 crc kubenswrapper[5016]: I1011 07:54:32.089042 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 11 07:54:32 crc kubenswrapper[5016]: I1011 07:54:32.090991 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-h2kh8" Oct 11 07:54:32 crc kubenswrapper[5016]: I1011 07:54:32.108544 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 11 07:54:32 crc kubenswrapper[5016]: I1011 07:54:32.239924 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbq2x\" (UniqueName: \"kubernetes.io/projected/d360ab05-372a-4b41-8abb-2c2b4257123c-kube-api-access-rbq2x\") pod \"kube-state-metrics-0\" (UID: \"d360ab05-372a-4b41-8abb-2c2b4257123c\") " pod="openstack/kube-state-metrics-0" Oct 11 07:54:32 crc kubenswrapper[5016]: I1011 07:54:32.341770 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbq2x\" (UniqueName: \"kubernetes.io/projected/d360ab05-372a-4b41-8abb-2c2b4257123c-kube-api-access-rbq2x\") pod \"kube-state-metrics-0\" (UID: \"d360ab05-372a-4b41-8abb-2c2b4257123c\") " pod="openstack/kube-state-metrics-0" Oct 11 07:54:32 crc kubenswrapper[5016]: I1011 07:54:32.358509 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbq2x\" (UniqueName: \"kubernetes.io/projected/d360ab05-372a-4b41-8abb-2c2b4257123c-kube-api-access-rbq2x\") pod \"kube-state-metrics-0\" (UID: \"d360ab05-372a-4b41-8abb-2c2b4257123c\") " pod="openstack/kube-state-metrics-0" Oct 11 07:54:32 crc kubenswrapper[5016]: I1011 07:54:32.443377 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.693141 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-db7s5"] Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.694789 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-db7s5" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.697938 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.698232 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-jrgbw" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.703492 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-db7s5"] Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.704927 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.733301 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-w5nkt"] Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.735030 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-w5nkt" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.748148 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-w5nkt"] Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.795610 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/69f3f361-bd63-4b18-afd7-3c64169af0a8-ovn-controller-tls-certs\") pod \"ovn-controller-db7s5\" (UID: \"69f3f361-bd63-4b18-afd7-3c64169af0a8\") " pod="openstack/ovn-controller-db7s5" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.795678 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnddr\" (UniqueName: \"kubernetes.io/projected/69f3f361-bd63-4b18-afd7-3c64169af0a8-kube-api-access-nnddr\") pod \"ovn-controller-db7s5\" (UID: \"69f3f361-bd63-4b18-afd7-3c64169af0a8\") " pod="openstack/ovn-controller-db7s5" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.795705 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/38a1ec3b-0cfb-4fdf-bcba-a434cf65a726-var-lib\") pod \"ovn-controller-ovs-w5nkt\" (UID: \"38a1ec3b-0cfb-4fdf-bcba-a434cf65a726\") " pod="openstack/ovn-controller-ovs-w5nkt" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.795740 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69f3f361-bd63-4b18-afd7-3c64169af0a8-combined-ca-bundle\") pod \"ovn-controller-db7s5\" (UID: \"69f3f361-bd63-4b18-afd7-3c64169af0a8\") " pod="openstack/ovn-controller-db7s5" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.795764 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/69f3f361-bd63-4b18-afd7-3c64169af0a8-var-log-ovn\") pod \"ovn-controller-db7s5\" (UID: \"69f3f361-bd63-4b18-afd7-3c64169af0a8\") " pod="openstack/ovn-controller-db7s5" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.795783 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq9lz\" (UniqueName: \"kubernetes.io/projected/38a1ec3b-0cfb-4fdf-bcba-a434cf65a726-kube-api-access-bq9lz\") pod \"ovn-controller-ovs-w5nkt\" (UID: \"38a1ec3b-0cfb-4fdf-bcba-a434cf65a726\") " pod="openstack/ovn-controller-ovs-w5nkt" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.795805 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/38a1ec3b-0cfb-4fdf-bcba-a434cf65a726-var-log\") pod \"ovn-controller-ovs-w5nkt\" (UID: \"38a1ec3b-0cfb-4fdf-bcba-a434cf65a726\") " pod="openstack/ovn-controller-ovs-w5nkt" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.796009 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/38a1ec3b-0cfb-4fdf-bcba-a434cf65a726-etc-ovs\") pod \"ovn-controller-ovs-w5nkt\" (UID: \"38a1ec3b-0cfb-4fdf-bcba-a434cf65a726\") " pod="openstack/ovn-controller-ovs-w5nkt" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.796076 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/69f3f361-bd63-4b18-afd7-3c64169af0a8-var-run\") pod \"ovn-controller-db7s5\" (UID: \"69f3f361-bd63-4b18-afd7-3c64169af0a8\") " pod="openstack/ovn-controller-db7s5" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.796256 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69f3f361-bd63-4b18-afd7-3c64169af0a8-scripts\") pod \"ovn-controller-db7s5\" (UID: \"69f3f361-bd63-4b18-afd7-3c64169af0a8\") " pod="openstack/ovn-controller-db7s5" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.796288 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38a1ec3b-0cfb-4fdf-bcba-a434cf65a726-scripts\") pod \"ovn-controller-ovs-w5nkt\" (UID: \"38a1ec3b-0cfb-4fdf-bcba-a434cf65a726\") " pod="openstack/ovn-controller-ovs-w5nkt" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.796333 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/38a1ec3b-0cfb-4fdf-bcba-a434cf65a726-var-run\") pod \"ovn-controller-ovs-w5nkt\" (UID: \"38a1ec3b-0cfb-4fdf-bcba-a434cf65a726\") " pod="openstack/ovn-controller-ovs-w5nkt" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.796385 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/69f3f361-bd63-4b18-afd7-3c64169af0a8-var-run-ovn\") pod \"ovn-controller-db7s5\" (UID: \"69f3f361-bd63-4b18-afd7-3c64169af0a8\") " pod="openstack/ovn-controller-db7s5" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.897510 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69f3f361-bd63-4b18-afd7-3c64169af0a8-scripts\") pod \"ovn-controller-db7s5\" (UID: \"69f3f361-bd63-4b18-afd7-3c64169af0a8\") " pod="openstack/ovn-controller-db7s5" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.897565 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38a1ec3b-0cfb-4fdf-bcba-a434cf65a726-scripts\") pod \"ovn-controller-ovs-w5nkt\" (UID: \"38a1ec3b-0cfb-4fdf-bcba-a434cf65a726\") " pod="openstack/ovn-controller-ovs-w5nkt" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.897599 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/38a1ec3b-0cfb-4fdf-bcba-a434cf65a726-var-run\") pod \"ovn-controller-ovs-w5nkt\" (UID: \"38a1ec3b-0cfb-4fdf-bcba-a434cf65a726\") " pod="openstack/ovn-controller-ovs-w5nkt" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.897633 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/69f3f361-bd63-4b18-afd7-3c64169af0a8-var-run-ovn\") pod \"ovn-controller-db7s5\" (UID: \"69f3f361-bd63-4b18-afd7-3c64169af0a8\") " pod="openstack/ovn-controller-db7s5" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.897697 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/69f3f361-bd63-4b18-afd7-3c64169af0a8-ovn-controller-tls-certs\") pod \"ovn-controller-db7s5\" (UID: 
\"69f3f361-bd63-4b18-afd7-3c64169af0a8\") " pod="openstack/ovn-controller-db7s5" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.897743 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnddr\" (UniqueName: \"kubernetes.io/projected/69f3f361-bd63-4b18-afd7-3c64169af0a8-kube-api-access-nnddr\") pod \"ovn-controller-db7s5\" (UID: \"69f3f361-bd63-4b18-afd7-3c64169af0a8\") " pod="openstack/ovn-controller-db7s5" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.897778 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/38a1ec3b-0cfb-4fdf-bcba-a434cf65a726-var-lib\") pod \"ovn-controller-ovs-w5nkt\" (UID: \"38a1ec3b-0cfb-4fdf-bcba-a434cf65a726\") " pod="openstack/ovn-controller-ovs-w5nkt" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.897817 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69f3f361-bd63-4b18-afd7-3c64169af0a8-combined-ca-bundle\") pod \"ovn-controller-db7s5\" (UID: \"69f3f361-bd63-4b18-afd7-3c64169af0a8\") " pod="openstack/ovn-controller-db7s5" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.897859 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/69f3f361-bd63-4b18-afd7-3c64169af0a8-var-log-ovn\") pod \"ovn-controller-db7s5\" (UID: \"69f3f361-bd63-4b18-afd7-3c64169af0a8\") " pod="openstack/ovn-controller-db7s5" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.897896 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq9lz\" (UniqueName: \"kubernetes.io/projected/38a1ec3b-0cfb-4fdf-bcba-a434cf65a726-kube-api-access-bq9lz\") pod \"ovn-controller-ovs-w5nkt\" (UID: \"38a1ec3b-0cfb-4fdf-bcba-a434cf65a726\") " pod="openstack/ovn-controller-ovs-w5nkt" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.897928 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/38a1ec3b-0cfb-4fdf-bcba-a434cf65a726-var-log\") pod \"ovn-controller-ovs-w5nkt\" (UID: \"38a1ec3b-0cfb-4fdf-bcba-a434cf65a726\") " pod="openstack/ovn-controller-ovs-w5nkt" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.897957 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/38a1ec3b-0cfb-4fdf-bcba-a434cf65a726-etc-ovs\") pod \"ovn-controller-ovs-w5nkt\" (UID: \"38a1ec3b-0cfb-4fdf-bcba-a434cf65a726\") " pod="openstack/ovn-controller-ovs-w5nkt" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.897980 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/69f3f361-bd63-4b18-afd7-3c64169af0a8-var-run\") pod \"ovn-controller-db7s5\" (UID: \"69f3f361-bd63-4b18-afd7-3c64169af0a8\") " pod="openstack/ovn-controller-db7s5" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.898573 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/69f3f361-bd63-4b18-afd7-3c64169af0a8-var-run\") pod \"ovn-controller-db7s5\" (UID: \"69f3f361-bd63-4b18-afd7-3c64169af0a8\") " pod="openstack/ovn-controller-db7s5" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.898633 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-lib\" (UniqueName: \"kubernetes.io/host-path/38a1ec3b-0cfb-4fdf-bcba-a434cf65a726-var-lib\") pod \"ovn-controller-ovs-w5nkt\" (UID: \"38a1ec3b-0cfb-4fdf-bcba-a434cf65a726\") " pod="openstack/ovn-controller-ovs-w5nkt" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.898885 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/69f3f361-bd63-4b18-afd7-3c64169af0a8-var-run-ovn\") pod \"ovn-controller-db7s5\" (UID: \"69f3f361-bd63-4b18-afd7-3c64169af0a8\") " pod="openstack/ovn-controller-db7s5" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.899000 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/38a1ec3b-0cfb-4fdf-bcba-a434cf65a726-var-run\") pod \"ovn-controller-ovs-w5nkt\" (UID: \"38a1ec3b-0cfb-4fdf-bcba-a434cf65a726\") " pod="openstack/ovn-controller-ovs-w5nkt" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.899572 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/69f3f361-bd63-4b18-afd7-3c64169af0a8-var-log-ovn\") pod \"ovn-controller-db7s5\" (UID: \"69f3f361-bd63-4b18-afd7-3c64169af0a8\") " pod="openstack/ovn-controller-db7s5" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.899784 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38a1ec3b-0cfb-4fdf-bcba-a434cf65a726-scripts\") pod \"ovn-controller-ovs-w5nkt\" (UID: \"38a1ec3b-0cfb-4fdf-bcba-a434cf65a726\") " pod="openstack/ovn-controller-ovs-w5nkt" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.899799 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69f3f361-bd63-4b18-afd7-3c64169af0a8-scripts\") pod \"ovn-controller-db7s5\" (UID: \"69f3f361-bd63-4b18-afd7-3c64169af0a8\") " pod="openstack/ovn-controller-db7s5" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.899900 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/38a1ec3b-0cfb-4fdf-bcba-a434cf65a726-var-log\") pod \"ovn-controller-ovs-w5nkt\" (UID: \"38a1ec3b-0cfb-4fdf-bcba-a434cf65a726\") " pod="openstack/ovn-controller-ovs-w5nkt" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.899937 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/38a1ec3b-0cfb-4fdf-bcba-a434cf65a726-etc-ovs\") pod \"ovn-controller-ovs-w5nkt\" (UID: \"38a1ec3b-0cfb-4fdf-bcba-a434cf65a726\") " pod="openstack/ovn-controller-ovs-w5nkt" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.904925 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/69f3f361-bd63-4b18-afd7-3c64169af0a8-ovn-controller-tls-certs\") pod \"ovn-controller-db7s5\" (UID: \"69f3f361-bd63-4b18-afd7-3c64169af0a8\") " pod="openstack/ovn-controller-db7s5" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.905087 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69f3f361-bd63-4b18-afd7-3c64169af0a8-combined-ca-bundle\") pod \"ovn-controller-db7s5\" (UID: \"69f3f361-bd63-4b18-afd7-3c64169af0a8\") " pod="openstack/ovn-controller-db7s5" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.915352 5016 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnddr\" (UniqueName: \"kubernetes.io/projected/69f3f361-bd63-4b18-afd7-3c64169af0a8-kube-api-access-nnddr\") pod \"ovn-controller-db7s5\" (UID: \"69f3f361-bd63-4b18-afd7-3c64169af0a8\") " pod="openstack/ovn-controller-db7s5" Oct 11 07:54:35 crc kubenswrapper[5016]: I1011 07:54:35.919281 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq9lz\" (UniqueName: \"kubernetes.io/projected/38a1ec3b-0cfb-4fdf-bcba-a434cf65a726-kube-api-access-bq9lz\") pod \"ovn-controller-ovs-w5nkt\" (UID: \"38a1ec3b-0cfb-4fdf-bcba-a434cf65a726\") " pod="openstack/ovn-controller-ovs-w5nkt" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.052328 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-db7s5" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.071145 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-w5nkt" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.598891 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.608621 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.613000 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.613322 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.613437 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.613628 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-hn72x" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.613784 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.620037 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.712335 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/775dfe20-0fff-42b7-863a-76e8deb52526-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"775dfe20-0fff-42b7-863a-76e8deb52526\") " pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.712760 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/775dfe20-0fff-42b7-863a-76e8deb52526-config\") pod \"ovsdbserver-sb-0\" (UID: \"775dfe20-0fff-42b7-863a-76e8deb52526\") " pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.712790 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/775dfe20-0fff-42b7-863a-76e8deb52526-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"775dfe20-0fff-42b7-863a-76e8deb52526\") " pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:36 crc 
kubenswrapper[5016]: I1011 07:54:36.712819 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddb9w\" (UniqueName: \"kubernetes.io/projected/775dfe20-0fff-42b7-863a-76e8deb52526-kube-api-access-ddb9w\") pod \"ovsdbserver-sb-0\" (UID: \"775dfe20-0fff-42b7-863a-76e8deb52526\") " pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.712869 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"775dfe20-0fff-42b7-863a-76e8deb52526\") " pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.712893 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/775dfe20-0fff-42b7-863a-76e8deb52526-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"775dfe20-0fff-42b7-863a-76e8deb52526\") " pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.712960 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/775dfe20-0fff-42b7-863a-76e8deb52526-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"775dfe20-0fff-42b7-863a-76e8deb52526\") " pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.712991 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/775dfe20-0fff-42b7-863a-76e8deb52526-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"775dfe20-0fff-42b7-863a-76e8deb52526\") " pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.814787 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"775dfe20-0fff-42b7-863a-76e8deb52526\") " pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.814843 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/775dfe20-0fff-42b7-863a-76e8deb52526-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"775dfe20-0fff-42b7-863a-76e8deb52526\") " pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.814920 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/775dfe20-0fff-42b7-863a-76e8deb52526-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"775dfe20-0fff-42b7-863a-76e8deb52526\") " pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.814949 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/775dfe20-0fff-42b7-863a-76e8deb52526-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"775dfe20-0fff-42b7-863a-76e8deb52526\") " pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.815018 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/775dfe20-0fff-42b7-863a-76e8deb52526-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"775dfe20-0fff-42b7-863a-76e8deb52526\") " pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.815042 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/775dfe20-0fff-42b7-863a-76e8deb52526-config\") pod \"ovsdbserver-sb-0\" (UID: \"775dfe20-0fff-42b7-863a-76e8deb52526\") " pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.815066 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/775dfe20-0fff-42b7-863a-76e8deb52526-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"775dfe20-0fff-42b7-863a-76e8deb52526\") " pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.815089 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddb9w\" (UniqueName: \"kubernetes.io/projected/775dfe20-0fff-42b7-863a-76e8deb52526-kube-api-access-ddb9w\") pod \"ovsdbserver-sb-0\" (UID: \"775dfe20-0fff-42b7-863a-76e8deb52526\") " pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.815234 5016 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"775dfe20-0fff-42b7-863a-76e8deb52526\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.817852 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/775dfe20-0fff-42b7-863a-76e8deb52526-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"775dfe20-0fff-42b7-863a-76e8deb52526\") " pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.818293 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/775dfe20-0fff-42b7-863a-76e8deb52526-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"775dfe20-0fff-42b7-863a-76e8deb52526\") " pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.818311 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/775dfe20-0fff-42b7-863a-76e8deb52526-config\") pod \"ovsdbserver-sb-0\" (UID: \"775dfe20-0fff-42b7-863a-76e8deb52526\") " pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.822323 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/775dfe20-0fff-42b7-863a-76e8deb52526-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"775dfe20-0fff-42b7-863a-76e8deb52526\") " pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.822891 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/775dfe20-0fff-42b7-863a-76e8deb52526-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"775dfe20-0fff-42b7-863a-76e8deb52526\") " pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.824011 5016 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/775dfe20-0fff-42b7-863a-76e8deb52526-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"775dfe20-0fff-42b7-863a-76e8deb52526\") " pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.831290 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddb9w\" (UniqueName: \"kubernetes.io/projected/775dfe20-0fff-42b7-863a-76e8deb52526-kube-api-access-ddb9w\") pod \"ovsdbserver-sb-0\" (UID: \"775dfe20-0fff-42b7-863a-76e8deb52526\") " pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.842996 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"775dfe20-0fff-42b7-863a-76e8deb52526\") " pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:36 crc kubenswrapper[5016]: I1011 07:54:36.997414 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:37 crc kubenswrapper[5016]: I1011 07:54:37.059480 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-644597f84c-cd8jm"] Oct 11 07:54:37 crc kubenswrapper[5016]: I1011 07:54:37.123927 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 07:54:37 crc kubenswrapper[5016]: I1011 07:54:37.124007 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 07:54:37 crc kubenswrapper[5016]: W1011 07:54:37.428543 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c1afbf6_5654_4f6f_b4c3_70fc041b803a.slice/crio-b8e51aeafea717937e3d72c0b02062acf48f5cbd695522061f0664d763ac9ce5 WatchSource:0}: Error finding container b8e51aeafea717937e3d72c0b02062acf48f5cbd695522061f0664d763ac9ce5: Status 404 returned error can't find the container with id b8e51aeafea717937e3d72c0b02062acf48f5cbd695522061f0664d763ac9ce5 Oct 11 07:54:37 crc kubenswrapper[5016]: E1011 07:54:37.499962 5016 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df" Oct 11 07:54:37 crc kubenswrapper[5016]: E1011 07:54:37.500384 5016 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gszlm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-758b79db4c-9b5sb_openstack(089d898a-11ca-4986-8dee-2efa6b4dd050): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 11 07:54:37 crc kubenswrapper[5016]: E1011 07:54:37.501890 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-758b79db4c-9b5sb" podUID="089d898a-11ca-4986-8dee-2efa6b4dd050" Oct 11 07:54:37 crc kubenswrapper[5016]: E1011 07:54:37.502630 5016 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df" Oct 11 07:54:37 crc kubenswrapper[5016]: E1011 07:54:37.502753 5016 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-45kq2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7bfcb9d745-9nf4f_openstack(94aa76b7-b531-4632-b315-ce40f9a54e06): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 11 07:54:37 crc kubenswrapper[5016]: E1011 07:54:37.506074 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7bfcb9d745-9nf4f" podUID="94aa76b7-b531-4632-b315-ce40f9a54e06" Oct 11 07:54:37 crc kubenswrapper[5016]: I1011 07:54:37.925398 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 11 07:54:37 crc kubenswrapper[5016]: I1011 07:54:37.939646 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77597f887-m66hn"] Oct 11 07:54:37 crc kubenswrapper[5016]: W1011 07:54:37.941791 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67a018eb_911e_4491_9dae_a1dfb3172e05.slice/crio-5e68c267711d3241cfc717117769c9604f357e37beae35151d523acfa8635879 WatchSource:0}: Error finding container 5e68c267711d3241cfc717117769c9604f357e37beae35151d523acfa8635879: Status 404 returned error can't find the container with id 5e68c267711d3241cfc717117769c9604f357e37beae35151d523acfa8635879 Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.048944 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.355469 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-db7s5"] Oct 11 07:54:38 crc kubenswrapper[5016]: W1011 07:54:38.378220 5016 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69f3f361_bd63_4b18_afd7_3c64169af0a8.slice/crio-900aa8a7545105f153eae543669b2eeb06797f17ae86628191558551e83719f0 WatchSource:0}: Error finding container 900aa8a7545105f153eae543669b2eeb06797f17ae86628191558551e83719f0: Status 404 returned error can't find the container with id 900aa8a7545105f153eae543669b2eeb06797f17ae86628191558551e83719f0 Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.381294 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.399080 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 11 07:54:38 crc kubenswrapper[5016]: W1011 07:54:38.399832 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeee46d88_d3cf_428a_9808_f9bef1f292b7.slice/crio-be306543775fa6f981470f97f97a34ecd23fa771a0aee0be695da5b5695868ed WatchSource:0}: Error finding container be306543775fa6f981470f97f97a34ecd23fa771a0aee0be695da5b5695868ed: Status 404 returned error can't find the container with id be306543775fa6f981470f97f97a34ecd23fa771a0aee0be695da5b5695868ed Oct 11 07:54:38 crc kubenswrapper[5016]: W1011 07:54:38.405979 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d275b2c_beec_4696_a60f_6a31245767bb.slice/crio-48170e1837548e98378ce78319844ad3608b61999d843238479ce2c1e1ca12c2 WatchSource:0}: Error finding container 48170e1837548e98378ce78319844ad3608b61999d843238479ce2c1e1ca12c2: Status 404 returned error can't find the container with id 48170e1837548e98378ce78319844ad3608b61999d843238479ce2c1e1ca12c2 Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.409248 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 11 07:54:38 crc kubenswrapper[5016]: W1011 07:54:38.417830 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd360ab05_372a_4b41_8abb_2c2b4257123c.slice/crio-fca8ac86458843fecda9a0df16a3a7992351fd5b403adaff45207bbbacb3e78b WatchSource:0}: Error finding container fca8ac86458843fecda9a0df16a3a7992351fd5b403adaff45207bbbacb3e78b: Status 404 returned error can't find the container with id fca8ac86458843fecda9a0df16a3a7992351fd5b403adaff45207bbbacb3e78b Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.428763 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 11 07:54:38 crc kubenswrapper[5016]: W1011 07:54:38.439609 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6811d5d2_c174_41f6_a397_0bc4133297e9.slice/crio-949285df232b96e168affee998854a1a55a066ce1bcb4fe699ec8e090e8f2b8d WatchSource:0}: Error finding container 949285df232b96e168affee998854a1a55a066ce1bcb4fe699ec8e090e8f2b8d: Status 404 returned error can't find the container with id 949285df232b96e168affee998854a1a55a066ce1bcb4fe699ec8e090e8f2b8d Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.442352 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"67a018eb-911e-4491-9dae-a1dfb3172e05","Type":"ContainerStarted","Data":"5e68c267711d3241cfc717117769c9604f357e37beae35151d523acfa8635879"} Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 
07:54:38.444359 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d360ab05-372a-4b41-8abb-2c2b4257123c","Type":"ContainerStarted","Data":"fca8ac86458843fecda9a0df16a3a7992351fd5b403adaff45207bbbacb3e78b"} Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.446118 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bae29196-1d16-4563-9e7d-0981a96a352f","Type":"ContainerStarted","Data":"4e578620178d7cef30bbb0915f3be000b0cb7383e788ab9f16312fe5e07264a2"} Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.448280 5016 generic.go:334] "Generic (PLEG): container finished" podID="d7f492c7-31ee-406b-ad5e-9f6b8db63af0" containerID="e86c6cf4194f1a8aed5902496893af469e657fde65811a38c15d004bf1604745" exitCode=0 Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.448340 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77597f887-m66hn" event={"ID":"d7f492c7-31ee-406b-ad5e-9f6b8db63af0","Type":"ContainerDied","Data":"e86c6cf4194f1a8aed5902496893af469e657fde65811a38c15d004bf1604745"} Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.448366 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77597f887-m66hn" event={"ID":"d7f492c7-31ee-406b-ad5e-9f6b8db63af0","Type":"ContainerStarted","Data":"f43fa220d8cbdabc4f359ebc1e3058c937ea651957cb5a2fca4c44bc9338c725"} Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.453150 5016 generic.go:334] "Generic (PLEG): container finished" podID="8c1afbf6-5654-4f6f-b4c3-70fc041b803a" containerID="aaadd4c6264caa7f2e39fcc5819b1d5df53857317b92bb55c51015a3b07bccb1" exitCode=0 Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.453235 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-644597f84c-cd8jm" event={"ID":"8c1afbf6-5654-4f6f-b4c3-70fc041b803a","Type":"ContainerDied","Data":"aaadd4c6264caa7f2e39fcc5819b1d5df53857317b92bb55c51015a3b07bccb1"} Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.453265 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-644597f84c-cd8jm" event={"ID":"8c1afbf6-5654-4f6f-b4c3-70fc041b803a","Type":"ContainerStarted","Data":"b8e51aeafea717937e3d72c0b02062acf48f5cbd695522061f0664d763ac9ce5"} Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.454550 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9d275b2c-beec-4696-a60f-6a31245767bb","Type":"ContainerStarted","Data":"48170e1837548e98378ce78319844ad3608b61999d843238479ce2c1e1ca12c2"} Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.456204 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-db7s5" event={"ID":"69f3f361-bd63-4b18-afd7-3c64169af0a8","Type":"ContainerStarted","Data":"900aa8a7545105f153eae543669b2eeb06797f17ae86628191558551e83719f0"} Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.457970 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"eee46d88-d3cf-428a-9808-f9bef1f292b7","Type":"ContainerStarted","Data":"be306543775fa6f981470f97f97a34ecd23fa771a0aee0be695da5b5695868ed"} Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.490749 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 11 07:54:38 crc kubenswrapper[5016]: W1011 07:54:38.509263 5016 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod775dfe20_0fff_42b7_863a_76e8deb52526.slice/crio-1c730df541dba56230288d68e0aa9c8a864831b1e038c6ad372ae528f17c3130 WatchSource:0}: Error finding container 1c730df541dba56230288d68e0aa9c8a864831b1e038c6ad372ae528f17c3130: Status 404 returned error can't find the container with id 1c730df541dba56230288d68e0aa9c8a864831b1e038c6ad372ae528f17c3130 Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.597286 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-w5nkt"] Oct 11 07:54:38 crc kubenswrapper[5016]: W1011 07:54:38.612454 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38a1ec3b_0cfb_4fdf_bcba_a434cf65a726.slice/crio-5c70ebb166c06d8944c54556363456958512d77fb428e2da2b821936c10cb6a3 WatchSource:0}: Error finding container 5c70ebb166c06d8944c54556363456958512d77fb428e2da2b821936c10cb6a3: Status 404 returned error can't find the container with id 5c70ebb166c06d8944c54556363456958512d77fb428e2da2b821936c10cb6a3 Oct 11 07:54:38 crc kubenswrapper[5016]: E1011 07:54:38.654185 5016 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Oct 11 07:54:38 crc kubenswrapper[5016]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/8c1afbf6-5654-4f6f-b4c3-70fc041b803a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Oct 11 07:54:38 crc kubenswrapper[5016]: > podSandboxID="b8e51aeafea717937e3d72c0b02062acf48f5cbd695522061f0664d763ac9ce5" Oct 11 07:54:38 crc kubenswrapper[5016]: E1011 07:54:38.654342 5016 kuberuntime_manager.go:1274] "Unhandled Error" err=< Oct 11 07:54:38 crc kubenswrapper[5016]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:c4e71b2158fd939dad8b8e705273493051d3023273d23b279f2699dce6db33df,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jx5mv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-644597f84c-cd8jm_openstack(8c1afbf6-5654-4f6f-b4c3-70fc041b803a): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/8c1afbf6-5654-4f6f-b4c3-70fc041b803a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Oct 11 07:54:38 crc kubenswrapper[5016]: > logger="UnhandledError" Oct 11 07:54:38 crc kubenswrapper[5016]: E1011 07:54:38.656330 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/8c1afbf6-5654-4f6f-b4c3-70fc041b803a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-644597f84c-cd8jm" podUID="8c1afbf6-5654-4f6f-b4c3-70fc041b803a" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.735897 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-lc7pq"] Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.737249 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-lc7pq" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.739111 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.748123 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-lc7pq"] Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.860534 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/82c41981-5d91-478e-99ea-351277ca347e-ovs-rundir\") pod \"ovn-controller-metrics-lc7pq\" (UID: \"82c41981-5d91-478e-99ea-351277ca347e\") " pod="openstack/ovn-controller-metrics-lc7pq" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.860586 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s8r9\" (UniqueName: \"kubernetes.io/projected/82c41981-5d91-478e-99ea-351277ca347e-kube-api-access-9s8r9\") pod \"ovn-controller-metrics-lc7pq\" (UID: \"82c41981-5d91-478e-99ea-351277ca347e\") " pod="openstack/ovn-controller-metrics-lc7pq" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.860633 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/82c41981-5d91-478e-99ea-351277ca347e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lc7pq\" (UID: \"82c41981-5d91-478e-99ea-351277ca347e\") " pod="openstack/ovn-controller-metrics-lc7pq" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.860683 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/82c41981-5d91-478e-99ea-351277ca347e-ovn-rundir\") pod \"ovn-controller-metrics-lc7pq\" (UID: \"82c41981-5d91-478e-99ea-351277ca347e\") " pod="openstack/ovn-controller-metrics-lc7pq" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.860721 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82c41981-5d91-478e-99ea-351277ca347e-config\") pod \"ovn-controller-metrics-lc7pq\" (UID: \"82c41981-5d91-478e-99ea-351277ca347e\") " pod="openstack/ovn-controller-metrics-lc7pq" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.860759 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82c41981-5d91-478e-99ea-351277ca347e-combined-ca-bundle\") pod \"ovn-controller-metrics-lc7pq\" (UID: \"82c41981-5d91-478e-99ea-351277ca347e\") " pod="openstack/ovn-controller-metrics-lc7pq" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.873500 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-644597f84c-cd8jm"] Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.898039 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.899629 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.905266 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-l84bj" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.905480 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.914269 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-545fb8c44f-6nblh"] Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.915498 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-545fb8c44f-6nblh" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.919091 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.919317 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.927995 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.930894 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.940234 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-545fb8c44f-6nblh"] Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.962339 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0cc21a8-3016-4d64-9264-c153cf77e9a6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a0cc21a8-3016-4d64-9264-c153cf77e9a6\") " pod="openstack/ovsdbserver-nb-0" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.962426 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s8r9\" (UniqueName: \"kubernetes.io/projected/82c41981-5d91-478e-99ea-351277ca347e-kube-api-access-9s8r9\") pod \"ovn-controller-metrics-lc7pq\" (UID: \"82c41981-5d91-478e-99ea-351277ca347e\") " pod="openstack/ovn-controller-metrics-lc7pq" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.962469 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0cc21a8-3016-4d64-9264-c153cf77e9a6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a0cc21a8-3016-4d64-9264-c153cf77e9a6\") " pod="openstack/ovsdbserver-nb-0" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.962506 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/82c41981-5d91-478e-99ea-351277ca347e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lc7pq\" (UID: \"82c41981-5d91-478e-99ea-351277ca347e\") " pod="openstack/ovn-controller-metrics-lc7pq" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.962535 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/82c41981-5d91-478e-99ea-351277ca347e-ovn-rundir\") pod \"ovn-controller-metrics-lc7pq\" (UID: \"82c41981-5d91-478e-99ea-351277ca347e\") " pod="openstack/ovn-controller-metrics-lc7pq" Oct 11 07:54:38 
crc kubenswrapper[5016]: I1011 07:54:38.962562 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82c41981-5d91-478e-99ea-351277ca347e-config\") pod \"ovn-controller-metrics-lc7pq\" (UID: \"82c41981-5d91-478e-99ea-351277ca347e\") " pod="openstack/ovn-controller-metrics-lc7pq" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.962584 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0cc21a8-3016-4d64-9264-c153cf77e9a6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a0cc21a8-3016-4d64-9264-c153cf77e9a6\") " pod="openstack/ovsdbserver-nb-0" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.962605 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkfbp\" (UniqueName: \"kubernetes.io/projected/a0cc21a8-3016-4d64-9264-c153cf77e9a6-kube-api-access-tkfbp\") pod \"ovsdbserver-nb-0\" (UID: \"a0cc21a8-3016-4d64-9264-c153cf77e9a6\") " pod="openstack/ovsdbserver-nb-0" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.962624 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0cc21a8-3016-4d64-9264-c153cf77e9a6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a0cc21a8-3016-4d64-9264-c153cf77e9a6\") " pod="openstack/ovsdbserver-nb-0" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.962644 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82c41981-5d91-478e-99ea-351277ca347e-combined-ca-bundle\") pod \"ovn-controller-metrics-lc7pq\" (UID: \"82c41981-5d91-478e-99ea-351277ca347e\") " pod="openstack/ovn-controller-metrics-lc7pq" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.962745 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a0cc21a8-3016-4d64-9264-c153cf77e9a6\") " pod="openstack/ovsdbserver-nb-0" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.962779 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0cc21a8-3016-4d64-9264-c153cf77e9a6-config\") pod \"ovsdbserver-nb-0\" (UID: \"a0cc21a8-3016-4d64-9264-c153cf77e9a6\") " pod="openstack/ovsdbserver-nb-0" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.962815 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a0cc21a8-3016-4d64-9264-c153cf77e9a6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a0cc21a8-3016-4d64-9264-c153cf77e9a6\") " pod="openstack/ovsdbserver-nb-0" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.962840 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/82c41981-5d91-478e-99ea-351277ca347e-ovs-rundir\") pod \"ovn-controller-metrics-lc7pq\" (UID: \"82c41981-5d91-478e-99ea-351277ca347e\") " pod="openstack/ovn-controller-metrics-lc7pq" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.963128 5016 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/82c41981-5d91-478e-99ea-351277ca347e-ovs-rundir\") pod \"ovn-controller-metrics-lc7pq\" (UID: \"82c41981-5d91-478e-99ea-351277ca347e\") " pod="openstack/ovn-controller-metrics-lc7pq" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.964056 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/82c41981-5d91-478e-99ea-351277ca347e-ovn-rundir\") pod \"ovn-controller-metrics-lc7pq\" (UID: \"82c41981-5d91-478e-99ea-351277ca347e\") " pod="openstack/ovn-controller-metrics-lc7pq" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.964590 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82c41981-5d91-478e-99ea-351277ca347e-config\") pod \"ovn-controller-metrics-lc7pq\" (UID: \"82c41981-5d91-478e-99ea-351277ca347e\") " pod="openstack/ovn-controller-metrics-lc7pq" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.968040 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/82c41981-5d91-478e-99ea-351277ca347e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-lc7pq\" (UID: \"82c41981-5d91-478e-99ea-351277ca347e\") " pod="openstack/ovn-controller-metrics-lc7pq" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.972843 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82c41981-5d91-478e-99ea-351277ca347e-combined-ca-bundle\") pod \"ovn-controller-metrics-lc7pq\" (UID: \"82c41981-5d91-478e-99ea-351277ca347e\") " pod="openstack/ovn-controller-metrics-lc7pq" Oct 11 07:54:38 crc kubenswrapper[5016]: I1011 07:54:38.988063 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s8r9\" (UniqueName: \"kubernetes.io/projected/82c41981-5d91-478e-99ea-351277ca347e-kube-api-access-9s8r9\") pod \"ovn-controller-metrics-lc7pq\" (UID: \"82c41981-5d91-478e-99ea-351277ca347e\") " pod="openstack/ovn-controller-metrics-lc7pq" Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.023119 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bfcb9d745-9nf4f" Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.046563 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-758b79db4c-9b5sb" Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.077494 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-lc7pq" Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.077605 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94aa76b7-b531-4632-b315-ce40f9a54e06-config\") pod \"94aa76b7-b531-4632-b315-ce40f9a54e06\" (UID: \"94aa76b7-b531-4632-b315-ce40f9a54e06\") " Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.077680 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45kq2\" (UniqueName: \"kubernetes.io/projected/94aa76b7-b531-4632-b315-ce40f9a54e06-kube-api-access-45kq2\") pod \"94aa76b7-b531-4632-b315-ce40f9a54e06\" (UID: \"94aa76b7-b531-4632-b315-ce40f9a54e06\") " Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.078197 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94aa76b7-b531-4632-b315-ce40f9a54e06-config" (OuterVolumeSpecName: "config") pod "94aa76b7-b531-4632-b315-ce40f9a54e06" (UID: "94aa76b7-b531-4632-b315-ce40f9a54e06"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.078211 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e70ed217-aa33-4652-9414-93dac22e3468-config\") pod \"dnsmasq-dns-545fb8c44f-6nblh\" (UID: \"e70ed217-aa33-4652-9414-93dac22e3468\") " pod="openstack/dnsmasq-dns-545fb8c44f-6nblh" Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.078245 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0cc21a8-3016-4d64-9264-c153cf77e9a6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a0cc21a8-3016-4d64-9264-c153cf77e9a6\") " pod="openstack/ovsdbserver-nb-0" Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.078273 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgs94\" (UniqueName: \"kubernetes.io/projected/e70ed217-aa33-4652-9414-93dac22e3468-kube-api-access-pgs94\") pod \"dnsmasq-dns-545fb8c44f-6nblh\" (UID: \"e70ed217-aa33-4652-9414-93dac22e3468\") " pod="openstack/dnsmasq-dns-545fb8c44f-6nblh" Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.078300 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkfbp\" (UniqueName: \"kubernetes.io/projected/a0cc21a8-3016-4d64-9264-c153cf77e9a6-kube-api-access-tkfbp\") pod \"ovsdbserver-nb-0\" (UID: \"a0cc21a8-3016-4d64-9264-c153cf77e9a6\") " pod="openstack/ovsdbserver-nb-0" Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.078331 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0cc21a8-3016-4d64-9264-c153cf77e9a6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a0cc21a8-3016-4d64-9264-c153cf77e9a6\") " pod="openstack/ovsdbserver-nb-0" Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.078397 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a0cc21a8-3016-4d64-9264-c153cf77e9a6\") " pod="openstack/ovsdbserver-nb-0" Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.078426 5016 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e70ed217-aa33-4652-9414-93dac22e3468-dns-svc\") pod \"dnsmasq-dns-545fb8c44f-6nblh\" (UID: \"e70ed217-aa33-4652-9414-93dac22e3468\") " pod="openstack/dnsmasq-dns-545fb8c44f-6nblh"
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.078466 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0cc21a8-3016-4d64-9264-c153cf77e9a6-config\") pod \"ovsdbserver-nb-0\" (UID: \"a0cc21a8-3016-4d64-9264-c153cf77e9a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.078508 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a0cc21a8-3016-4d64-9264-c153cf77e9a6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a0cc21a8-3016-4d64-9264-c153cf77e9a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.078535 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0cc21a8-3016-4d64-9264-c153cf77e9a6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a0cc21a8-3016-4d64-9264-c153cf77e9a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.078570 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e70ed217-aa33-4652-9414-93dac22e3468-ovsdbserver-sb\") pod \"dnsmasq-dns-545fb8c44f-6nblh\" (UID: \"e70ed217-aa33-4652-9414-93dac22e3468\") " pod="openstack/dnsmasq-dns-545fb8c44f-6nblh"
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.078597 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0cc21a8-3016-4d64-9264-c153cf77e9a6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a0cc21a8-3016-4d64-9264-c153cf77e9a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.078693 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94aa76b7-b531-4632-b315-ce40f9a54e06-config\") on node \"crc\" DevicePath \"\""
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.079126 5016 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a0cc21a8-3016-4d64-9264-c153cf77e9a6\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0"
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.079391 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a0cc21a8-3016-4d64-9264-c153cf77e9a6-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a0cc21a8-3016-4d64-9264-c153cf77e9a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.079919 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a0cc21a8-3016-4d64-9264-c153cf77e9a6-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a0cc21a8-3016-4d64-9264-c153cf77e9a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.080069 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0cc21a8-3016-4d64-9264-c153cf77e9a6-config\") pod \"ovsdbserver-nb-0\" (UID: \"a0cc21a8-3016-4d64-9264-c153cf77e9a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.086644 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94aa76b7-b531-4632-b315-ce40f9a54e06-kube-api-access-45kq2" (OuterVolumeSpecName: "kube-api-access-45kq2") pod "94aa76b7-b531-4632-b315-ce40f9a54e06" (UID: "94aa76b7-b531-4632-b315-ce40f9a54e06"). InnerVolumeSpecName "kube-api-access-45kq2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.087280 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0cc21a8-3016-4d64-9264-c153cf77e9a6-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a0cc21a8-3016-4d64-9264-c153cf77e9a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.092291 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0cc21a8-3016-4d64-9264-c153cf77e9a6-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a0cc21a8-3016-4d64-9264-c153cf77e9a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.095870 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0cc21a8-3016-4d64-9264-c153cf77e9a6-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a0cc21a8-3016-4d64-9264-c153cf77e9a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.100586 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkfbp\" (UniqueName: \"kubernetes.io/projected/a0cc21a8-3016-4d64-9264-c153cf77e9a6-kube-api-access-tkfbp\") pod \"ovsdbserver-nb-0\" (UID: \"a0cc21a8-3016-4d64-9264-c153cf77e9a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.114814 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a0cc21a8-3016-4d64-9264-c153cf77e9a6\") " pod="openstack/ovsdbserver-nb-0"
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.179564 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/089d898a-11ca-4986-8dee-2efa6b4dd050-config\") pod \"089d898a-11ca-4986-8dee-2efa6b4dd050\" (UID: \"089d898a-11ca-4986-8dee-2efa6b4dd050\") "
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.179921 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/089d898a-11ca-4986-8dee-2efa6b4dd050-dns-svc\") pod \"089d898a-11ca-4986-8dee-2efa6b4dd050\" (UID: \"089d898a-11ca-4986-8dee-2efa6b4dd050\") "
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.179972 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gszlm\" (UniqueName: \"kubernetes.io/projected/089d898a-11ca-4986-8dee-2efa6b4dd050-kube-api-access-gszlm\") pod \"089d898a-11ca-4986-8dee-2efa6b4dd050\" (UID: \"089d898a-11ca-4986-8dee-2efa6b4dd050\") "
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.180022 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/089d898a-11ca-4986-8dee-2efa6b4dd050-config" (OuterVolumeSpecName: "config") pod "089d898a-11ca-4986-8dee-2efa6b4dd050" (UID: "089d898a-11ca-4986-8dee-2efa6b4dd050"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.180349 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e70ed217-aa33-4652-9414-93dac22e3468-ovsdbserver-sb\") pod \"dnsmasq-dns-545fb8c44f-6nblh\" (UID: \"e70ed217-aa33-4652-9414-93dac22e3468\") " pod="openstack/dnsmasq-dns-545fb8c44f-6nblh"
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.180446 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e70ed217-aa33-4652-9414-93dac22e3468-config\") pod \"dnsmasq-dns-545fb8c44f-6nblh\" (UID: \"e70ed217-aa33-4652-9414-93dac22e3468\") " pod="openstack/dnsmasq-dns-545fb8c44f-6nblh"
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.180491 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgs94\" (UniqueName: \"kubernetes.io/projected/e70ed217-aa33-4652-9414-93dac22e3468-kube-api-access-pgs94\") pod \"dnsmasq-dns-545fb8c44f-6nblh\" (UID: \"e70ed217-aa33-4652-9414-93dac22e3468\") " pod="openstack/dnsmasq-dns-545fb8c44f-6nblh"
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.180565 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e70ed217-aa33-4652-9414-93dac22e3468-dns-svc\") pod \"dnsmasq-dns-545fb8c44f-6nblh\" (UID: \"e70ed217-aa33-4652-9414-93dac22e3468\") " pod="openstack/dnsmasq-dns-545fb8c44f-6nblh"
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.180778 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45kq2\" (UniqueName: \"kubernetes.io/projected/94aa76b7-b531-4632-b315-ce40f9a54e06-kube-api-access-45kq2\") on node \"crc\" DevicePath \"\""
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.180814 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/089d898a-11ca-4986-8dee-2efa6b4dd050-config\") on node \"crc\" DevicePath \"\""
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.181318 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/089d898a-11ca-4986-8dee-2efa6b4dd050-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "089d898a-11ca-4986-8dee-2efa6b4dd050" (UID: "089d898a-11ca-4986-8dee-2efa6b4dd050"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.182257 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e70ed217-aa33-4652-9414-93dac22e3468-dns-svc\") pod \"dnsmasq-dns-545fb8c44f-6nblh\" (UID: \"e70ed217-aa33-4652-9414-93dac22e3468\") " pod="openstack/dnsmasq-dns-545fb8c44f-6nblh"
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.182474 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e70ed217-aa33-4652-9414-93dac22e3468-config\") pod \"dnsmasq-dns-545fb8c44f-6nblh\" (UID: \"e70ed217-aa33-4652-9414-93dac22e3468\") " pod="openstack/dnsmasq-dns-545fb8c44f-6nblh"
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.184259 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e70ed217-aa33-4652-9414-93dac22e3468-ovsdbserver-sb\") pod \"dnsmasq-dns-545fb8c44f-6nblh\" (UID: \"e70ed217-aa33-4652-9414-93dac22e3468\") " pod="openstack/dnsmasq-dns-545fb8c44f-6nblh"
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.186054 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/089d898a-11ca-4986-8dee-2efa6b4dd050-kube-api-access-gszlm" (OuterVolumeSpecName: "kube-api-access-gszlm") pod "089d898a-11ca-4986-8dee-2efa6b4dd050" (UID: "089d898a-11ca-4986-8dee-2efa6b4dd050"). InnerVolumeSpecName "kube-api-access-gszlm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.203100 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgs94\" (UniqueName: \"kubernetes.io/projected/e70ed217-aa33-4652-9414-93dac22e3468-kube-api-access-pgs94\") pod \"dnsmasq-dns-545fb8c44f-6nblh\" (UID: \"e70ed217-aa33-4652-9414-93dac22e3468\") " pod="openstack/dnsmasq-dns-545fb8c44f-6nblh"
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.282613 5016 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/089d898a-11ca-4986-8dee-2efa6b4dd050-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.282644 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gszlm\" (UniqueName: \"kubernetes.io/projected/089d898a-11ca-4986-8dee-2efa6b4dd050-kube-api-access-gszlm\") on node \"crc\" DevicePath \"\""
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.332225 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.345240 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-545fb8c44f-6nblh"
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.471789 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77597f887-m66hn" event={"ID":"d7f492c7-31ee-406b-ad5e-9f6b8db63af0","Type":"ContainerStarted","Data":"645c74cd4d074f2068ec7e178fb1cf4d4773467650584d85382bef4d15b5a11b"}
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.471898 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77597f887-m66hn"
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.477076 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bfcb9d745-9nf4f"
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.477127 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bfcb9d745-9nf4f" event={"ID":"94aa76b7-b531-4632-b315-ce40f9a54e06","Type":"ContainerDied","Data":"89d629322a42900dc258666ed4a88a69e5548ff3a58b698606af795694b7cbea"}
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.478330 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-w5nkt" event={"ID":"38a1ec3b-0cfb-4fdf-bcba-a434cf65a726","Type":"ContainerStarted","Data":"5c70ebb166c06d8944c54556363456958512d77fb428e2da2b821936c10cb6a3"}
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.487720 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6811d5d2-c174-41f6-a397-0bc4133297e9","Type":"ContainerStarted","Data":"949285df232b96e168affee998854a1a55a066ce1bcb4fe699ec8e090e8f2b8d"}
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.512086 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77597f887-m66hn" podStartSLOduration=13.51207026 podStartE2EDuration="13.51207026s" podCreationTimestamp="2025-10-11 07:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:54:39.511369637 +0000 UTC m=+867.411825583" watchObservedRunningTime="2025-10-11 07:54:39.51207026 +0000 UTC m=+867.412526196"
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.532946 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"775dfe20-0fff-42b7-863a-76e8deb52526","Type":"ContainerStarted","Data":"1c730df541dba56230288d68e0aa9c8a864831b1e038c6ad372ae528f17c3130"}
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.537059 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-758b79db4c-9b5sb"
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.537236 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-758b79db4c-9b5sb" event={"ID":"089d898a-11ca-4986-8dee-2efa6b4dd050","Type":"ContainerDied","Data":"19505dc4dfd53ce1bf0331c26fb66e5de3d2c33ccb599d1dcd55a94d68f6482c"}
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.562636 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bfcb9d745-9nf4f"]
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.569808 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bfcb9d745-9nf4f"]
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.597257 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-758b79db4c-9b5sb"]
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.602038 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-758b79db4c-9b5sb"]
Oct 11 07:54:39 crc kubenswrapper[5016]: I1011 07:54:39.674105 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-lc7pq"]
Oct 11 07:54:40 crc kubenswrapper[5016]: I1011 07:54:40.230039 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-545fb8c44f-6nblh"]
Oct 11 07:54:40 crc kubenswrapper[5016]: I1011 07:54:40.549760 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-lc7pq" event={"ID":"82c41981-5d91-478e-99ea-351277ca347e","Type":"ContainerStarted","Data":"84fe1d570d343c324d91e291ae028e3ff0a5def9c2c8b7a2262624974fc0d3c8"}
Oct 11 07:54:40 crc kubenswrapper[5016]: I1011 07:54:40.678176 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Oct 11 07:54:41 crc kubenswrapper[5016]: I1011 07:54:41.141917 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="089d898a-11ca-4986-8dee-2efa6b4dd050" path="/var/lib/kubelet/pods/089d898a-11ca-4986-8dee-2efa6b4dd050/volumes"
Oct 11 07:54:41 crc kubenswrapper[5016]: I1011 07:54:41.142527 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94aa76b7-b531-4632-b315-ce40f9a54e06" path="/var/lib/kubelet/pods/94aa76b7-b531-4632-b315-ce40f9a54e06/volumes"
Oct 11 07:54:41 crc kubenswrapper[5016]: I1011 07:54:41.566221 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545fb8c44f-6nblh" event={"ID":"e70ed217-aa33-4652-9414-93dac22e3468","Type":"ContainerStarted","Data":"a5fdecd5af13ada8f3d9bcd977e75148395a27ebcd97988d19a1a394c094fedd"}
Oct 11 07:54:42 crc kubenswrapper[5016]: I1011 07:54:42.575610 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a0cc21a8-3016-4d64-9264-c153cf77e9a6","Type":"ContainerStarted","Data":"28ede9f436c851fd425c917452bbc55dd32957bcb7e7ca3bd42f72df18dca39a"}
Oct 11 07:54:46 crc kubenswrapper[5016]: I1011 07:54:46.465888 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77597f887-m66hn"
Oct 11 07:54:49 crc kubenswrapper[5016]: I1011 07:54:49.635225 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-lc7pq" event={"ID":"82c41981-5d91-478e-99ea-351277ca347e","Type":"ContainerStarted","Data":"b68b316c00f11b543493632d1af7af14d2b78eb00ade01fbc09d4e0b9594e780"}
Oct 11 07:54:49 crc kubenswrapper[5016]: I1011 07:54:49.637304 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-644597f84c-cd8jm" event={"ID":"8c1afbf6-5654-4f6f-b4c3-70fc041b803a","Type":"ContainerStarted","Data":"ae62f9c23efccfd6e53324c851e691274d1f4be9bc1c65125c834f3967c138be"}
Oct 11 07:54:49 crc kubenswrapper[5016]: I1011 07:54:49.637406 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-644597f84c-cd8jm" podUID="8c1afbf6-5654-4f6f-b4c3-70fc041b803a" containerName="dnsmasq-dns" containerID="cri-o://ae62f9c23efccfd6e53324c851e691274d1f4be9bc1c65125c834f3967c138be" gracePeriod=10
Oct 11 07:54:49 crc kubenswrapper[5016]: I1011 07:54:49.637477 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-644597f84c-cd8jm"
Oct 11 07:54:49 crc kubenswrapper[5016]: I1011 07:54:49.641062 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6811d5d2-c174-41f6-a397-0bc4133297e9","Type":"ContainerStarted","Data":"b8659150a44e43f26a9462cc985e8e8dab19ce8d7ad1b2d76e84f67d82fc549d"}
Oct 11 07:54:49 crc kubenswrapper[5016]: I1011 07:54:49.641178 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Oct 11 07:54:49 crc kubenswrapper[5016]: I1011 07:54:49.646823 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-db7s5" event={"ID":"69f3f361-bd63-4b18-afd7-3c64169af0a8","Type":"ContainerStarted","Data":"bbfffb6bdc15ffb446b251e0287cb1ebc781ead3ae14235b1876a07821495ec2"}
Oct 11 07:54:49 crc kubenswrapper[5016]: I1011 07:54:49.647326 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-db7s5"
Oct 11 07:54:49 crc kubenswrapper[5016]: I1011 07:54:49.650624 5016 generic.go:334] "Generic (PLEG): container finished" podID="e70ed217-aa33-4652-9414-93dac22e3468" containerID="7448b5861f09900368f7cc5d232b2ffb27c9aff61f6f443100b926ae60c77621" exitCode=0
Oct 11 07:54:49 crc kubenswrapper[5016]: I1011 07:54:49.650673 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545fb8c44f-6nblh" event={"ID":"e70ed217-aa33-4652-9414-93dac22e3468","Type":"ContainerDied","Data":"7448b5861f09900368f7cc5d232b2ffb27c9aff61f6f443100b926ae60c77621"}
Oct 11 07:54:49 crc kubenswrapper[5016]: I1011 07:54:49.653061 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-lc7pq" podStartSLOduration=3.31230252 podStartE2EDuration="11.653049853s" podCreationTimestamp="2025-10-11 07:54:38 +0000 UTC" firstStartedPulling="2025-10-11 07:54:40.180563297 +0000 UTC m=+868.081019243" lastFinishedPulling="2025-10-11 07:54:48.52131063 +0000 UTC m=+876.421766576" observedRunningTime="2025-10-11 07:54:49.651464754 +0000 UTC m=+877.551920690" watchObservedRunningTime="2025-10-11 07:54:49.653049853 +0000 UTC m=+877.553505799"
Oct 11 07:54:49 crc kubenswrapper[5016]: I1011 07:54:49.659676 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d360ab05-372a-4b41-8abb-2c2b4257123c","Type":"ContainerStarted","Data":"bd5abb83dd30ddd5271a1a2076727e9f28366092b890a3b8fce1cad1cbaa4b66"}
Oct 11 07:54:49 crc kubenswrapper[5016]: I1011 07:54:49.659822 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Oct 11 07:54:49 crc kubenswrapper[5016]: I1011 07:54:49.672420 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-db7s5" podStartSLOduration=4.648115564 podStartE2EDuration="14.672403209s" podCreationTimestamp="2025-10-11 07:54:35 +0000 UTC" firstStartedPulling="2025-10-11 07:54:38.38100185 +0000 UTC m=+866.281457796" lastFinishedPulling="2025-10-11 07:54:48.405289495 +0000 UTC m=+876.305745441" observedRunningTime="2025-10-11 07:54:49.671046295 +0000 UTC m=+877.571502241" watchObservedRunningTime="2025-10-11 07:54:49.672403209 +0000 UTC m=+877.572859155"
Oct 11 07:54:49 crc kubenswrapper[5016]: I1011 07:54:49.678026 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a0cc21a8-3016-4d64-9264-c153cf77e9a6","Type":"ContainerStarted","Data":"7a7e1325a4943262a8fcb3561c7b5a0af91a77e77a9dc2e8b6724065f0d3d4fa"}
Oct 11 07:54:49 crc kubenswrapper[5016]: I1011 07:54:49.681121 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-w5nkt" event={"ID":"38a1ec3b-0cfb-4fdf-bcba-a434cf65a726","Type":"ContainerStarted","Data":"52af849e2f3cc468a4f8b898b1726b18c823d17df104d69ecc62eebc78ed06a6"}
Oct 11 07:54:49 crc kubenswrapper[5016]: I1011 07:54:49.683775 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9d275b2c-beec-4696-a60f-6a31245767bb","Type":"ContainerStarted","Data":"e5da9a4e575bc29924fc443175804e754378b074ceb2d63d4be809cf9edfb6ae"}
Oct 11 07:54:49 crc kubenswrapper[5016]: I1011 07:54:49.704716 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"775dfe20-0fff-42b7-863a-76e8deb52526","Type":"ContainerStarted","Data":"9682f93b29952cd4af5565e6e764220ad01643bc7850117b5527a467d192e12c"}
Oct 11 07:54:49 crc kubenswrapper[5016]: I1011 07:54:49.704755 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"775dfe20-0fff-42b7-863a-76e8deb52526","Type":"ContainerStarted","Data":"5ccba47502606ea2cb386491a602ebb90cec4c1334f058bee9605e9ada3a3cff"}
Oct 11 07:54:49 crc kubenswrapper[5016]: I1011 07:54:49.713040 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"eee46d88-d3cf-428a-9808-f9bef1f292b7","Type":"ContainerStarted","Data":"cbbecf860b23866d19eeac9e1452af3cdab7d63ce7f0b5a04193dd342cfcd451"}
Oct 11 07:54:49 crc kubenswrapper[5016]: I1011 07:54:49.713411 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-644597f84c-cd8jm" podStartSLOduration=24.238787524 podStartE2EDuration="24.713390813s" podCreationTimestamp="2025-10-11 07:54:25 +0000 UTC" firstStartedPulling="2025-10-11 07:54:37.444677432 +0000 UTC m=+865.345133388" lastFinishedPulling="2025-10-11 07:54:37.919280731 +0000 UTC m=+865.819736677" observedRunningTime="2025-10-11 07:54:49.69927519 +0000 UTC m=+877.599731146" watchObservedRunningTime="2025-10-11 07:54:49.713390813 +0000 UTC m=+877.613846749"
Oct 11 07:54:49 crc kubenswrapper[5016]: I1011 07:54:49.731118 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=10.848127036 podStartE2EDuration="19.731093119s" podCreationTimestamp="2025-10-11 07:54:30 +0000 UTC" firstStartedPulling="2025-10-11 07:54:38.44200351 +0000 UTC m=+866.342459456" lastFinishedPulling="2025-10-11 07:54:47.324969593 +0000 UTC m=+875.225425539" observedRunningTime="2025-10-11 07:54:49.728748678 +0000 UTC m=+877.629204634" watchObservedRunningTime="2025-10-11 07:54:49.731093119 +0000 UTC m=+877.631549065"
Oct 11 07:54:49 crc kubenswrapper[5016]: I1011 07:54:49.790718 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=7.617225979 podStartE2EDuration="17.790698155s" podCreationTimestamp="2025-10-11 07:54:32 +0000 UTC" firstStartedPulling="2025-10-11 07:54:38.423896706 +0000 UTC m=+866.324352652" lastFinishedPulling="2025-10-11 07:54:48.597368882 +0000 UTC m=+876.497824828" observedRunningTime="2025-10-11 07:54:49.78988991 +0000 UTC m=+877.690345856" watchObservedRunningTime="2025-10-11 07:54:49.790698155 +0000 UTC m=+877.691154101"
Oct 11 07:54:49 crc kubenswrapper[5016]: I1011 07:54:49.868756 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=5.360323392 podStartE2EDuration="14.868713711s" podCreationTimestamp="2025-10-11 07:54:35 +0000 UTC" firstStartedPulling="2025-10-11 07:54:38.515852891 +0000 UTC m=+866.416308837" lastFinishedPulling="2025-10-11 07:54:48.02424321 +0000 UTC m=+875.924699156" observedRunningTime="2025-10-11 07:54:49.852679294 +0000 UTC m=+877.753135240" watchObservedRunningTime="2025-10-11 07:54:49.868713711 +0000 UTC m=+877.769169667"
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.146360 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-644597f84c-cd8jm"
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.269047 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1afbf6-5654-4f6f-b4c3-70fc041b803a-config\") pod \"8c1afbf6-5654-4f6f-b4c3-70fc041b803a\" (UID: \"8c1afbf6-5654-4f6f-b4c3-70fc041b803a\") "
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.269226 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx5mv\" (UniqueName: \"kubernetes.io/projected/8c1afbf6-5654-4f6f-b4c3-70fc041b803a-kube-api-access-jx5mv\") pod \"8c1afbf6-5654-4f6f-b4c3-70fc041b803a\" (UID: \"8c1afbf6-5654-4f6f-b4c3-70fc041b803a\") "
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.269327 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c1afbf6-5654-4f6f-b4c3-70fc041b803a-dns-svc\") pod \"8c1afbf6-5654-4f6f-b4c3-70fc041b803a\" (UID: \"8c1afbf6-5654-4f6f-b4c3-70fc041b803a\") "
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.282212 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c1afbf6-5654-4f6f-b4c3-70fc041b803a-kube-api-access-jx5mv" (OuterVolumeSpecName: "kube-api-access-jx5mv") pod "8c1afbf6-5654-4f6f-b4c3-70fc041b803a" (UID: "8c1afbf6-5654-4f6f-b4c3-70fc041b803a"). InnerVolumeSpecName "kube-api-access-jx5mv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.313301 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-545fb8c44f-6nblh"]
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.319282 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c1afbf6-5654-4f6f-b4c3-70fc041b803a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8c1afbf6-5654-4f6f-b4c3-70fc041b803a" (UID: "8c1afbf6-5654-4f6f-b4c3-70fc041b803a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.335432 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c1afbf6-5654-4f6f-b4c3-70fc041b803a-config" (OuterVolumeSpecName: "config") pod "8c1afbf6-5654-4f6f-b4c3-70fc041b803a" (UID: "8c1afbf6-5654-4f6f-b4c3-70fc041b803a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.368082 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dc9d58d7-pp672"]
Oct 11 07:54:50 crc kubenswrapper[5016]: E1011 07:54:50.368470 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c1afbf6-5654-4f6f-b4c3-70fc041b803a" containerName="dnsmasq-dns"
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.368486 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c1afbf6-5654-4f6f-b4c3-70fc041b803a" containerName="dnsmasq-dns"
Oct 11 07:54:50 crc kubenswrapper[5016]: E1011 07:54:50.368494 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c1afbf6-5654-4f6f-b4c3-70fc041b803a" containerName="init"
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.368501 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c1afbf6-5654-4f6f-b4c3-70fc041b803a" containerName="init"
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.368729 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c1afbf6-5654-4f6f-b4c3-70fc041b803a" containerName="dnsmasq-dns"
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.370602 5016 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c1afbf6-5654-4f6f-b4c3-70fc041b803a-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.370624 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1afbf6-5654-4f6f-b4c3-70fc041b803a-config\") on node \"crc\" DevicePath \"\""
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.370634 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jx5mv\" (UniqueName: \"kubernetes.io/projected/8c1afbf6-5654-4f6f-b4c3-70fc041b803a-kube-api-access-jx5mv\") on node \"crc\" DevicePath \"\""
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.370726 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc9d58d7-pp672"
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.379839 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.388019 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc9d58d7-pp672"]
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.471934 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72nj4\" (UniqueName: \"kubernetes.io/projected/c213bf50-5935-48bb-be54-2e1396bc6e06-kube-api-access-72nj4\") pod \"dnsmasq-dns-dc9d58d7-pp672\" (UID: \"c213bf50-5935-48bb-be54-2e1396bc6e06\") " pod="openstack/dnsmasq-dns-dc9d58d7-pp672"
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.472580 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c213bf50-5935-48bb-be54-2e1396bc6e06-ovsdbserver-nb\") pod \"dnsmasq-dns-dc9d58d7-pp672\" (UID: \"c213bf50-5935-48bb-be54-2e1396bc6e06\") " pod="openstack/dnsmasq-dns-dc9d58d7-pp672"
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.472628 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c213bf50-5935-48bb-be54-2e1396bc6e06-ovsdbserver-sb\") pod \"dnsmasq-dns-dc9d58d7-pp672\" (UID: \"c213bf50-5935-48bb-be54-2e1396bc6e06\") " pod="openstack/dnsmasq-dns-dc9d58d7-pp672"
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.472736 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c213bf50-5935-48bb-be54-2e1396bc6e06-dns-svc\") pod \"dnsmasq-dns-dc9d58d7-pp672\" (UID: \"c213bf50-5935-48bb-be54-2e1396bc6e06\") " pod="openstack/dnsmasq-dns-dc9d58d7-pp672"
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.472825 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c213bf50-5935-48bb-be54-2e1396bc6e06-config\") pod \"dnsmasq-dns-dc9d58d7-pp672\" (UID: \"c213bf50-5935-48bb-be54-2e1396bc6e06\") " pod="openstack/dnsmasq-dns-dc9d58d7-pp672"
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.574159 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c213bf50-5935-48bb-be54-2e1396bc6e06-ovsdbserver-sb\") pod \"dnsmasq-dns-dc9d58d7-pp672\" (UID: \"c213bf50-5935-48bb-be54-2e1396bc6e06\") " pod="openstack/dnsmasq-dns-dc9d58d7-pp672"
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.574286 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c213bf50-5935-48bb-be54-2e1396bc6e06-dns-svc\") pod \"dnsmasq-dns-dc9d58d7-pp672\" (UID: \"c213bf50-5935-48bb-be54-2e1396bc6e06\") " pod="openstack/dnsmasq-dns-dc9d58d7-pp672"
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.574306 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c213bf50-5935-48bb-be54-2e1396bc6e06-config\") pod \"dnsmasq-dns-dc9d58d7-pp672\" (UID: \"c213bf50-5935-48bb-be54-2e1396bc6e06\") " pod="openstack/dnsmasq-dns-dc9d58d7-pp672"
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.574345 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72nj4\" (UniqueName: \"kubernetes.io/projected/c213bf50-5935-48bb-be54-2e1396bc6e06-kube-api-access-72nj4\") pod \"dnsmasq-dns-dc9d58d7-pp672\" (UID: \"c213bf50-5935-48bb-be54-2e1396bc6e06\") " pod="openstack/dnsmasq-dns-dc9d58d7-pp672"
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.574363 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c213bf50-5935-48bb-be54-2e1396bc6e06-ovsdbserver-nb\") pod \"dnsmasq-dns-dc9d58d7-pp672\" (UID: \"c213bf50-5935-48bb-be54-2e1396bc6e06\") " pod="openstack/dnsmasq-dns-dc9d58d7-pp672"
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.575080 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c213bf50-5935-48bb-be54-2e1396bc6e06-ovsdbserver-nb\") pod \"dnsmasq-dns-dc9d58d7-pp672\" (UID: \"c213bf50-5935-48bb-be54-2e1396bc6e06\") " pod="openstack/dnsmasq-dns-dc9d58d7-pp672"
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.575129 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c213bf50-5935-48bb-be54-2e1396bc6e06-ovsdbserver-sb\") pod \"dnsmasq-dns-dc9d58d7-pp672\" (UID: \"c213bf50-5935-48bb-be54-2e1396bc6e06\") " pod="openstack/dnsmasq-dns-dc9d58d7-pp672"
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.575153 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c213bf50-5935-48bb-be54-2e1396bc6e06-dns-svc\") pod \"dnsmasq-dns-dc9d58d7-pp672\" (UID: \"c213bf50-5935-48bb-be54-2e1396bc6e06\") " pod="openstack/dnsmasq-dns-dc9d58d7-pp672"
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.575694 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c213bf50-5935-48bb-be54-2e1396bc6e06-config\") pod \"dnsmasq-dns-dc9d58d7-pp672\" (UID: \"c213bf50-5935-48bb-be54-2e1396bc6e06\") " pod="openstack/dnsmasq-dns-dc9d58d7-pp672"
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.596351 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72nj4\" (UniqueName: \"kubernetes.io/projected/c213bf50-5935-48bb-be54-2e1396bc6e06-kube-api-access-72nj4\") pod \"dnsmasq-dns-dc9d58d7-pp672\" (UID: \"c213bf50-5935-48bb-be54-2e1396bc6e06\") " pod="openstack/dnsmasq-dns-dc9d58d7-pp672"
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.721628 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"67a018eb-911e-4491-9dae-a1dfb3172e05","Type":"ContainerStarted","Data":"eb076986b74562535eb3c3836b33202cc4ddaa78a14b45090fb5fe3aaa857fad"}
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.723114 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc9d58d7-pp672"
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.735285 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bae29196-1d16-4563-9e7d-0981a96a352f","Type":"ContainerStarted","Data":"3d68943f406a1d1d8566b5da25fdde5e8390a80f69134b7a73a0a0027cfd3e5c"}
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.750426 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a0cc21a8-3016-4d64-9264-c153cf77e9a6","Type":"ContainerStarted","Data":"f5c4cb06d2f6976e12151eab225bf44cfb8ae3db9ce86cc84787f0fafbef553a"}
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.753402 5016 generic.go:334] "Generic (PLEG): container finished" podID="38a1ec3b-0cfb-4fdf-bcba-a434cf65a726" containerID="52af849e2f3cc468a4f8b898b1726b18c823d17df104d69ecc62eebc78ed06a6" exitCode=0
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.753526 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-w5nkt" event={"ID":"38a1ec3b-0cfb-4fdf-bcba-a434cf65a726","Type":"ContainerDied","Data":"52af849e2f3cc468a4f8b898b1726b18c823d17df104d69ecc62eebc78ed06a6"}
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.755501 5016 generic.go:334] "Generic (PLEG): container finished" podID="8c1afbf6-5654-4f6f-b4c3-70fc041b803a" containerID="ae62f9c23efccfd6e53324c851e691274d1f4be9bc1c65125c834f3967c138be" exitCode=0
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.755572 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-644597f84c-cd8jm"
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.755590 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-644597f84c-cd8jm" event={"ID":"8c1afbf6-5654-4f6f-b4c3-70fc041b803a","Type":"ContainerDied","Data":"ae62f9c23efccfd6e53324c851e691274d1f4be9bc1c65125c834f3967c138be"}
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.755626 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-644597f84c-cd8jm" event={"ID":"8c1afbf6-5654-4f6f-b4c3-70fc041b803a","Type":"ContainerDied","Data":"b8e51aeafea717937e3d72c0b02062acf48f5cbd695522061f0664d763ac9ce5"}
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.755643 5016 scope.go:117] "RemoveContainer" containerID="ae62f9c23efccfd6e53324c851e691274d1f4be9bc1c65125c834f3967c138be"
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.758724 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545fb8c44f-6nblh" event={"ID":"e70ed217-aa33-4652-9414-93dac22e3468","Type":"ContainerStarted","Data":"b2869711adc8d3753da1d35019d45900b32a14087a53592d7ff912b9c1491d3b"}
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.758835 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-545fb8c44f-6nblh"
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.806974 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-545fb8c44f-6nblh" podStartSLOduration=12.806957752 podStartE2EDuration="12.806957752s" podCreationTimestamp="2025-10-11 07:54:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:54:50.79560847 +0000 UTC m=+878.696064416" watchObservedRunningTime="2025-10-11 07:54:50.806957752 +0000 UTC m=+878.707413698"
Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.843445 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=7.773920855 podStartE2EDuration="13.843426705s" podCreationTimestamp="2025-10-11 07:54:37 +0000 UTC" firstStartedPulling="2025-10-11 07:54:42.360889864 +0000 UTC m=+870.261345820" lastFinishedPulling="2025-10-11 07:54:48.430395724 +0000 UTC m=+876.330851670" observedRunningTime="2025-10-11 07:54:50.83704611 +0000 UTC m=+878.737502056" watchObservedRunningTime="2025-10-11 07:54:50.843426705 +0000 UTC m=+878.743882651" Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.852817 5016 scope.go:117] "RemoveContainer" containerID="aaadd4c6264caa7f2e39fcc5819b1d5df53857317b92bb55c51015a3b07bccb1" Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.896547 5016 scope.go:117] "RemoveContainer" containerID="ae62f9c23efccfd6e53324c851e691274d1f4be9bc1c65125c834f3967c138be" Oct 11 07:54:50 crc kubenswrapper[5016]: E1011 07:54:50.901064 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae62f9c23efccfd6e53324c851e691274d1f4be9bc1c65125c834f3967c138be\": container with ID starting with ae62f9c23efccfd6e53324c851e691274d1f4be9bc1c65125c834f3967c138be not found: ID does not exist" containerID="ae62f9c23efccfd6e53324c851e691274d1f4be9bc1c65125c834f3967c138be" Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.901179 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae62f9c23efccfd6e53324c851e691274d1f4be9bc1c65125c834f3967c138be"} err="failed to get container status \"ae62f9c23efccfd6e53324c851e691274d1f4be9bc1c65125c834f3967c138be\": rpc error: code = NotFound desc = could not find container \"ae62f9c23efccfd6e53324c851e691274d1f4be9bc1c65125c834f3967c138be\": container with ID starting with ae62f9c23efccfd6e53324c851e691274d1f4be9bc1c65125c834f3967c138be not found: ID does not exist" Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.901204 5016 scope.go:117] "RemoveContainer" containerID="aaadd4c6264caa7f2e39fcc5819b1d5df53857317b92bb55c51015a3b07bccb1" Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.904960 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-644597f84c-cd8jm"] Oct 11 07:54:50 crc kubenswrapper[5016]: E1011 07:54:50.915436 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaadd4c6264caa7f2e39fcc5819b1d5df53857317b92bb55c51015a3b07bccb1\": container with ID starting with aaadd4c6264caa7f2e39fcc5819b1d5df53857317b92bb55c51015a3b07bccb1 not found: ID does not exist" containerID="aaadd4c6264caa7f2e39fcc5819b1d5df53857317b92bb55c51015a3b07bccb1" Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.915750 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaadd4c6264caa7f2e39fcc5819b1d5df53857317b92bb55c51015a3b07bccb1"} err="failed to get container status \"aaadd4c6264caa7f2e39fcc5819b1d5df53857317b92bb55c51015a3b07bccb1\": rpc error: code = NotFound desc = could not find container \"aaadd4c6264caa7f2e39fcc5819b1d5df53857317b92bb55c51015a3b07bccb1\": container with ID starting with aaadd4c6264caa7f2e39fcc5819b1d5df53857317b92bb55c51015a3b07bccb1 not found: ID does not exist" Oct 11 07:54:50 crc kubenswrapper[5016]: I1011 07:54:50.943597 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-644597f84c-cd8jm"] Oct 11 07:54:51 crc kubenswrapper[5016]: I1011 07:54:51.144418 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c1afbf6-5654-4f6f-b4c3-70fc041b803a" path="/var/lib/kubelet/pods/8c1afbf6-5654-4f6f-b4c3-70fc041b803a/volumes" Oct 11 07:54:51 crc kubenswrapper[5016]: I1011 07:54:51.203798 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc9d58d7-pp672"] Oct 11 07:54:51 crc kubenswrapper[5016]: W1011 07:54:51.213203 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc213bf50_5935_48bb_be54_2e1396bc6e06.slice/crio-e6200759d2e02f1dbdb77d389f685bb9c944258bb001ef891bfc962680c27bda WatchSource:0}: Error finding container e6200759d2e02f1dbdb77d389f685bb9c944258bb001ef891bfc962680c27bda: Status 404 returned error can't find the container with id e6200759d2e02f1dbdb77d389f685bb9c944258bb001ef891bfc962680c27bda Oct 11 07:54:51 crc kubenswrapper[5016]: I1011 07:54:51.333060 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Oct 11 07:54:51 crc kubenswrapper[5016]: I1011 07:54:51.767170 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-w5nkt" event={"ID":"38a1ec3b-0cfb-4fdf-bcba-a434cf65a726","Type":"ContainerStarted","Data":"f42d14ccc290499279b411003948c708fb52e85e0cd08918e9b56c212372dac0"} Oct 11 07:54:51 crc kubenswrapper[5016]: I1011 07:54:51.767444 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-w5nkt" event={"ID":"38a1ec3b-0cfb-4fdf-bcba-a434cf65a726","Type":"ContainerStarted","Data":"859bc864c3e2e18c40c3d0daec263361799b43b34119a416bbefa95bfba95db6"} Oct 11 07:54:51 crc kubenswrapper[5016]: I1011 07:54:51.767651 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-w5nkt" Oct 11 07:54:51 crc kubenswrapper[5016]: I1011 07:54:51.769488 5016 generic.go:334] "Generic (PLEG): container finished" podID="c213bf50-5935-48bb-be54-2e1396bc6e06" containerID="59c5fc7717bcb2fe3c87521ee9e80c198fad56496e34e9507f8d58d3ea5bd065" exitCode=0 Oct 11 07:54:51 crc kubenswrapper[5016]: I1011 07:54:51.769634 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc9d58d7-pp672" event={"ID":"c213bf50-5935-48bb-be54-2e1396bc6e06","Type":"ContainerDied","Data":"59c5fc7717bcb2fe3c87521ee9e80c198fad56496e34e9507f8d58d3ea5bd065"} Oct 11 07:54:51 crc kubenswrapper[5016]: I1011 07:54:51.769716 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc9d58d7-pp672" event={"ID":"c213bf50-5935-48bb-be54-2e1396bc6e06","Type":"ContainerStarted","Data":"e6200759d2e02f1dbdb77d389f685bb9c944258bb001ef891bfc962680c27bda"} Oct 11 07:54:51 crc kubenswrapper[5016]: I1011 07:54:51.769934 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-545fb8c44f-6nblh" podUID="e70ed217-aa33-4652-9414-93dac22e3468" containerName="dnsmasq-dns" containerID="cri-o://b2869711adc8d3753da1d35019d45900b32a14087a53592d7ff912b9c1491d3b" gracePeriod=10 Oct 11 07:54:51 crc kubenswrapper[5016]: I1011 07:54:51.798422 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-w5nkt" podStartSLOduration=7.649330004 podStartE2EDuration="16.798399565s" podCreationTimestamp="2025-10-11 07:54:35 +0000 UTC" firstStartedPulling="2025-10-11 07:54:38.614856063 
+0000 UTC m=+866.515312009" lastFinishedPulling="2025-10-11 07:54:47.763925634 +0000 UTC m=+875.664381570" observedRunningTime="2025-10-11 07:54:51.792015591 +0000 UTC m=+879.692471547" watchObservedRunningTime="2025-10-11 07:54:51.798399565 +0000 UTC m=+879.698855511" Oct 11 07:54:51 crc kubenswrapper[5016]: I1011 07:54:51.998229 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:51 crc kubenswrapper[5016]: I1011 07:54:51.998282 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.048391 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.135791 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-545fb8c44f-6nblh" Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.212339 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e70ed217-aa33-4652-9414-93dac22e3468-ovsdbserver-sb\") pod \"e70ed217-aa33-4652-9414-93dac22e3468\" (UID: \"e70ed217-aa33-4652-9414-93dac22e3468\") " Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.212533 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e70ed217-aa33-4652-9414-93dac22e3468-dns-svc\") pod \"e70ed217-aa33-4652-9414-93dac22e3468\" (UID: \"e70ed217-aa33-4652-9414-93dac22e3468\") " Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.212607 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgs94\" (UniqueName: \"kubernetes.io/projected/e70ed217-aa33-4652-9414-93dac22e3468-kube-api-access-pgs94\") pod \"e70ed217-aa33-4652-9414-93dac22e3468\" (UID: \"e70ed217-aa33-4652-9414-93dac22e3468\") " Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.212644 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e70ed217-aa33-4652-9414-93dac22e3468-config\") pod \"e70ed217-aa33-4652-9414-93dac22e3468\" (UID: \"e70ed217-aa33-4652-9414-93dac22e3468\") " Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.216565 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e70ed217-aa33-4652-9414-93dac22e3468-kube-api-access-pgs94" (OuterVolumeSpecName: "kube-api-access-pgs94") pod "e70ed217-aa33-4652-9414-93dac22e3468" (UID: "e70ed217-aa33-4652-9414-93dac22e3468"). InnerVolumeSpecName "kube-api-access-pgs94". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.246303 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e70ed217-aa33-4652-9414-93dac22e3468-config" (OuterVolumeSpecName: "config") pod "e70ed217-aa33-4652-9414-93dac22e3468" (UID: "e70ed217-aa33-4652-9414-93dac22e3468"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.248152 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e70ed217-aa33-4652-9414-93dac22e3468-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e70ed217-aa33-4652-9414-93dac22e3468" (UID: "e70ed217-aa33-4652-9414-93dac22e3468"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.250574 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e70ed217-aa33-4652-9414-93dac22e3468-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e70ed217-aa33-4652-9414-93dac22e3468" (UID: "e70ed217-aa33-4652-9414-93dac22e3468"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.314579 5016 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e70ed217-aa33-4652-9414-93dac22e3468-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.314610 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgs94\" (UniqueName: \"kubernetes.io/projected/e70ed217-aa33-4652-9414-93dac22e3468-kube-api-access-pgs94\") on node \"crc\" DevicePath \"\"" Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.314619 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e70ed217-aa33-4652-9414-93dac22e3468-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.314629 5016 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e70ed217-aa33-4652-9414-93dac22e3468-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.776215 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc9d58d7-pp672" event={"ID":"c213bf50-5935-48bb-be54-2e1396bc6e06","Type":"ContainerStarted","Data":"771deb6c67292b4198ad4ea96f2b0f16331d4e77e89e4cadcd7a8338abfe354f"} Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.776388 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-dc9d58d7-pp672" Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.777607 5016 generic.go:334] "Generic (PLEG): container finished" podID="9d275b2c-beec-4696-a60f-6a31245767bb" containerID="e5da9a4e575bc29924fc443175804e754378b074ceb2d63d4be809cf9edfb6ae" exitCode=0 Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.777672 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9d275b2c-beec-4696-a60f-6a31245767bb","Type":"ContainerDied","Data":"e5da9a4e575bc29924fc443175804e754378b074ceb2d63d4be809cf9edfb6ae"} Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.779310 5016 generic.go:334] "Generic (PLEG): container finished" podID="e70ed217-aa33-4652-9414-93dac22e3468" containerID="b2869711adc8d3753da1d35019d45900b32a14087a53592d7ff912b9c1491d3b" exitCode=0 Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.779353 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545fb8c44f-6nblh" 
event={"ID":"e70ed217-aa33-4652-9414-93dac22e3468","Type":"ContainerDied","Data":"b2869711adc8d3753da1d35019d45900b32a14087a53592d7ff912b9c1491d3b"} Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.779369 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545fb8c44f-6nblh" event={"ID":"e70ed217-aa33-4652-9414-93dac22e3468","Type":"ContainerDied","Data":"a5fdecd5af13ada8f3d9bcd977e75148395a27ebcd97988d19a1a394c094fedd"} Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.779385 5016 scope.go:117] "RemoveContainer" containerID="b2869711adc8d3753da1d35019d45900b32a14087a53592d7ff912b9c1491d3b" Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.779439 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-545fb8c44f-6nblh" Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.784298 5016 generic.go:334] "Generic (PLEG): container finished" podID="eee46d88-d3cf-428a-9808-f9bef1f292b7" containerID="cbbecf860b23866d19eeac9e1452af3cdab7d63ce7f0b5a04193dd342cfcd451" exitCode=0 Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.784465 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"eee46d88-d3cf-428a-9808-f9bef1f292b7","Type":"ContainerDied","Data":"cbbecf860b23866d19eeac9e1452af3cdab7d63ce7f0b5a04193dd342cfcd451"} Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.784600 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-w5nkt" Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.802629 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-dc9d58d7-pp672" podStartSLOduration=2.8026119769999998 podStartE2EDuration="2.802611977s" podCreationTimestamp="2025-10-11 07:54:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:54:52.796525558 +0000 UTC m=+880.696981504" watchObservedRunningTime="2025-10-11 07:54:52.802611977 +0000 UTC m=+880.703067923" Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.942635 5016 scope.go:117] "RemoveContainer" containerID="7448b5861f09900368f7cc5d232b2ffb27c9aff61f6f443100b926ae60c77621" Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.993250 5016 scope.go:117] "RemoveContainer" containerID="b2869711adc8d3753da1d35019d45900b32a14087a53592d7ff912b9c1491d3b" Oct 11 07:54:52 crc kubenswrapper[5016]: E1011 07:54:52.993662 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2869711adc8d3753da1d35019d45900b32a14087a53592d7ff912b9c1491d3b\": container with ID starting with b2869711adc8d3753da1d35019d45900b32a14087a53592d7ff912b9c1491d3b not found: ID does not exist" containerID="b2869711adc8d3753da1d35019d45900b32a14087a53592d7ff912b9c1491d3b" Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.993708 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2869711adc8d3753da1d35019d45900b32a14087a53592d7ff912b9c1491d3b"} err="failed to get container status \"b2869711adc8d3753da1d35019d45900b32a14087a53592d7ff912b9c1491d3b\": rpc error: code = NotFound desc = could not find container \"b2869711adc8d3753da1d35019d45900b32a14087a53592d7ff912b9c1491d3b\": container with ID starting with b2869711adc8d3753da1d35019d45900b32a14087a53592d7ff912b9c1491d3b not found: ID does not exist" Oct 
11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.993733 5016 scope.go:117] "RemoveContainer" containerID="7448b5861f09900368f7cc5d232b2ffb27c9aff61f6f443100b926ae60c77621" Oct 11 07:54:52 crc kubenswrapper[5016]: E1011 07:54:52.994108 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7448b5861f09900368f7cc5d232b2ffb27c9aff61f6f443100b926ae60c77621\": container with ID starting with 7448b5861f09900368f7cc5d232b2ffb27c9aff61f6f443100b926ae60c77621 not found: ID does not exist" containerID="7448b5861f09900368f7cc5d232b2ffb27c9aff61f6f443100b926ae60c77621" Oct 11 07:54:52 crc kubenswrapper[5016]: I1011 07:54:52.994134 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7448b5861f09900368f7cc5d232b2ffb27c9aff61f6f443100b926ae60c77621"} err="failed to get container status \"7448b5861f09900368f7cc5d232b2ffb27c9aff61f6f443100b926ae60c77621\": rpc error: code = NotFound desc = could not find container \"7448b5861f09900368f7cc5d232b2ffb27c9aff61f6f443100b926ae60c77621\": container with ID starting with 7448b5861f09900368f7cc5d232b2ffb27c9aff61f6f443100b926ae60c77621 not found: ID does not exist" Oct 11 07:54:53 crc kubenswrapper[5016]: I1011 07:54:53.066271 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-545fb8c44f-6nblh"] Oct 11 07:54:53 crc kubenswrapper[5016]: I1011 07:54:53.072216 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-545fb8c44f-6nblh"] Oct 11 07:54:53 crc kubenswrapper[5016]: I1011 07:54:53.153044 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e70ed217-aa33-4652-9414-93dac22e3468" path="/var/lib/kubelet/pods/e70ed217-aa33-4652-9414-93dac22e3468/volumes" Oct 11 07:54:53 crc kubenswrapper[5016]: I1011 07:54:53.793620 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"eee46d88-d3cf-428a-9808-f9bef1f292b7","Type":"ContainerStarted","Data":"8370ce0dc9b8c55665ba93bc601cccc1f78920ad8cae0eb6c3a80417c50464f8"} Oct 11 07:54:53 crc kubenswrapper[5016]: I1011 07:54:53.796271 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9d275b2c-beec-4696-a60f-6a31245767bb","Type":"ContainerStarted","Data":"c6cf3f81a768bde9c2cb300b3cf85047b4e517fcea520c62fbf908508ce566e6"} Oct 11 07:54:53 crc kubenswrapper[5016]: I1011 07:54:53.812494 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=16.812844234 podStartE2EDuration="26.812479479s" podCreationTimestamp="2025-10-11 07:54:27 +0000 UTC" firstStartedPulling="2025-10-11 07:54:38.405571329 +0000 UTC m=+866.306027275" lastFinishedPulling="2025-10-11 07:54:48.405206574 +0000 UTC m=+876.305662520" observedRunningTime="2025-10-11 07:54:53.809342173 +0000 UTC m=+881.709798119" watchObservedRunningTime="2025-10-11 07:54:53.812479479 +0000 UTC m=+881.712935425" Oct 11 07:54:53 crc kubenswrapper[5016]: I1011 07:54:53.837320 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=16.481421263 podStartE2EDuration="25.837302333s" podCreationTimestamp="2025-10-11 07:54:28 +0000 UTC" firstStartedPulling="2025-10-11 07:54:38.408250737 +0000 UTC m=+866.308706683" lastFinishedPulling="2025-10-11 07:54:47.764131807 +0000 UTC m=+875.664587753" observedRunningTime="2025-10-11 07:54:53.831569491 +0000 
UTC m=+881.732025437" watchObservedRunningTime="2025-10-11 07:54:53.837302333 +0000 UTC m=+881.737758279" Oct 11 07:54:54 crc kubenswrapper[5016]: I1011 07:54:54.333154 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Oct 11 07:54:54 crc kubenswrapper[5016]: I1011 07:54:54.382521 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Oct 11 07:54:54 crc kubenswrapper[5016]: I1011 07:54:54.853965 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Oct 11 07:54:56 crc kubenswrapper[5016]: I1011 07:54:55.508804 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.042320 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.263180 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Oct 11 07:54:57 crc kubenswrapper[5016]: E1011 07:54:57.263594 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e70ed217-aa33-4652-9414-93dac22e3468" containerName="dnsmasq-dns" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.263621 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="e70ed217-aa33-4652-9414-93dac22e3468" containerName="dnsmasq-dns" Oct 11 07:54:57 crc kubenswrapper[5016]: E1011 07:54:57.263641 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e70ed217-aa33-4652-9414-93dac22e3468" containerName="init" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.263669 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="e70ed217-aa33-4652-9414-93dac22e3468" containerName="init" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.263892 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="e70ed217-aa33-4652-9414-93dac22e3468" containerName="dnsmasq-dns" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.264871 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.267703 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.267975 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.268012 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-jpw7c" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.268226 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.276275 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.406913 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c9z4\" (UniqueName: \"kubernetes.io/projected/d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c-kube-api-access-9c9z4\") pod \"ovn-northd-0\" (UID: \"d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c\") " pod="openstack/ovn-northd-0" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.407260 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c\") " pod="openstack/ovn-northd-0" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.407328 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c-scripts\") pod \"ovn-northd-0\" (UID: \"d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c\") " pod="openstack/ovn-northd-0" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.407505 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c\") " pod="openstack/ovn-northd-0" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.407644 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c-config\") pod \"ovn-northd-0\" (UID: \"d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c\") " pod="openstack/ovn-northd-0" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.407694 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c\") " pod="openstack/ovn-northd-0" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.407752 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c\") " pod="openstack/ovn-northd-0" Oct 11 07:54:57 crc kubenswrapper[5016]: 
I1011 07:54:57.508873 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c-config\") pod \"ovn-northd-0\" (UID: \"d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c\") " pod="openstack/ovn-northd-0" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.508929 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c\") " pod="openstack/ovn-northd-0" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.508964 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c\") " pod="openstack/ovn-northd-0" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.508986 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c9z4\" (UniqueName: \"kubernetes.io/projected/d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c-kube-api-access-9c9z4\") pod \"ovn-northd-0\" (UID: \"d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c\") " pod="openstack/ovn-northd-0" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.509009 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c-scripts\") pod \"ovn-northd-0\" (UID: \"d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c\") " pod="openstack/ovn-northd-0" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.509024 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c\") " pod="openstack/ovn-northd-0" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.509066 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c\") " pod="openstack/ovn-northd-0" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.509457 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c\") " pod="openstack/ovn-northd-0" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.510238 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c-scripts\") pod \"ovn-northd-0\" (UID: \"d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c\") " pod="openstack/ovn-northd-0" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.510268 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c-config\") pod \"ovn-northd-0\" (UID: \"d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c\") " pod="openstack/ovn-northd-0" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.514711 5016 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c\") " pod="openstack/ovn-northd-0" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.514830 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c\") " pod="openstack/ovn-northd-0" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.519225 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c\") " pod="openstack/ovn-northd-0" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.530172 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c9z4\" (UniqueName: \"kubernetes.io/projected/d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c-kube-api-access-9c9z4\") pod \"ovn-northd-0\" (UID: \"d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c\") " pod="openstack/ovn-northd-0" Oct 11 07:54:57 crc kubenswrapper[5016]: I1011 07:54:57.589478 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Oct 11 07:54:58 crc kubenswrapper[5016]: I1011 07:54:58.063421 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 11 07:54:58 crc kubenswrapper[5016]: W1011 07:54:58.070574 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7df8570_ef3c_4ad2_bcee_a6d87fb9cd7c.slice/crio-5adb0ead9474ae8951e00aff91d5eb2a63bdaeede15e35ca69370961de547f5a WatchSource:0}: Error finding container 5adb0ead9474ae8951e00aff91d5eb2a63bdaeede15e35ca69370961de547f5a: Status 404 returned error can't find the container with id 5adb0ead9474ae8951e00aff91d5eb2a63bdaeede15e35ca69370961de547f5a Oct 11 07:54:58 crc kubenswrapper[5016]: I1011 07:54:58.076671 5016 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 07:54:58 crc kubenswrapper[5016]: I1011 07:54:58.846430 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c","Type":"ContainerStarted","Data":"5adb0ead9474ae8951e00aff91d5eb2a63bdaeede15e35ca69370961de547f5a"} Oct 11 07:54:58 crc kubenswrapper[5016]: I1011 07:54:58.893431 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Oct 11 07:54:58 crc kubenswrapper[5016]: I1011 07:54:58.893489 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Oct 11 07:55:00 crc kubenswrapper[5016]: I1011 07:55:00.230865 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Oct 11 07:55:00 crc kubenswrapper[5016]: I1011 07:55:00.230938 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Oct 11 07:55:00 crc kubenswrapper[5016]: I1011 07:55:00.724506 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-dc9d58d7-pp672" Oct 11 07:55:00 crc kubenswrapper[5016]: I1011 07:55:00.778587 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77597f887-m66hn"] Oct 11 07:55:00 crc kubenswrapper[5016]: I1011 07:55:00.778864 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77597f887-m66hn" podUID="d7f492c7-31ee-406b-ad5e-9f6b8db63af0" containerName="dnsmasq-dns" containerID="cri-o://645c74cd4d074f2068ec7e178fb1cf4d4773467650584d85382bef4d15b5a11b" gracePeriod=10 Oct 11 07:55:01 crc kubenswrapper[5016]: I1011 07:55:01.464587 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77597f887-m66hn" podUID="d7f492c7-31ee-406b-ad5e-9f6b8db63af0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.99:5353: connect: connection refused" Oct 11 07:55:01 crc kubenswrapper[5016]: I1011 07:55:01.883319 5016 generic.go:334] "Generic (PLEG): container finished" podID="d7f492c7-31ee-406b-ad5e-9f6b8db63af0" containerID="645c74cd4d074f2068ec7e178fb1cf4d4773467650584d85382bef4d15b5a11b" exitCode=0 Oct 11 07:55:01 crc kubenswrapper[5016]: I1011 07:55:01.883388 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77597f887-m66hn" event={"ID":"d7f492c7-31ee-406b-ad5e-9f6b8db63af0","Type":"ContainerDied","Data":"645c74cd4d074f2068ec7e178fb1cf4d4773467650584d85382bef4d15b5a11b"} Oct 11 07:55:02 crc kubenswrapper[5016]: I1011 07:55:02.455929 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 11 07:55:02 crc kubenswrapper[5016]: I1011 07:55:02.890879 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77597f887-m66hn" event={"ID":"d7f492c7-31ee-406b-ad5e-9f6b8db63af0","Type":"ContainerDied","Data":"f43fa220d8cbdabc4f359ebc1e3058c937ea651957cb5a2fca4c44bc9338c725"} Oct 11 07:55:02 crc kubenswrapper[5016]: I1011 07:55:02.891191 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f43fa220d8cbdabc4f359ebc1e3058c937ea651957cb5a2fca4c44bc9338c725" Oct 11 07:55:02 crc kubenswrapper[5016]: I1011 07:55:02.925111 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77597f887-m66hn" Oct 11 07:55:02 crc kubenswrapper[5016]: I1011 07:55:02.996336 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7f492c7-31ee-406b-ad5e-9f6b8db63af0-config\") pod \"d7f492c7-31ee-406b-ad5e-9f6b8db63af0\" (UID: \"d7f492c7-31ee-406b-ad5e-9f6b8db63af0\") " Oct 11 07:55:02 crc kubenswrapper[5016]: I1011 07:55:02.996396 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpl7z\" (UniqueName: \"kubernetes.io/projected/d7f492c7-31ee-406b-ad5e-9f6b8db63af0-kube-api-access-dpl7z\") pod \"d7f492c7-31ee-406b-ad5e-9f6b8db63af0\" (UID: \"d7f492c7-31ee-406b-ad5e-9f6b8db63af0\") " Oct 11 07:55:02 crc kubenswrapper[5016]: I1011 07:55:02.996570 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7f492c7-31ee-406b-ad5e-9f6b8db63af0-dns-svc\") pod \"d7f492c7-31ee-406b-ad5e-9f6b8db63af0\" (UID: \"d7f492c7-31ee-406b-ad5e-9f6b8db63af0\") " Oct 11 07:55:03 crc kubenswrapper[5016]: I1011 07:55:03.003778 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7f492c7-31ee-406b-ad5e-9f6b8db63af0-kube-api-access-dpl7z" (OuterVolumeSpecName: "kube-api-access-dpl7z") pod "d7f492c7-31ee-406b-ad5e-9f6b8db63af0" (UID: "d7f492c7-31ee-406b-ad5e-9f6b8db63af0"). InnerVolumeSpecName "kube-api-access-dpl7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:55:03 crc kubenswrapper[5016]: I1011 07:55:03.031295 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7f492c7-31ee-406b-ad5e-9f6b8db63af0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d7f492c7-31ee-406b-ad5e-9f6b8db63af0" (UID: "d7f492c7-31ee-406b-ad5e-9f6b8db63af0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:55:03 crc kubenswrapper[5016]: I1011 07:55:03.040505 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7f492c7-31ee-406b-ad5e-9f6b8db63af0-config" (OuterVolumeSpecName: "config") pod "d7f492c7-31ee-406b-ad5e-9f6b8db63af0" (UID: "d7f492c7-31ee-406b-ad5e-9f6b8db63af0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:55:03 crc kubenswrapper[5016]: I1011 07:55:03.098402 5016 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7f492c7-31ee-406b-ad5e-9f6b8db63af0-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:03 crc kubenswrapper[5016]: I1011 07:55:03.098682 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7f492c7-31ee-406b-ad5e-9f6b8db63af0-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:03 crc kubenswrapper[5016]: I1011 07:55:03.098696 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpl7z\" (UniqueName: \"kubernetes.io/projected/d7f492c7-31ee-406b-ad5e-9f6b8db63af0-kube-api-access-dpl7z\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:03 crc kubenswrapper[5016]: I1011 07:55:03.897831 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77597f887-m66hn" Oct 11 07:55:03 crc kubenswrapper[5016]: I1011 07:55:03.920955 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77597f887-m66hn"] Oct 11 07:55:03 crc kubenswrapper[5016]: I1011 07:55:03.928968 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77597f887-m66hn"] Oct 11 07:55:05 crc kubenswrapper[5016]: I1011 07:55:05.029079 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Oct 11 07:55:05 crc kubenswrapper[5016]: I1011 07:55:05.078177 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Oct 11 07:55:05 crc kubenswrapper[5016]: I1011 07:55:05.143655 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7f492c7-31ee-406b-ad5e-9f6b8db63af0" path="/var/lib/kubelet/pods/d7f492c7-31ee-406b-ad5e-9f6b8db63af0/volumes" Oct 11 07:55:05 crc kubenswrapper[5016]: I1011 07:55:05.747247 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-sbj8x"] Oct 11 07:55:05 crc kubenswrapper[5016]: E1011 07:55:05.747956 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f492c7-31ee-406b-ad5e-9f6b8db63af0" containerName="init" Oct 11 07:55:05 crc kubenswrapper[5016]: I1011 07:55:05.747977 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f492c7-31ee-406b-ad5e-9f6b8db63af0" containerName="init" Oct 11 07:55:05 crc kubenswrapper[5016]: E1011 07:55:05.747998 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f492c7-31ee-406b-ad5e-9f6b8db63af0" containerName="dnsmasq-dns" Oct 11 07:55:05 crc kubenswrapper[5016]: I1011 07:55:05.748006 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f492c7-31ee-406b-ad5e-9f6b8db63af0" containerName="dnsmasq-dns" Oct 11 07:55:05 crc kubenswrapper[5016]: I1011 07:55:05.748212 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7f492c7-31ee-406b-ad5e-9f6b8db63af0" containerName="dnsmasq-dns" Oct 11 07:55:05 crc kubenswrapper[5016]: I1011 07:55:05.748884 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-sbj8x" Oct 11 07:55:05 crc kubenswrapper[5016]: I1011 07:55:05.755132 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-sbj8x"] Oct 11 07:55:05 crc kubenswrapper[5016]: I1011 07:55:05.849513 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86c6q\" (UniqueName: \"kubernetes.io/projected/dfcd1056-e001-48cb-9588-9c664ae140a2-kube-api-access-86c6q\") pod \"glance-db-create-sbj8x\" (UID: \"dfcd1056-e001-48cb-9588-9c664ae140a2\") " pod="openstack/glance-db-create-sbj8x" Oct 11 07:55:05 crc kubenswrapper[5016]: I1011 07:55:05.916383 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c","Type":"ContainerStarted","Data":"db6eed3fc6d10a54784aa8e174b01601145622f4f893623dc8e9a364edd4696a"} Oct 11 07:55:05 crc kubenswrapper[5016]: I1011 07:55:05.916441 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c","Type":"ContainerStarted","Data":"f3524c43e622a074d9d28b24e6fac7d7219772488235d0e8f7c96c649884f1f5"} Oct 11 07:55:05 crc kubenswrapper[5016]: I1011 07:55:05.951267 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86c6q\" (UniqueName: \"kubernetes.io/projected/dfcd1056-e001-48cb-9588-9c664ae140a2-kube-api-access-86c6q\") pod \"glance-db-create-sbj8x\" (UID: \"dfcd1056-e001-48cb-9588-9c664ae140a2\") " pod="openstack/glance-db-create-sbj8x" Oct 11 07:55:05 crc kubenswrapper[5016]: I1011 07:55:05.967477 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.655360808 podStartE2EDuration="8.967459469s" podCreationTimestamp="2025-10-11 07:54:57 +0000 UTC" firstStartedPulling="2025-10-11 07:54:58.076345214 +0000 UTC m=+885.976801160" lastFinishedPulling="2025-10-11 07:55:05.388443875 +0000 UTC m=+893.288899821" observedRunningTime="2025-10-11 07:55:05.961393453 +0000 UTC m=+893.861849399" watchObservedRunningTime="2025-10-11 07:55:05.967459469 +0000 UTC m=+893.867915415" Oct 11 07:55:05 crc kubenswrapper[5016]: I1011 07:55:05.975179 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86c6q\" (UniqueName: \"kubernetes.io/projected/dfcd1056-e001-48cb-9588-9c664ae140a2-kube-api-access-86c6q\") pod \"glance-db-create-sbj8x\" (UID: \"dfcd1056-e001-48cb-9588-9c664ae140a2\") " pod="openstack/glance-db-create-sbj8x" Oct 11 07:55:06 crc kubenswrapper[5016]: I1011 07:55:06.079768 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-sbj8x" Oct 11 07:55:06 crc kubenswrapper[5016]: I1011 07:55:06.316981 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Oct 11 07:55:06 crc kubenswrapper[5016]: I1011 07:55:06.362726 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Oct 11 07:55:06 crc kubenswrapper[5016]: I1011 07:55:06.472173 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-sbj8x"] Oct 11 07:55:06 crc kubenswrapper[5016]: W1011 07:55:06.477173 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfcd1056_e001_48cb_9588_9c664ae140a2.slice/crio-78595aa3878588a86ed170132ea1bd811520d225c04646e0bc49e70d0fc644eb WatchSource:0}: Error finding container 78595aa3878588a86ed170132ea1bd811520d225c04646e0bc49e70d0fc644eb: Status 404 returned error can't find the container with id 78595aa3878588a86ed170132ea1bd811520d225c04646e0bc49e70d0fc644eb Oct 11 07:55:06 crc kubenswrapper[5016]: I1011 07:55:06.927260 5016 generic.go:334] "Generic (PLEG): container finished" podID="dfcd1056-e001-48cb-9588-9c664ae140a2" containerID="db8659dda4eed8d0787579d6eef7a57a1ab40b4533ec07186356b9cb99ab7b55" exitCode=0 Oct 11 07:55:06 crc kubenswrapper[5016]: I1011 07:55:06.927355 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-sbj8x" event={"ID":"dfcd1056-e001-48cb-9588-9c664ae140a2","Type":"ContainerDied","Data":"db8659dda4eed8d0787579d6eef7a57a1ab40b4533ec07186356b9cb99ab7b55"} Oct 11 07:55:06 crc kubenswrapper[5016]: I1011 07:55:06.927803 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-sbj8x" event={"ID":"dfcd1056-e001-48cb-9588-9c664ae140a2","Type":"ContainerStarted","Data":"78595aa3878588a86ed170132ea1bd811520d225c04646e0bc49e70d0fc644eb"} Oct 11 07:55:06 crc kubenswrapper[5016]: I1011 07:55:06.928416 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Oct 11 07:55:07 crc kubenswrapper[5016]: I1011 07:55:07.122139 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 07:55:07 crc kubenswrapper[5016]: I1011 07:55:07.122214 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 07:55:08 crc kubenswrapper[5016]: I1011 07:55:08.274008 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-sbj8x" Oct 11 07:55:08 crc kubenswrapper[5016]: I1011 07:55:08.289516 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86c6q\" (UniqueName: \"kubernetes.io/projected/dfcd1056-e001-48cb-9588-9c664ae140a2-kube-api-access-86c6q\") pod \"dfcd1056-e001-48cb-9588-9c664ae140a2\" (UID: \"dfcd1056-e001-48cb-9588-9c664ae140a2\") " Oct 11 07:55:08 crc kubenswrapper[5016]: I1011 07:55:08.294985 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfcd1056-e001-48cb-9588-9c664ae140a2-kube-api-access-86c6q" (OuterVolumeSpecName: "kube-api-access-86c6q") pod "dfcd1056-e001-48cb-9588-9c664ae140a2" (UID: "dfcd1056-e001-48cb-9588-9c664ae140a2"). InnerVolumeSpecName "kube-api-access-86c6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:55:08 crc kubenswrapper[5016]: I1011 07:55:08.391569 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86c6q\" (UniqueName: \"kubernetes.io/projected/dfcd1056-e001-48cb-9588-9c664ae140a2-kube-api-access-86c6q\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:08 crc kubenswrapper[5016]: I1011 07:55:08.945521 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-sbj8x" event={"ID":"dfcd1056-e001-48cb-9588-9c664ae140a2","Type":"ContainerDied","Data":"78595aa3878588a86ed170132ea1bd811520d225c04646e0bc49e70d0fc644eb"} Oct 11 07:55:08 crc kubenswrapper[5016]: I1011 07:55:08.945557 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78595aa3878588a86ed170132ea1bd811520d225c04646e0bc49e70d0fc644eb" Oct 11 07:55:08 crc kubenswrapper[5016]: I1011 07:55:08.945604 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-sbj8x" Oct 11 07:55:10 crc kubenswrapper[5016]: I1011 07:55:10.099849 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-lqcqh"] Oct 11 07:55:10 crc kubenswrapper[5016]: E1011 07:55:10.100632 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfcd1056-e001-48cb-9588-9c664ae140a2" containerName="mariadb-database-create" Oct 11 07:55:10 crc kubenswrapper[5016]: I1011 07:55:10.100656 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfcd1056-e001-48cb-9588-9c664ae140a2" containerName="mariadb-database-create" Oct 11 07:55:10 crc kubenswrapper[5016]: I1011 07:55:10.100872 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfcd1056-e001-48cb-9588-9c664ae140a2" containerName="mariadb-database-create" Oct 11 07:55:10 crc kubenswrapper[5016]: I1011 07:55:10.101617 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-lqcqh" Oct 11 07:55:10 crc kubenswrapper[5016]: I1011 07:55:10.105142 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-lqcqh"] Oct 11 07:55:10 crc kubenswrapper[5016]: I1011 07:55:10.119139 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzlpb\" (UniqueName: \"kubernetes.io/projected/73eb1774-744a-4bd7-9f6f-dcf7e828bc4e-kube-api-access-bzlpb\") pod \"keystone-db-create-lqcqh\" (UID: \"73eb1774-744a-4bd7-9f6f-dcf7e828bc4e\") " pod="openstack/keystone-db-create-lqcqh" Oct 11 07:55:10 crc kubenswrapper[5016]: I1011 07:55:10.220609 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzlpb\" (UniqueName: \"kubernetes.io/projected/73eb1774-744a-4bd7-9f6f-dcf7e828bc4e-kube-api-access-bzlpb\") pod \"keystone-db-create-lqcqh\" (UID: \"73eb1774-744a-4bd7-9f6f-dcf7e828bc4e\") " pod="openstack/keystone-db-create-lqcqh" Oct 11 07:55:10 crc kubenswrapper[5016]: I1011 07:55:10.247086 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzlpb\" (UniqueName: \"kubernetes.io/projected/73eb1774-744a-4bd7-9f6f-dcf7e828bc4e-kube-api-access-bzlpb\") pod \"keystone-db-create-lqcqh\" (UID: \"73eb1774-744a-4bd7-9f6f-dcf7e828bc4e\") " pod="openstack/keystone-db-create-lqcqh" Oct 11 07:55:10 crc kubenswrapper[5016]: I1011 07:55:10.383502 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-6mk28"] Oct 11 07:55:10 crc kubenswrapper[5016]: I1011 07:55:10.384517 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6mk28" Oct 11 07:55:10 crc kubenswrapper[5016]: I1011 07:55:10.394193 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-6mk28"] Oct 11 07:55:10 crc kubenswrapper[5016]: I1011 07:55:10.417112 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-lqcqh" Oct 11 07:55:10 crc kubenswrapper[5016]: I1011 07:55:10.423031 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djx4h\" (UniqueName: \"kubernetes.io/projected/754058a4-0d11-41e2-8692-7365db46a03b-kube-api-access-djx4h\") pod \"placement-db-create-6mk28\" (UID: \"754058a4-0d11-41e2-8692-7365db46a03b\") " pod="openstack/placement-db-create-6mk28" Oct 11 07:55:10 crc kubenswrapper[5016]: I1011 07:55:10.527559 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djx4h\" (UniqueName: \"kubernetes.io/projected/754058a4-0d11-41e2-8692-7365db46a03b-kube-api-access-djx4h\") pod \"placement-db-create-6mk28\" (UID: \"754058a4-0d11-41e2-8692-7365db46a03b\") " pod="openstack/placement-db-create-6mk28" Oct 11 07:55:10 crc kubenswrapper[5016]: I1011 07:55:10.549851 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djx4h\" (UniqueName: \"kubernetes.io/projected/754058a4-0d11-41e2-8692-7365db46a03b-kube-api-access-djx4h\") pod \"placement-db-create-6mk28\" (UID: \"754058a4-0d11-41e2-8692-7365db46a03b\") " pod="openstack/placement-db-create-6mk28" Oct 11 07:55:10 crc kubenswrapper[5016]: I1011 07:55:10.711293 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-6mk28" Oct 11 07:55:10 crc kubenswrapper[5016]: I1011 07:55:10.923238 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-lqcqh"] Oct 11 07:55:10 crc kubenswrapper[5016]: W1011 07:55:10.928045 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73eb1774_744a_4bd7_9f6f_dcf7e828bc4e.slice/crio-a5d2ce3a87e47cdbdc6b37981ddde738c5c40a4384f073869423a94eceb908b9 WatchSource:0}: Error finding container a5d2ce3a87e47cdbdc6b37981ddde738c5c40a4384f073869423a94eceb908b9: Status 404 returned error can't find the container with id a5d2ce3a87e47cdbdc6b37981ddde738c5c40a4384f073869423a94eceb908b9 Oct 11 07:55:10 crc kubenswrapper[5016]: I1011 07:55:10.965876 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-lqcqh" event={"ID":"73eb1774-744a-4bd7-9f6f-dcf7e828bc4e","Type":"ContainerStarted","Data":"a5d2ce3a87e47cdbdc6b37981ddde738c5c40a4384f073869423a94eceb908b9"} Oct 11 07:55:11 crc kubenswrapper[5016]: W1011 07:55:11.153042 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod754058a4_0d11_41e2_8692_7365db46a03b.slice/crio-dda4588b1667464e8f3a2f3c135438d7ea4d458e0ae56ecd076b1be6830fe824 WatchSource:0}: Error finding container dda4588b1667464e8f3a2f3c135438d7ea4d458e0ae56ecd076b1be6830fe824: Status 404 returned error can't find the container with id dda4588b1667464e8f3a2f3c135438d7ea4d458e0ae56ecd076b1be6830fe824 Oct 11 07:55:11 crc kubenswrapper[5016]: I1011 07:55:11.165160 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-6mk28"] Oct 11 07:55:11 crc kubenswrapper[5016]: I1011 07:55:11.977785 5016 generic.go:334] "Generic (PLEG): container finished" podID="73eb1774-744a-4bd7-9f6f-dcf7e828bc4e" containerID="fed0d6bafad5c985f7d7212339de3dffe347421dc73b13ce21c08de309376438" exitCode=0 Oct 11 07:55:11 crc kubenswrapper[5016]: I1011 07:55:11.977864 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-lqcqh" event={"ID":"73eb1774-744a-4bd7-9f6f-dcf7e828bc4e","Type":"ContainerDied","Data":"fed0d6bafad5c985f7d7212339de3dffe347421dc73b13ce21c08de309376438"} Oct 11 07:55:11 crc kubenswrapper[5016]: I1011 07:55:11.980564 5016 generic.go:334] "Generic (PLEG): container finished" podID="754058a4-0d11-41e2-8692-7365db46a03b" containerID="6306ec307114e7fe5459c3b243199e1d9ad702e916aa6df858fe468f0a12a30a" exitCode=0 Oct 11 07:55:11 crc kubenswrapper[5016]: I1011 07:55:11.980627 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6mk28" event={"ID":"754058a4-0d11-41e2-8692-7365db46a03b","Type":"ContainerDied","Data":"6306ec307114e7fe5459c3b243199e1d9ad702e916aa6df858fe468f0a12a30a"} Oct 11 07:55:11 crc kubenswrapper[5016]: I1011 07:55:11.980698 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6mk28" event={"ID":"754058a4-0d11-41e2-8692-7365db46a03b","Type":"ContainerStarted","Data":"dda4588b1667464e8f3a2f3c135438d7ea4d458e0ae56ecd076b1be6830fe824"} Oct 11 07:55:13 crc kubenswrapper[5016]: I1011 07:55:13.358485 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-lqcqh" Oct 11 07:55:13 crc kubenswrapper[5016]: I1011 07:55:13.377207 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzlpb\" (UniqueName: \"kubernetes.io/projected/73eb1774-744a-4bd7-9f6f-dcf7e828bc4e-kube-api-access-bzlpb\") pod \"73eb1774-744a-4bd7-9f6f-dcf7e828bc4e\" (UID: \"73eb1774-744a-4bd7-9f6f-dcf7e828bc4e\") " Oct 11 07:55:13 crc kubenswrapper[5016]: I1011 07:55:13.383673 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73eb1774-744a-4bd7-9f6f-dcf7e828bc4e-kube-api-access-bzlpb" (OuterVolumeSpecName: "kube-api-access-bzlpb") pod "73eb1774-744a-4bd7-9f6f-dcf7e828bc4e" (UID: "73eb1774-744a-4bd7-9f6f-dcf7e828bc4e"). InnerVolumeSpecName "kube-api-access-bzlpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:55:13 crc kubenswrapper[5016]: I1011 07:55:13.464951 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6mk28" Oct 11 07:55:13 crc kubenswrapper[5016]: I1011 07:55:13.482461 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzlpb\" (UniqueName: \"kubernetes.io/projected/73eb1774-744a-4bd7-9f6f-dcf7e828bc4e-kube-api-access-bzlpb\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:13 crc kubenswrapper[5016]: I1011 07:55:13.583167 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djx4h\" (UniqueName: \"kubernetes.io/projected/754058a4-0d11-41e2-8692-7365db46a03b-kube-api-access-djx4h\") pod \"754058a4-0d11-41e2-8692-7365db46a03b\" (UID: \"754058a4-0d11-41e2-8692-7365db46a03b\") " Oct 11 07:55:13 crc kubenswrapper[5016]: I1011 07:55:13.587072 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/754058a4-0d11-41e2-8692-7365db46a03b-kube-api-access-djx4h" (OuterVolumeSpecName: "kube-api-access-djx4h") pod "754058a4-0d11-41e2-8692-7365db46a03b" (UID: "754058a4-0d11-41e2-8692-7365db46a03b"). InnerVolumeSpecName "kube-api-access-djx4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:55:13 crc kubenswrapper[5016]: I1011 07:55:13.685189 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djx4h\" (UniqueName: \"kubernetes.io/projected/754058a4-0d11-41e2-8692-7365db46a03b-kube-api-access-djx4h\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:14 crc kubenswrapper[5016]: I1011 07:55:14.000536 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-lqcqh" event={"ID":"73eb1774-744a-4bd7-9f6f-dcf7e828bc4e","Type":"ContainerDied","Data":"a5d2ce3a87e47cdbdc6b37981ddde738c5c40a4384f073869423a94eceb908b9"} Oct 11 07:55:14 crc kubenswrapper[5016]: I1011 07:55:14.001049 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5d2ce3a87e47cdbdc6b37981ddde738c5c40a4384f073869423a94eceb908b9" Oct 11 07:55:14 crc kubenswrapper[5016]: I1011 07:55:14.000591 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-lqcqh" Oct 11 07:55:14 crc kubenswrapper[5016]: I1011 07:55:14.002669 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6mk28" event={"ID":"754058a4-0d11-41e2-8692-7365db46a03b","Type":"ContainerDied","Data":"dda4588b1667464e8f3a2f3c135438d7ea4d458e0ae56ecd076b1be6830fe824"} Oct 11 07:55:14 crc kubenswrapper[5016]: I1011 07:55:14.002708 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dda4588b1667464e8f3a2f3c135438d7ea4d458e0ae56ecd076b1be6830fe824" Oct 11 07:55:14 crc kubenswrapper[5016]: I1011 07:55:14.002767 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6mk28" Oct 11 07:55:15 crc kubenswrapper[5016]: I1011 07:55:15.828298 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-f912-account-create-kmrpl"] Oct 11 07:55:15 crc kubenswrapper[5016]: E1011 07:55:15.829380 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73eb1774-744a-4bd7-9f6f-dcf7e828bc4e" containerName="mariadb-database-create" Oct 11 07:55:15 crc kubenswrapper[5016]: I1011 07:55:15.829397 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="73eb1774-744a-4bd7-9f6f-dcf7e828bc4e" containerName="mariadb-database-create" Oct 11 07:55:15 crc kubenswrapper[5016]: E1011 07:55:15.829432 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="754058a4-0d11-41e2-8692-7365db46a03b" containerName="mariadb-database-create" Oct 11 07:55:15 crc kubenswrapper[5016]: I1011 07:55:15.829439 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="754058a4-0d11-41e2-8692-7365db46a03b" containerName="mariadb-database-create" Oct 11 07:55:15 crc kubenswrapper[5016]: I1011 07:55:15.829672 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="73eb1774-744a-4bd7-9f6f-dcf7e828bc4e" containerName="mariadb-database-create" Oct 11 07:55:15 crc kubenswrapper[5016]: I1011 07:55:15.829691 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="754058a4-0d11-41e2-8692-7365db46a03b" containerName="mariadb-database-create" Oct 11 07:55:15 crc kubenswrapper[5016]: I1011 07:55:15.830547 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-f912-account-create-kmrpl" Oct 11 07:55:15 crc kubenswrapper[5016]: I1011 07:55:15.832987 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Oct 11 07:55:15 crc kubenswrapper[5016]: I1011 07:55:15.837735 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f912-account-create-kmrpl"] Oct 11 07:55:15 crc kubenswrapper[5016]: I1011 07:55:15.927284 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh4b4\" (UniqueName: \"kubernetes.io/projected/5a4c8ff5-2303-4034-a5ed-79f3a55c0e09-kube-api-access-xh4b4\") pod \"glance-f912-account-create-kmrpl\" (UID: \"5a4c8ff5-2303-4034-a5ed-79f3a55c0e09\") " pod="openstack/glance-f912-account-create-kmrpl" Oct 11 07:55:16 crc kubenswrapper[5016]: I1011 07:55:16.028841 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh4b4\" (UniqueName: \"kubernetes.io/projected/5a4c8ff5-2303-4034-a5ed-79f3a55c0e09-kube-api-access-xh4b4\") pod \"glance-f912-account-create-kmrpl\" (UID: \"5a4c8ff5-2303-4034-a5ed-79f3a55c0e09\") " pod="openstack/glance-f912-account-create-kmrpl" Oct 11 07:55:16 crc kubenswrapper[5016]: I1011 07:55:16.051325 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh4b4\" (UniqueName: \"kubernetes.io/projected/5a4c8ff5-2303-4034-a5ed-79f3a55c0e09-kube-api-access-xh4b4\") pod \"glance-f912-account-create-kmrpl\" (UID: \"5a4c8ff5-2303-4034-a5ed-79f3a55c0e09\") " pod="openstack/glance-f912-account-create-kmrpl" Oct 11 07:55:16 crc kubenswrapper[5016]: I1011 07:55:16.209358 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f912-account-create-kmrpl" Oct 11 07:55:16 crc kubenswrapper[5016]: I1011 07:55:16.654280 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f912-account-create-kmrpl"] Oct 11 07:55:16 crc kubenswrapper[5016]: W1011 07:55:16.664811 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a4c8ff5_2303_4034_a5ed_79f3a55c0e09.slice/crio-bdf91b8b34a962b40c9632ad3b7dabe6a56febd7c739a7facd995abd5af8cb76 WatchSource:0}: Error finding container bdf91b8b34a962b40c9632ad3b7dabe6a56febd7c739a7facd995abd5af8cb76: Status 404 returned error can't find the container with id bdf91b8b34a962b40c9632ad3b7dabe6a56febd7c739a7facd995abd5af8cb76 Oct 11 07:55:17 crc kubenswrapper[5016]: I1011 07:55:17.030169 5016 generic.go:334] "Generic (PLEG): container finished" podID="5a4c8ff5-2303-4034-a5ed-79f3a55c0e09" containerID="7182ffebc9f6564ed5079321d2e3b3816b9e139ec3a4b838cf34c49fbd56b9e4" exitCode=0 Oct 11 07:55:17 crc kubenswrapper[5016]: I1011 07:55:17.030535 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f912-account-create-kmrpl" event={"ID":"5a4c8ff5-2303-4034-a5ed-79f3a55c0e09","Type":"ContainerDied","Data":"7182ffebc9f6564ed5079321d2e3b3816b9e139ec3a4b838cf34c49fbd56b9e4"} Oct 11 07:55:17 crc kubenswrapper[5016]: I1011 07:55:17.030893 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f912-account-create-kmrpl" event={"ID":"5a4c8ff5-2303-4034-a5ed-79f3a55c0e09","Type":"ContainerStarted","Data":"bdf91b8b34a962b40c9632ad3b7dabe6a56febd7c739a7facd995abd5af8cb76"} Oct 11 07:55:17 crc kubenswrapper[5016]: I1011 07:55:17.680006 5016 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/ovn-northd-0" Oct 11 07:55:18 crc kubenswrapper[5016]: I1011 07:55:18.351845 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f912-account-create-kmrpl" Oct 11 07:55:18 crc kubenswrapper[5016]: I1011 07:55:18.469950 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh4b4\" (UniqueName: \"kubernetes.io/projected/5a4c8ff5-2303-4034-a5ed-79f3a55c0e09-kube-api-access-xh4b4\") pod \"5a4c8ff5-2303-4034-a5ed-79f3a55c0e09\" (UID: \"5a4c8ff5-2303-4034-a5ed-79f3a55c0e09\") " Oct 11 07:55:18 crc kubenswrapper[5016]: I1011 07:55:18.475595 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a4c8ff5-2303-4034-a5ed-79f3a55c0e09-kube-api-access-xh4b4" (OuterVolumeSpecName: "kube-api-access-xh4b4") pod "5a4c8ff5-2303-4034-a5ed-79f3a55c0e09" (UID: "5a4c8ff5-2303-4034-a5ed-79f3a55c0e09"). InnerVolumeSpecName "kube-api-access-xh4b4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:55:18 crc kubenswrapper[5016]: I1011 07:55:18.572719 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh4b4\" (UniqueName: \"kubernetes.io/projected/5a4c8ff5-2303-4034-a5ed-79f3a55c0e09-kube-api-access-xh4b4\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:19 crc kubenswrapper[5016]: I1011 07:55:19.047359 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f912-account-create-kmrpl" event={"ID":"5a4c8ff5-2303-4034-a5ed-79f3a55c0e09","Type":"ContainerDied","Data":"bdf91b8b34a962b40c9632ad3b7dabe6a56febd7c739a7facd995abd5af8cb76"} Oct 11 07:55:19 crc kubenswrapper[5016]: I1011 07:55:19.047401 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdf91b8b34a962b40c9632ad3b7dabe6a56febd7c739a7facd995abd5af8cb76" Oct 11 07:55:19 crc kubenswrapper[5016]: I1011 07:55:19.047464 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f912-account-create-kmrpl" Oct 11 07:55:20 crc kubenswrapper[5016]: I1011 07:55:20.255284 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e8e4-account-create-wl7gf"] Oct 11 07:55:20 crc kubenswrapper[5016]: E1011 07:55:20.256150 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a4c8ff5-2303-4034-a5ed-79f3a55c0e09" containerName="mariadb-account-create" Oct 11 07:55:20 crc kubenswrapper[5016]: I1011 07:55:20.256166 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a4c8ff5-2303-4034-a5ed-79f3a55c0e09" containerName="mariadb-account-create" Oct 11 07:55:20 crc kubenswrapper[5016]: I1011 07:55:20.256379 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a4c8ff5-2303-4034-a5ed-79f3a55c0e09" containerName="mariadb-account-create" Oct 11 07:55:20 crc kubenswrapper[5016]: I1011 07:55:20.257065 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e8e4-account-create-wl7gf" Oct 11 07:55:20 crc kubenswrapper[5016]: I1011 07:55:20.265912 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Oct 11 07:55:20 crc kubenswrapper[5016]: I1011 07:55:20.268416 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e8e4-account-create-wl7gf"] Oct 11 07:55:20 crc kubenswrapper[5016]: I1011 07:55:20.304899 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z2qq\" (UniqueName: \"kubernetes.io/projected/d18eb638-3529-417c-a8e1-e95af0025640-kube-api-access-5z2qq\") pod \"keystone-e8e4-account-create-wl7gf\" (UID: \"d18eb638-3529-417c-a8e1-e95af0025640\") " pod="openstack/keystone-e8e4-account-create-wl7gf" Oct 11 07:55:20 crc kubenswrapper[5016]: I1011 07:55:20.406226 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z2qq\" (UniqueName: \"kubernetes.io/projected/d18eb638-3529-417c-a8e1-e95af0025640-kube-api-access-5z2qq\") pod \"keystone-e8e4-account-create-wl7gf\" (UID: \"d18eb638-3529-417c-a8e1-e95af0025640\") " pod="openstack/keystone-e8e4-account-create-wl7gf" Oct 11 07:55:20 crc kubenswrapper[5016]: I1011 07:55:20.428133 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z2qq\" (UniqueName: \"kubernetes.io/projected/d18eb638-3529-417c-a8e1-e95af0025640-kube-api-access-5z2qq\") pod \"keystone-e8e4-account-create-wl7gf\" (UID: \"d18eb638-3529-417c-a8e1-e95af0025640\") " pod="openstack/keystone-e8e4-account-create-wl7gf" Oct 11 07:55:20 crc kubenswrapper[5016]: I1011 07:55:20.550262 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-073c-account-create-9swf8"] Oct 11 07:55:20 crc kubenswrapper[5016]: I1011 07:55:20.552245 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-073c-account-create-9swf8" Oct 11 07:55:20 crc kubenswrapper[5016]: I1011 07:55:20.554614 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Oct 11 07:55:20 crc kubenswrapper[5016]: I1011 07:55:20.561412 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-073c-account-create-9swf8"] Oct 11 07:55:20 crc kubenswrapper[5016]: I1011 07:55:20.579069 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e8e4-account-create-wl7gf" Oct 11 07:55:20 crc kubenswrapper[5016]: I1011 07:55:20.609496 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wvrq\" (UniqueName: \"kubernetes.io/projected/0427c1c7-53f5-4ea8-a0d2-fd33f27aa5b7-kube-api-access-8wvrq\") pod \"placement-073c-account-create-9swf8\" (UID: \"0427c1c7-53f5-4ea8-a0d2-fd33f27aa5b7\") " pod="openstack/placement-073c-account-create-9swf8" Oct 11 07:55:20 crc kubenswrapper[5016]: I1011 07:55:20.710646 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wvrq\" (UniqueName: \"kubernetes.io/projected/0427c1c7-53f5-4ea8-a0d2-fd33f27aa5b7-kube-api-access-8wvrq\") pod \"placement-073c-account-create-9swf8\" (UID: \"0427c1c7-53f5-4ea8-a0d2-fd33f27aa5b7\") " pod="openstack/placement-073c-account-create-9swf8" Oct 11 07:55:20 crc kubenswrapper[5016]: I1011 07:55:20.746942 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wvrq\" (UniqueName: \"kubernetes.io/projected/0427c1c7-53f5-4ea8-a0d2-fd33f27aa5b7-kube-api-access-8wvrq\") pod \"placement-073c-account-create-9swf8\" (UID: \"0427c1c7-53f5-4ea8-a0d2-fd33f27aa5b7\") " pod="openstack/placement-073c-account-create-9swf8" Oct 11 07:55:20 crc kubenswrapper[5016]: I1011 07:55:20.881723 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-073c-account-create-9swf8" Oct 11 07:55:20 crc kubenswrapper[5016]: I1011 07:55:20.885388 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-wnxjs"] Oct 11 07:55:20 crc kubenswrapper[5016]: I1011 07:55:20.886346 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-wnxjs" Oct 11 07:55:20 crc kubenswrapper[5016]: I1011 07:55:20.891892 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Oct 11 07:55:20 crc kubenswrapper[5016]: I1011 07:55:20.892102 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-l2mcw" Oct 11 07:55:20 crc kubenswrapper[5016]: I1011 07:55:20.902814 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-wnxjs"] Oct 11 07:55:20 crc kubenswrapper[5016]: I1011 07:55:20.915478 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5718a79-8ed4-45db-bcc0-f11946055cc0-db-sync-config-data\") pod \"glance-db-sync-wnxjs\" (UID: \"c5718a79-8ed4-45db-bcc0-f11946055cc0\") " pod="openstack/glance-db-sync-wnxjs" Oct 11 07:55:20 crc kubenswrapper[5016]: I1011 07:55:20.915531 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5718a79-8ed4-45db-bcc0-f11946055cc0-combined-ca-bundle\") pod \"glance-db-sync-wnxjs\" (UID: \"c5718a79-8ed4-45db-bcc0-f11946055cc0\") " pod="openstack/glance-db-sync-wnxjs" Oct 11 07:55:20 crc kubenswrapper[5016]: I1011 07:55:20.915564 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5718a79-8ed4-45db-bcc0-f11946055cc0-config-data\") pod \"glance-db-sync-wnxjs\" (UID: \"c5718a79-8ed4-45db-bcc0-f11946055cc0\") " pod="openstack/glance-db-sync-wnxjs" Oct 11 07:55:20 crc kubenswrapper[5016]: I1011 07:55:20.915598 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqmqd\" (UniqueName: \"kubernetes.io/projected/c5718a79-8ed4-45db-bcc0-f11946055cc0-kube-api-access-jqmqd\") pod \"glance-db-sync-wnxjs\" (UID: \"c5718a79-8ed4-45db-bcc0-f11946055cc0\") " pod="openstack/glance-db-sync-wnxjs" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.010811 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e8e4-account-create-wl7gf"] Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.017470 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5718a79-8ed4-45db-bcc0-f11946055cc0-db-sync-config-data\") pod \"glance-db-sync-wnxjs\" (UID: \"c5718a79-8ed4-45db-bcc0-f11946055cc0\") " pod="openstack/glance-db-sync-wnxjs" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.017515 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5718a79-8ed4-45db-bcc0-f11946055cc0-combined-ca-bundle\") pod \"glance-db-sync-wnxjs\" (UID: \"c5718a79-8ed4-45db-bcc0-f11946055cc0\") " pod="openstack/glance-db-sync-wnxjs" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.017536 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5718a79-8ed4-45db-bcc0-f11946055cc0-config-data\") pod \"glance-db-sync-wnxjs\" (UID: \"c5718a79-8ed4-45db-bcc0-f11946055cc0\") " pod="openstack/glance-db-sync-wnxjs" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.017561 5016 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jqmqd\" (UniqueName: \"kubernetes.io/projected/c5718a79-8ed4-45db-bcc0-f11946055cc0-kube-api-access-jqmqd\") pod \"glance-db-sync-wnxjs\" (UID: \"c5718a79-8ed4-45db-bcc0-f11946055cc0\") " pod="openstack/glance-db-sync-wnxjs" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.022304 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5718a79-8ed4-45db-bcc0-f11946055cc0-db-sync-config-data\") pod \"glance-db-sync-wnxjs\" (UID: \"c5718a79-8ed4-45db-bcc0-f11946055cc0\") " pod="openstack/glance-db-sync-wnxjs" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.030163 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5718a79-8ed4-45db-bcc0-f11946055cc0-config-data\") pod \"glance-db-sync-wnxjs\" (UID: \"c5718a79-8ed4-45db-bcc0-f11946055cc0\") " pod="openstack/glance-db-sync-wnxjs" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.030211 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5718a79-8ed4-45db-bcc0-f11946055cc0-combined-ca-bundle\") pod \"glance-db-sync-wnxjs\" (UID: \"c5718a79-8ed4-45db-bcc0-f11946055cc0\") " pod="openstack/glance-db-sync-wnxjs" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.037535 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqmqd\" (UniqueName: \"kubernetes.io/projected/c5718a79-8ed4-45db-bcc0-f11946055cc0-kube-api-access-jqmqd\") pod \"glance-db-sync-wnxjs\" (UID: \"c5718a79-8ed4-45db-bcc0-f11946055cc0\") " pod="openstack/glance-db-sync-wnxjs" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.073715 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e8e4-account-create-wl7gf" event={"ID":"d18eb638-3529-417c-a8e1-e95af0025640","Type":"ContainerStarted","Data":"ffda420ee4980e9ab3c126cdf17476abda67fc803572c43da5fb5bbeb01a0768"} Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.100141 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-db7s5" podUID="69f3f361-bd63-4b18-afd7-3c64169af0a8" containerName="ovn-controller" probeResult="failure" output=< Oct 11 07:55:21 crc kubenswrapper[5016]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Oct 11 07:55:21 crc kubenswrapper[5016]: > Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.108984 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-w5nkt" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.118862 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-w5nkt" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.270185 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wnxjs" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.313734 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-073c-account-create-9swf8"] Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.332171 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-db7s5-config-p9lfm"] Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.333109 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-db7s5-config-p9lfm" Oct 11 07:55:21 crc kubenswrapper[5016]: W1011 07:55:21.334639 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0427c1c7_53f5_4ea8_a0d2_fd33f27aa5b7.slice/crio-5a7c748de4c91e9f649e6fe1117e13f6ce83cbffa65a0f5ffdae23891eec79dc WatchSource:0}: Error finding container 5a7c748de4c91e9f649e6fe1117e13f6ce83cbffa65a0f5ffdae23891eec79dc: Status 404 returned error can't find the container with id 5a7c748de4c91e9f649e6fe1117e13f6ce83cbffa65a0f5ffdae23891eec79dc Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.337789 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.385587 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-db7s5-config-p9lfm"] Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.424619 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9de5adc6-a820-4cad-93a4-b7e2c3625262-var-log-ovn\") pod \"ovn-controller-db7s5-config-p9lfm\" (UID: \"9de5adc6-a820-4cad-93a4-b7e2c3625262\") " pod="openstack/ovn-controller-db7s5-config-p9lfm" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.425957 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9de5adc6-a820-4cad-93a4-b7e2c3625262-var-run-ovn\") pod \"ovn-controller-db7s5-config-p9lfm\" (UID: \"9de5adc6-a820-4cad-93a4-b7e2c3625262\") " pod="openstack/ovn-controller-db7s5-config-p9lfm" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.426050 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9de5adc6-a820-4cad-93a4-b7e2c3625262-scripts\") pod \"ovn-controller-db7s5-config-p9lfm\" (UID: \"9de5adc6-a820-4cad-93a4-b7e2c3625262\") " pod="openstack/ovn-controller-db7s5-config-p9lfm" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.426141 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9de5adc6-a820-4cad-93a4-b7e2c3625262-var-run\") pod \"ovn-controller-db7s5-config-p9lfm\" (UID: \"9de5adc6-a820-4cad-93a4-b7e2c3625262\") " pod="openstack/ovn-controller-db7s5-config-p9lfm" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.426311 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsvjz\" (UniqueName: \"kubernetes.io/projected/9de5adc6-a820-4cad-93a4-b7e2c3625262-kube-api-access-dsvjz\") pod \"ovn-controller-db7s5-config-p9lfm\" (UID: \"9de5adc6-a820-4cad-93a4-b7e2c3625262\") " pod="openstack/ovn-controller-db7s5-config-p9lfm" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.426413 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9de5adc6-a820-4cad-93a4-b7e2c3625262-additional-scripts\") pod \"ovn-controller-db7s5-config-p9lfm\" (UID: \"9de5adc6-a820-4cad-93a4-b7e2c3625262\") " pod="openstack/ovn-controller-db7s5-config-p9lfm" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.530267 5016 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dsvjz\" (UniqueName: \"kubernetes.io/projected/9de5adc6-a820-4cad-93a4-b7e2c3625262-kube-api-access-dsvjz\") pod \"ovn-controller-db7s5-config-p9lfm\" (UID: \"9de5adc6-a820-4cad-93a4-b7e2c3625262\") " pod="openstack/ovn-controller-db7s5-config-p9lfm" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.530315 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9de5adc6-a820-4cad-93a4-b7e2c3625262-additional-scripts\") pod \"ovn-controller-db7s5-config-p9lfm\" (UID: \"9de5adc6-a820-4cad-93a4-b7e2c3625262\") " pod="openstack/ovn-controller-db7s5-config-p9lfm" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.530358 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9de5adc6-a820-4cad-93a4-b7e2c3625262-var-log-ovn\") pod \"ovn-controller-db7s5-config-p9lfm\" (UID: \"9de5adc6-a820-4cad-93a4-b7e2c3625262\") " pod="openstack/ovn-controller-db7s5-config-p9lfm" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.530404 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9de5adc6-a820-4cad-93a4-b7e2c3625262-var-run-ovn\") pod \"ovn-controller-db7s5-config-p9lfm\" (UID: \"9de5adc6-a820-4cad-93a4-b7e2c3625262\") " pod="openstack/ovn-controller-db7s5-config-p9lfm" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.530431 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9de5adc6-a820-4cad-93a4-b7e2c3625262-scripts\") pod \"ovn-controller-db7s5-config-p9lfm\" (UID: \"9de5adc6-a820-4cad-93a4-b7e2c3625262\") " pod="openstack/ovn-controller-db7s5-config-p9lfm" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.530466 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9de5adc6-a820-4cad-93a4-b7e2c3625262-var-run\") pod \"ovn-controller-db7s5-config-p9lfm\" (UID: \"9de5adc6-a820-4cad-93a4-b7e2c3625262\") " pod="openstack/ovn-controller-db7s5-config-p9lfm" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.530703 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9de5adc6-a820-4cad-93a4-b7e2c3625262-var-run-ovn\") pod \"ovn-controller-db7s5-config-p9lfm\" (UID: \"9de5adc6-a820-4cad-93a4-b7e2c3625262\") " pod="openstack/ovn-controller-db7s5-config-p9lfm" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.530718 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9de5adc6-a820-4cad-93a4-b7e2c3625262-var-run\") pod \"ovn-controller-db7s5-config-p9lfm\" (UID: \"9de5adc6-a820-4cad-93a4-b7e2c3625262\") " pod="openstack/ovn-controller-db7s5-config-p9lfm" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.530843 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9de5adc6-a820-4cad-93a4-b7e2c3625262-var-log-ovn\") pod \"ovn-controller-db7s5-config-p9lfm\" (UID: \"9de5adc6-a820-4cad-93a4-b7e2c3625262\") " pod="openstack/ovn-controller-db7s5-config-p9lfm" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.531406 5016 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9de5adc6-a820-4cad-93a4-b7e2c3625262-additional-scripts\") pod \"ovn-controller-db7s5-config-p9lfm\" (UID: \"9de5adc6-a820-4cad-93a4-b7e2c3625262\") " pod="openstack/ovn-controller-db7s5-config-p9lfm" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.532377 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9de5adc6-a820-4cad-93a4-b7e2c3625262-scripts\") pod \"ovn-controller-db7s5-config-p9lfm\" (UID: \"9de5adc6-a820-4cad-93a4-b7e2c3625262\") " pod="openstack/ovn-controller-db7s5-config-p9lfm" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.552753 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsvjz\" (UniqueName: \"kubernetes.io/projected/9de5adc6-a820-4cad-93a4-b7e2c3625262-kube-api-access-dsvjz\") pod \"ovn-controller-db7s5-config-p9lfm\" (UID: \"9de5adc6-a820-4cad-93a4-b7e2c3625262\") " pod="openstack/ovn-controller-db7s5-config-p9lfm" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.674372 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-db7s5-config-p9lfm" Oct 11 07:55:21 crc kubenswrapper[5016]: I1011 07:55:21.823925 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-wnxjs"] Oct 11 07:55:22 crc kubenswrapper[5016]: I1011 07:55:22.084547 5016 generic.go:334] "Generic (PLEG): container finished" podID="bae29196-1d16-4563-9e7d-0981a96a352f" containerID="3d68943f406a1d1d8566b5da25fdde5e8390a80f69134b7a73a0a0027cfd3e5c" exitCode=0 Oct 11 07:55:22 crc kubenswrapper[5016]: I1011 07:55:22.084672 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bae29196-1d16-4563-9e7d-0981a96a352f","Type":"ContainerDied","Data":"3d68943f406a1d1d8566b5da25fdde5e8390a80f69134b7a73a0a0027cfd3e5c"} Oct 11 07:55:22 crc kubenswrapper[5016]: I1011 07:55:22.086526 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wnxjs" event={"ID":"c5718a79-8ed4-45db-bcc0-f11946055cc0","Type":"ContainerStarted","Data":"3a3d6f35ee81994fa56badddcd2bc2e98e618e5f6bd74ec2faad5d0298b359b9"} Oct 11 07:55:22 crc kubenswrapper[5016]: I1011 07:55:22.093028 5016 generic.go:334] "Generic (PLEG): container finished" podID="0427c1c7-53f5-4ea8-a0d2-fd33f27aa5b7" containerID="00ab27166a916c7cded3f6bbe9f7c9eebdae8d377b70503fb0f38b6f16a3ad97" exitCode=0 Oct 11 07:55:22 crc kubenswrapper[5016]: I1011 07:55:22.093152 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-073c-account-create-9swf8" event={"ID":"0427c1c7-53f5-4ea8-a0d2-fd33f27aa5b7","Type":"ContainerDied","Data":"00ab27166a916c7cded3f6bbe9f7c9eebdae8d377b70503fb0f38b6f16a3ad97"} Oct 11 07:55:22 crc kubenswrapper[5016]: I1011 07:55:22.093184 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-073c-account-create-9swf8" event={"ID":"0427c1c7-53f5-4ea8-a0d2-fd33f27aa5b7","Type":"ContainerStarted","Data":"5a7c748de4c91e9f649e6fe1117e13f6ce83cbffa65a0f5ffdae23891eec79dc"} Oct 11 07:55:22 crc kubenswrapper[5016]: I1011 07:55:22.095170 5016 generic.go:334] "Generic (PLEG): container finished" podID="d18eb638-3529-417c-a8e1-e95af0025640" containerID="dc3b9aba37b812e9adbc72c37c1633f4dd94e95327f3f40832c0de8ef91d5a40" exitCode=0 Oct 11 07:55:22 crc kubenswrapper[5016]: I1011 07:55:22.095235 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-e8e4-account-create-wl7gf" event={"ID":"d18eb638-3529-417c-a8e1-e95af0025640","Type":"ContainerDied","Data":"dc3b9aba37b812e9adbc72c37c1633f4dd94e95327f3f40832c0de8ef91d5a40"} Oct 11 07:55:22 crc kubenswrapper[5016]: I1011 07:55:22.096936 5016 generic.go:334] "Generic (PLEG): container finished" podID="67a018eb-911e-4491-9dae-a1dfb3172e05" containerID="eb076986b74562535eb3c3836b33202cc4ddaa78a14b45090fb5fe3aaa857fad" exitCode=0 Oct 11 07:55:22 crc kubenswrapper[5016]: I1011 07:55:22.097068 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"67a018eb-911e-4491-9dae-a1dfb3172e05","Type":"ContainerDied","Data":"eb076986b74562535eb3c3836b33202cc4ddaa78a14b45090fb5fe3aaa857fad"} Oct 11 07:55:22 crc kubenswrapper[5016]: I1011 07:55:22.134575 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-db7s5-config-p9lfm"] Oct 11 07:55:22 crc kubenswrapper[5016]: W1011 07:55:22.138581 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9de5adc6_a820_4cad_93a4_b7e2c3625262.slice/crio-0d973a18d5c415cd9d5c5332bbec84f546b2654e6ba628fe041846657ad6b7a4 WatchSource:0}: Error finding container 0d973a18d5c415cd9d5c5332bbec84f546b2654e6ba628fe041846657ad6b7a4: Status 404 returned error can't find the container with id 0d973a18d5c415cd9d5c5332bbec84f546b2654e6ba628fe041846657ad6b7a4 Oct 11 07:55:23 crc kubenswrapper[5016]: I1011 07:55:23.106603 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"67a018eb-911e-4491-9dae-a1dfb3172e05","Type":"ContainerStarted","Data":"9c57af0c537256e2f384b930add4101c402778d3b22c083bc3c4b2987bb658f1"} Oct 11 07:55:23 crc kubenswrapper[5016]: I1011 07:55:23.107066 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 11 07:55:23 crc kubenswrapper[5016]: I1011 07:55:23.108882 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bae29196-1d16-4563-9e7d-0981a96a352f","Type":"ContainerStarted","Data":"68beb7a34c7e2a08f1f40aef07d8f2f2992ec09be2feee39a5b0b5817aaf9ab1"} Oct 11 07:55:23 crc kubenswrapper[5016]: I1011 07:55:23.109365 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:55:23 crc kubenswrapper[5016]: I1011 07:55:23.112560 5016 generic.go:334] "Generic (PLEG): container finished" podID="9de5adc6-a820-4cad-93a4-b7e2c3625262" containerID="5342842da9c803c984b029f42bd27bd9d76ed7f9e154d5bc6f083a318c90ddbe" exitCode=0 Oct 11 07:55:23 crc kubenswrapper[5016]: I1011 07:55:23.112847 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-db7s5-config-p9lfm" event={"ID":"9de5adc6-a820-4cad-93a4-b7e2c3625262","Type":"ContainerDied","Data":"5342842da9c803c984b029f42bd27bd9d76ed7f9e154d5bc6f083a318c90ddbe"} Oct 11 07:55:23 crc kubenswrapper[5016]: I1011 07:55:23.112935 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-db7s5-config-p9lfm" event={"ID":"9de5adc6-a820-4cad-93a4-b7e2c3625262","Type":"ContainerStarted","Data":"0d973a18d5c415cd9d5c5332bbec84f546b2654e6ba628fe041846657ad6b7a4"} Oct 11 07:55:23 crc kubenswrapper[5016]: I1011 07:55:23.132704 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=48.76239111 podStartE2EDuration="58.132646887s" 
podCreationTimestamp="2025-10-11 07:54:25 +0000 UTC" firstStartedPulling="2025-10-11 07:54:37.957004896 +0000 UTC m=+865.857460842" lastFinishedPulling="2025-10-11 07:54:47.327260663 +0000 UTC m=+875.227716619" observedRunningTime="2025-10-11 07:55:23.132431121 +0000 UTC m=+911.032887157" watchObservedRunningTime="2025-10-11 07:55:23.132646887 +0000 UTC m=+911.033102833" Oct 11 07:55:23 crc kubenswrapper[5016]: I1011 07:55:23.167312 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=46.818993654 podStartE2EDuration="57.167297816s" podCreationTimestamp="2025-10-11 07:54:26 +0000 UTC" firstStartedPulling="2025-10-11 07:54:38.056903322 +0000 UTC m=+865.957359268" lastFinishedPulling="2025-10-11 07:54:48.405207484 +0000 UTC m=+876.305663430" observedRunningTime="2025-10-11 07:55:23.164171922 +0000 UTC m=+911.064627868" watchObservedRunningTime="2025-10-11 07:55:23.167297816 +0000 UTC m=+911.067753762" Oct 11 07:55:23 crc kubenswrapper[5016]: I1011 07:55:23.525358 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e8e4-account-create-wl7gf" Oct 11 07:55:23 crc kubenswrapper[5016]: I1011 07:55:23.534254 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-073c-account-create-9swf8" Oct 11 07:55:23 crc kubenswrapper[5016]: I1011 07:55:23.573959 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wvrq\" (UniqueName: \"kubernetes.io/projected/0427c1c7-53f5-4ea8-a0d2-fd33f27aa5b7-kube-api-access-8wvrq\") pod \"0427c1c7-53f5-4ea8-a0d2-fd33f27aa5b7\" (UID: \"0427c1c7-53f5-4ea8-a0d2-fd33f27aa5b7\") " Oct 11 07:55:23 crc kubenswrapper[5016]: I1011 07:55:23.574221 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z2qq\" (UniqueName: \"kubernetes.io/projected/d18eb638-3529-417c-a8e1-e95af0025640-kube-api-access-5z2qq\") pod \"d18eb638-3529-417c-a8e1-e95af0025640\" (UID: \"d18eb638-3529-417c-a8e1-e95af0025640\") " Oct 11 07:55:23 crc kubenswrapper[5016]: I1011 07:55:23.579793 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0427c1c7-53f5-4ea8-a0d2-fd33f27aa5b7-kube-api-access-8wvrq" (OuterVolumeSpecName: "kube-api-access-8wvrq") pod "0427c1c7-53f5-4ea8-a0d2-fd33f27aa5b7" (UID: "0427c1c7-53f5-4ea8-a0d2-fd33f27aa5b7"). InnerVolumeSpecName "kube-api-access-8wvrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:55:23 crc kubenswrapper[5016]: I1011 07:55:23.580356 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d18eb638-3529-417c-a8e1-e95af0025640-kube-api-access-5z2qq" (OuterVolumeSpecName: "kube-api-access-5z2qq") pod "d18eb638-3529-417c-a8e1-e95af0025640" (UID: "d18eb638-3529-417c-a8e1-e95af0025640"). InnerVolumeSpecName "kube-api-access-5z2qq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:55:23 crc kubenswrapper[5016]: I1011 07:55:23.675812 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wvrq\" (UniqueName: \"kubernetes.io/projected/0427c1c7-53f5-4ea8-a0d2-fd33f27aa5b7-kube-api-access-8wvrq\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:23 crc kubenswrapper[5016]: I1011 07:55:23.676107 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z2qq\" (UniqueName: \"kubernetes.io/projected/d18eb638-3529-417c-a8e1-e95af0025640-kube-api-access-5z2qq\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:24 crc kubenswrapper[5016]: I1011 07:55:24.143171 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-073c-account-create-9swf8" event={"ID":"0427c1c7-53f5-4ea8-a0d2-fd33f27aa5b7","Type":"ContainerDied","Data":"5a7c748de4c91e9f649e6fe1117e13f6ce83cbffa65a0f5ffdae23891eec79dc"} Oct 11 07:55:24 crc kubenswrapper[5016]: I1011 07:55:24.143212 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a7c748de4c91e9f649e6fe1117e13f6ce83cbffa65a0f5ffdae23891eec79dc" Oct 11 07:55:24 crc kubenswrapper[5016]: I1011 07:55:24.143266 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-073c-account-create-9swf8" Oct 11 07:55:24 crc kubenswrapper[5016]: I1011 07:55:24.159132 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e8e4-account-create-wl7gf" event={"ID":"d18eb638-3529-417c-a8e1-e95af0025640","Type":"ContainerDied","Data":"ffda420ee4980e9ab3c126cdf17476abda67fc803572c43da5fb5bbeb01a0768"} Oct 11 07:55:24 crc kubenswrapper[5016]: I1011 07:55:24.159186 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffda420ee4980e9ab3c126cdf17476abda67fc803572c43da5fb5bbeb01a0768" Oct 11 07:55:24 crc kubenswrapper[5016]: I1011 07:55:24.159265 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e8e4-account-create-wl7gf" Oct 11 07:55:24 crc kubenswrapper[5016]: I1011 07:55:24.439873 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-db7s5-config-p9lfm" Oct 11 07:55:24 crc kubenswrapper[5016]: I1011 07:55:24.588997 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9de5adc6-a820-4cad-93a4-b7e2c3625262-var-run\") pod \"9de5adc6-a820-4cad-93a4-b7e2c3625262\" (UID: \"9de5adc6-a820-4cad-93a4-b7e2c3625262\") " Oct 11 07:55:24 crc kubenswrapper[5016]: I1011 07:55:24.589085 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9de5adc6-a820-4cad-93a4-b7e2c3625262-var-run-ovn\") pod \"9de5adc6-a820-4cad-93a4-b7e2c3625262\" (UID: \"9de5adc6-a820-4cad-93a4-b7e2c3625262\") " Oct 11 07:55:24 crc kubenswrapper[5016]: I1011 07:55:24.589118 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9de5adc6-a820-4cad-93a4-b7e2c3625262-additional-scripts\") pod \"9de5adc6-a820-4cad-93a4-b7e2c3625262\" (UID: \"9de5adc6-a820-4cad-93a4-b7e2c3625262\") " Oct 11 07:55:24 crc kubenswrapper[5016]: I1011 07:55:24.589098 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9de5adc6-a820-4cad-93a4-b7e2c3625262-var-run" (OuterVolumeSpecName: "var-run") pod "9de5adc6-a820-4cad-93a4-b7e2c3625262" (UID: "9de5adc6-a820-4cad-93a4-b7e2c3625262"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 07:55:24 crc kubenswrapper[5016]: I1011 07:55:24.589128 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9de5adc6-a820-4cad-93a4-b7e2c3625262-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "9de5adc6-a820-4cad-93a4-b7e2c3625262" (UID: "9de5adc6-a820-4cad-93a4-b7e2c3625262"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 07:55:24 crc kubenswrapper[5016]: I1011 07:55:24.589150 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9de5adc6-a820-4cad-93a4-b7e2c3625262-var-log-ovn\") pod \"9de5adc6-a820-4cad-93a4-b7e2c3625262\" (UID: \"9de5adc6-a820-4cad-93a4-b7e2c3625262\") " Oct 11 07:55:24 crc kubenswrapper[5016]: I1011 07:55:24.589187 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9de5adc6-a820-4cad-93a4-b7e2c3625262-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "9de5adc6-a820-4cad-93a4-b7e2c3625262" (UID: "9de5adc6-a820-4cad-93a4-b7e2c3625262"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 07:55:24 crc kubenswrapper[5016]: I1011 07:55:24.589305 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9de5adc6-a820-4cad-93a4-b7e2c3625262-scripts\") pod \"9de5adc6-a820-4cad-93a4-b7e2c3625262\" (UID: \"9de5adc6-a820-4cad-93a4-b7e2c3625262\") " Oct 11 07:55:24 crc kubenswrapper[5016]: I1011 07:55:24.589339 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsvjz\" (UniqueName: \"kubernetes.io/projected/9de5adc6-a820-4cad-93a4-b7e2c3625262-kube-api-access-dsvjz\") pod \"9de5adc6-a820-4cad-93a4-b7e2c3625262\" (UID: \"9de5adc6-a820-4cad-93a4-b7e2c3625262\") " Oct 11 07:55:24 crc kubenswrapper[5016]: I1011 07:55:24.589960 5016 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9de5adc6-a820-4cad-93a4-b7e2c3625262-var-run\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:24 crc kubenswrapper[5016]: I1011 07:55:24.589977 5016 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9de5adc6-a820-4cad-93a4-b7e2c3625262-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:24 crc kubenswrapper[5016]: I1011 07:55:24.589987 5016 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9de5adc6-a820-4cad-93a4-b7e2c3625262-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:24 crc kubenswrapper[5016]: I1011 07:55:24.590000 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9de5adc6-a820-4cad-93a4-b7e2c3625262-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "9de5adc6-a820-4cad-93a4-b7e2c3625262" (UID: "9de5adc6-a820-4cad-93a4-b7e2c3625262"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:55:24 crc kubenswrapper[5016]: I1011 07:55:24.590368 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9de5adc6-a820-4cad-93a4-b7e2c3625262-scripts" (OuterVolumeSpecName: "scripts") pod "9de5adc6-a820-4cad-93a4-b7e2c3625262" (UID: "9de5adc6-a820-4cad-93a4-b7e2c3625262"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:55:24 crc kubenswrapper[5016]: I1011 07:55:24.595300 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9de5adc6-a820-4cad-93a4-b7e2c3625262-kube-api-access-dsvjz" (OuterVolumeSpecName: "kube-api-access-dsvjz") pod "9de5adc6-a820-4cad-93a4-b7e2c3625262" (UID: "9de5adc6-a820-4cad-93a4-b7e2c3625262"). InnerVolumeSpecName "kube-api-access-dsvjz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:55:24 crc kubenswrapper[5016]: I1011 07:55:24.691461 5016 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9de5adc6-a820-4cad-93a4-b7e2c3625262-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:24 crc kubenswrapper[5016]: I1011 07:55:24.691499 5016 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9de5adc6-a820-4cad-93a4-b7e2c3625262-scripts\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:24 crc kubenswrapper[5016]: I1011 07:55:24.691512 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsvjz\" (UniqueName: \"kubernetes.io/projected/9de5adc6-a820-4cad-93a4-b7e2c3625262-kube-api-access-dsvjz\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:25 crc kubenswrapper[5016]: I1011 07:55:25.167322 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-db7s5-config-p9lfm" event={"ID":"9de5adc6-a820-4cad-93a4-b7e2c3625262","Type":"ContainerDied","Data":"0d973a18d5c415cd9d5c5332bbec84f546b2654e6ba628fe041846657ad6b7a4"} Oct 11 07:55:25 crc kubenswrapper[5016]: I1011 07:55:25.167358 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d973a18d5c415cd9d5c5332bbec84f546b2654e6ba628fe041846657ad6b7a4" Oct 11 07:55:25 crc kubenswrapper[5016]: I1011 07:55:25.168542 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-db7s5-config-p9lfm" Oct 11 07:55:25 crc kubenswrapper[5016]: I1011 07:55:25.542759 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-db7s5-config-p9lfm"] Oct 11 07:55:25 crc kubenswrapper[5016]: I1011 07:55:25.548020 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-db7s5-config-p9lfm"] Oct 11 07:55:26 crc kubenswrapper[5016]: I1011 07:55:26.169783 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-db7s5" Oct 11 07:55:27 crc kubenswrapper[5016]: I1011 07:55:27.173224 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9de5adc6-a820-4cad-93a4-b7e2c3625262" path="/var/lib/kubelet/pods/9de5adc6-a820-4cad-93a4-b7e2c3625262/volumes" Oct 11 07:55:34 crc kubenswrapper[5016]: I1011 07:55:34.257753 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wnxjs" event={"ID":"c5718a79-8ed4-45db-bcc0-f11946055cc0","Type":"ContainerStarted","Data":"c0fc3d0a66d32bf0d78d52a2e85007d333a797aa0fec504d4d9603461f524393"} Oct 11 07:55:34 crc kubenswrapper[5016]: I1011 07:55:34.288200 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-wnxjs" podStartSLOduration=3.166306938 podStartE2EDuration="14.288123974s" podCreationTimestamp="2025-10-11 07:55:20 +0000 UTC" firstStartedPulling="2025-10-11 07:55:21.885861443 +0000 UTC m=+909.786317389" lastFinishedPulling="2025-10-11 07:55:33.007678479 +0000 UTC m=+920.908134425" observedRunningTime="2025-10-11 07:55:34.273118963 +0000 UTC m=+922.173574909" watchObservedRunningTime="2025-10-11 07:55:34.288123974 +0000 UTC m=+922.188579950" Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.123154 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.123989 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.124098 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.125413 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"265caf0315ed7d9cc490abb97692bb40c37bc7e9af0dd0d10a990157231f7f84"} pod="openshift-machine-config-operator/machine-config-daemon-49bvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.125563 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" containerID="cri-o://265caf0315ed7d9cc490abb97692bb40c37bc7e9af0dd0d10a990157231f7f84" gracePeriod=600 Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.274461 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.297351 5016 generic.go:334] "Generic (PLEG): container finished" podID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerID="265caf0315ed7d9cc490abb97692bb40c37bc7e9af0dd0d10a990157231f7f84" exitCode=0 Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.297416 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerDied","Data":"265caf0315ed7d9cc490abb97692bb40c37bc7e9af0dd0d10a990157231f7f84"} Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.297469 5016 scope.go:117] "RemoveContainer" containerID="9e47e6adcac812a126122f7057fc0b9abd8d456e8565df449156e69a78cd7a4b" Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.558317 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.702352 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-66h42"] Oct 11 07:55:37 crc kubenswrapper[5016]: E1011 07:55:37.702648 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18eb638-3529-417c-a8e1-e95af0025640" containerName="mariadb-account-create" Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.702664 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18eb638-3529-417c-a8e1-e95af0025640" containerName="mariadb-account-create" Oct 11 07:55:37 crc kubenswrapper[5016]: E1011 07:55:37.702696 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0427c1c7-53f5-4ea8-a0d2-fd33f27aa5b7" containerName="mariadb-account-create" Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.702702 5016 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0427c1c7-53f5-4ea8-a0d2-fd33f27aa5b7" containerName="mariadb-account-create" Oct 11 07:55:37 crc kubenswrapper[5016]: E1011 07:55:37.702719 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9de5adc6-a820-4cad-93a4-b7e2c3625262" containerName="ovn-config" Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.702725 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="9de5adc6-a820-4cad-93a4-b7e2c3625262" containerName="ovn-config" Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.702870 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="d18eb638-3529-417c-a8e1-e95af0025640" containerName="mariadb-account-create" Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.702889 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="0427c1c7-53f5-4ea8-a0d2-fd33f27aa5b7" containerName="mariadb-account-create" Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.702901 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="9de5adc6-a820-4cad-93a4-b7e2c3625262" containerName="ovn-config" Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.703380 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-66h42" Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.711765 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-66h42"] Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.817800 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-b7lpm"] Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.818793 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-b7lpm" Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.832598 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-b7lpm"] Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.838496 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmcvq\" (UniqueName: \"kubernetes.io/projected/cf48b893-4872-446b-9e65-d7f16bd21b40-kube-api-access-zmcvq\") pod \"cinder-db-create-66h42\" (UID: \"cf48b893-4872-446b-9e65-d7f16bd21b40\") " pod="openstack/cinder-db-create-66h42" Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.908224 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-zvgbq"] Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.909166 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-zvgbq" Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.921025 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-zvgbq"] Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.940294 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh6nq\" (UniqueName: \"kubernetes.io/projected/7c2cbade-3503-443a-93b5-17e53e532a6c-kube-api-access-bh6nq\") pod \"barbican-db-create-b7lpm\" (UID: \"7c2cbade-3503-443a-93b5-17e53e532a6c\") " pod="openstack/barbican-db-create-b7lpm" Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.940374 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmcvq\" (UniqueName: \"kubernetes.io/projected/cf48b893-4872-446b-9e65-d7f16bd21b40-kube-api-access-zmcvq\") pod \"cinder-db-create-66h42\" (UID: \"cf48b893-4872-446b-9e65-d7f16bd21b40\") " pod="openstack/cinder-db-create-66h42" Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.964435 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmcvq\" (UniqueName: \"kubernetes.io/projected/cf48b893-4872-446b-9e65-d7f16bd21b40-kube-api-access-zmcvq\") pod \"cinder-db-create-66h42\" (UID: \"cf48b893-4872-446b-9e65-d7f16bd21b40\") " pod="openstack/cinder-db-create-66h42" Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.997222 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-qjqc4"] Oct 11 07:55:37 crc kubenswrapper[5016]: I1011 07:55:37.998374 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qjqc4" Oct 11 07:55:38 crc kubenswrapper[5016]: I1011 07:55:38.000046 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 11 07:55:38 crc kubenswrapper[5016]: I1011 07:55:38.000607 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zk98n" Oct 11 07:55:38 crc kubenswrapper[5016]: I1011 07:55:38.000795 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 11 07:55:38 crc kubenswrapper[5016]: I1011 07:55:38.001174 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 11 07:55:38 crc kubenswrapper[5016]: I1011 07:55:38.016203 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-qjqc4"] Oct 11 07:55:38 crc kubenswrapper[5016]: I1011 07:55:38.019283 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-66h42" Oct 11 07:55:38 crc kubenswrapper[5016]: I1011 07:55:38.044646 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6g5g\" (UniqueName: \"kubernetes.io/projected/d8a3bd97-03dd-4dcb-9538-76eba3893f60-kube-api-access-p6g5g\") pod \"neutron-db-create-zvgbq\" (UID: \"d8a3bd97-03dd-4dcb-9538-76eba3893f60\") " pod="openstack/neutron-db-create-zvgbq" Oct 11 07:55:38 crc kubenswrapper[5016]: I1011 07:55:38.044748 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh6nq\" (UniqueName: \"kubernetes.io/projected/7c2cbade-3503-443a-93b5-17e53e532a6c-kube-api-access-bh6nq\") pod \"barbican-db-create-b7lpm\" (UID: \"7c2cbade-3503-443a-93b5-17e53e532a6c\") " pod="openstack/barbican-db-create-b7lpm" Oct 11 07:55:38 crc kubenswrapper[5016]: I1011 07:55:38.065330 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh6nq\" (UniqueName: \"kubernetes.io/projected/7c2cbade-3503-443a-93b5-17e53e532a6c-kube-api-access-bh6nq\") pod \"barbican-db-create-b7lpm\" (UID: \"7c2cbade-3503-443a-93b5-17e53e532a6c\") " pod="openstack/barbican-db-create-b7lpm" Oct 11 07:55:38 crc kubenswrapper[5016]: I1011 07:55:38.133310 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-b7lpm" Oct 11 07:55:38 crc kubenswrapper[5016]: I1011 07:55:38.145848 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0af02720-0f53-4774-b530-4fb491f32429-combined-ca-bundle\") pod \"keystone-db-sync-qjqc4\" (UID: \"0af02720-0f53-4774-b530-4fb491f32429\") " pod="openstack/keystone-db-sync-qjqc4" Oct 11 07:55:38 crc kubenswrapper[5016]: I1011 07:55:38.146339 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnqvl\" (UniqueName: \"kubernetes.io/projected/0af02720-0f53-4774-b530-4fb491f32429-kube-api-access-jnqvl\") pod \"keystone-db-sync-qjqc4\" (UID: \"0af02720-0f53-4774-b530-4fb491f32429\") " pod="openstack/keystone-db-sync-qjqc4" Oct 11 07:55:38 crc kubenswrapper[5016]: I1011 07:55:38.146394 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0af02720-0f53-4774-b530-4fb491f32429-config-data\") pod \"keystone-db-sync-qjqc4\" (UID: \"0af02720-0f53-4774-b530-4fb491f32429\") " pod="openstack/keystone-db-sync-qjqc4" Oct 11 07:55:38 crc kubenswrapper[5016]: I1011 07:55:38.146446 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6g5g\" (UniqueName: \"kubernetes.io/projected/d8a3bd97-03dd-4dcb-9538-76eba3893f60-kube-api-access-p6g5g\") pod \"neutron-db-create-zvgbq\" (UID: \"d8a3bd97-03dd-4dcb-9538-76eba3893f60\") " pod="openstack/neutron-db-create-zvgbq" Oct 11 07:55:38 crc kubenswrapper[5016]: I1011 07:55:38.166510 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6g5g\" (UniqueName: \"kubernetes.io/projected/d8a3bd97-03dd-4dcb-9538-76eba3893f60-kube-api-access-p6g5g\") pod \"neutron-db-create-zvgbq\" (UID: \"d8a3bd97-03dd-4dcb-9538-76eba3893f60\") " pod="openstack/neutron-db-create-zvgbq" Oct 11 07:55:38 crc kubenswrapper[5016]: I1011 07:55:38.222725 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-zvgbq" Oct 11 07:55:38 crc kubenswrapper[5016]: I1011 07:55:38.248662 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0af02720-0f53-4774-b530-4fb491f32429-combined-ca-bundle\") pod \"keystone-db-sync-qjqc4\" (UID: \"0af02720-0f53-4774-b530-4fb491f32429\") " pod="openstack/keystone-db-sync-qjqc4" Oct 11 07:55:38 crc kubenswrapper[5016]: I1011 07:55:38.248802 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnqvl\" (UniqueName: \"kubernetes.io/projected/0af02720-0f53-4774-b530-4fb491f32429-kube-api-access-jnqvl\") pod \"keystone-db-sync-qjqc4\" (UID: \"0af02720-0f53-4774-b530-4fb491f32429\") " pod="openstack/keystone-db-sync-qjqc4" Oct 11 07:55:38 crc kubenswrapper[5016]: I1011 07:55:38.248865 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0af02720-0f53-4774-b530-4fb491f32429-config-data\") pod \"keystone-db-sync-qjqc4\" (UID: \"0af02720-0f53-4774-b530-4fb491f32429\") " pod="openstack/keystone-db-sync-qjqc4" Oct 11 07:55:38 crc kubenswrapper[5016]: I1011 07:55:38.253177 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0af02720-0f53-4774-b530-4fb491f32429-combined-ca-bundle\") pod \"keystone-db-sync-qjqc4\" (UID: \"0af02720-0f53-4774-b530-4fb491f32429\") " pod="openstack/keystone-db-sync-qjqc4" Oct 11 07:55:38 crc kubenswrapper[5016]: I1011 07:55:38.256266 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0af02720-0f53-4774-b530-4fb491f32429-config-data\") pod \"keystone-db-sync-qjqc4\" (UID: \"0af02720-0f53-4774-b530-4fb491f32429\") " pod="openstack/keystone-db-sync-qjqc4" Oct 11 07:55:38 crc kubenswrapper[5016]: I1011 07:55:38.287955 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnqvl\" (UniqueName: \"kubernetes.io/projected/0af02720-0f53-4774-b530-4fb491f32429-kube-api-access-jnqvl\") pod \"keystone-db-sync-qjqc4\" (UID: \"0af02720-0f53-4774-b530-4fb491f32429\") " pod="openstack/keystone-db-sync-qjqc4" Oct 11 07:55:38 crc kubenswrapper[5016]: I1011 07:55:38.319942 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-qjqc4" Oct 11 07:55:38 crc kubenswrapper[5016]: I1011 07:55:38.331644 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"e0beaf8f3888f3224e77b273d2e7d0fa1af0b12ba8a490fbd46da42f1ed82abe"} Oct 11 07:55:38 crc kubenswrapper[5016]: I1011 07:55:38.482904 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-66h42"] Oct 11 07:55:38 crc kubenswrapper[5016]: I1011 07:55:38.677037 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-b7lpm"] Oct 11 07:55:38 crc kubenswrapper[5016]: W1011 07:55:38.680149 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c2cbade_3503_443a_93b5_17e53e532a6c.slice/crio-28d37e5ca5e1e475cad7f0a8c2288e4116ea449e26a8a034bd0056b461d4c590 WatchSource:0}: Error finding container 28d37e5ca5e1e475cad7f0a8c2288e4116ea449e26a8a034bd0056b461d4c590: Status 404 returned error can't find the container with id 28d37e5ca5e1e475cad7f0a8c2288e4116ea449e26a8a034bd0056b461d4c590 Oct 11 07:55:38 crc kubenswrapper[5016]: I1011 07:55:38.785910 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-zvgbq"] Oct 11 07:55:38 crc kubenswrapper[5016]: W1011 07:55:38.790422 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8a3bd97_03dd_4dcb_9538_76eba3893f60.slice/crio-18abbeea608c529f418688e2e8730dda809096a8cd38cde9fc93bd5ca212e89c WatchSource:0}: Error finding container 18abbeea608c529f418688e2e8730dda809096a8cd38cde9fc93bd5ca212e89c: Status 404 returned error can't find the container with id 18abbeea608c529f418688e2e8730dda809096a8cd38cde9fc93bd5ca212e89c Oct 11 07:55:38 crc kubenswrapper[5016]: W1011 07:55:38.886768 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0af02720_0f53_4774_b530_4fb491f32429.slice/crio-09a9b45bd672baab6f9a53490d3aed9ff633ea122a3006acaace1ec60f6da020 WatchSource:0}: Error finding container 09a9b45bd672baab6f9a53490d3aed9ff633ea122a3006acaace1ec60f6da020: Status 404 returned error can't find the container with id 09a9b45bd672baab6f9a53490d3aed9ff633ea122a3006acaace1ec60f6da020 Oct 11 07:55:38 crc kubenswrapper[5016]: I1011 07:55:38.891613 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-qjqc4"] Oct 11 07:55:39 crc kubenswrapper[5016]: I1011 07:55:39.351473 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qjqc4" event={"ID":"0af02720-0f53-4774-b530-4fb491f32429","Type":"ContainerStarted","Data":"09a9b45bd672baab6f9a53490d3aed9ff633ea122a3006acaace1ec60f6da020"} Oct 11 07:55:39 crc kubenswrapper[5016]: I1011 07:55:39.356596 5016 generic.go:334] "Generic (PLEG): container finished" podID="7c2cbade-3503-443a-93b5-17e53e532a6c" containerID="a886423cdd9bcd98de816e9f95a0d23846ee88f4c02c4e9fc7626c387bede0db" exitCode=0 Oct 11 07:55:39 crc kubenswrapper[5016]: I1011 07:55:39.357136 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-b7lpm" event={"ID":"7c2cbade-3503-443a-93b5-17e53e532a6c","Type":"ContainerDied","Data":"a886423cdd9bcd98de816e9f95a0d23846ee88f4c02c4e9fc7626c387bede0db"} Oct 11 07:55:39 crc 
kubenswrapper[5016]: I1011 07:55:39.357175 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-b7lpm" event={"ID":"7c2cbade-3503-443a-93b5-17e53e532a6c","Type":"ContainerStarted","Data":"28d37e5ca5e1e475cad7f0a8c2288e4116ea449e26a8a034bd0056b461d4c590"} Oct 11 07:55:39 crc kubenswrapper[5016]: I1011 07:55:39.368764 5016 generic.go:334] "Generic (PLEG): container finished" podID="cf48b893-4872-446b-9e65-d7f16bd21b40" containerID="50805f5ac2adb2446075235e3c85e2775c2ce3a94b2b04889f3295edebd66a41" exitCode=0 Oct 11 07:55:39 crc kubenswrapper[5016]: I1011 07:55:39.368835 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-66h42" event={"ID":"cf48b893-4872-446b-9e65-d7f16bd21b40","Type":"ContainerDied","Data":"50805f5ac2adb2446075235e3c85e2775c2ce3a94b2b04889f3295edebd66a41"} Oct 11 07:55:39 crc kubenswrapper[5016]: I1011 07:55:39.368861 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-66h42" event={"ID":"cf48b893-4872-446b-9e65-d7f16bd21b40","Type":"ContainerStarted","Data":"1387c2aa2c27373f4d5fa4c7e037c4ebdefb9648121b119f66014539c420be58"} Oct 11 07:55:39 crc kubenswrapper[5016]: I1011 07:55:39.373591 5016 generic.go:334] "Generic (PLEG): container finished" podID="d8a3bd97-03dd-4dcb-9538-76eba3893f60" containerID="966e584cbe44bbdf557a77e0bfabe996eb0038bb985a546ff199d2f564cc5db3" exitCode=0 Oct 11 07:55:39 crc kubenswrapper[5016]: I1011 07:55:39.374341 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-zvgbq" event={"ID":"d8a3bd97-03dd-4dcb-9538-76eba3893f60","Type":"ContainerDied","Data":"966e584cbe44bbdf557a77e0bfabe996eb0038bb985a546ff199d2f564cc5db3"} Oct 11 07:55:39 crc kubenswrapper[5016]: I1011 07:55:39.374371 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-zvgbq" event={"ID":"d8a3bd97-03dd-4dcb-9538-76eba3893f60","Type":"ContainerStarted","Data":"18abbeea608c529f418688e2e8730dda809096a8cd38cde9fc93bd5ca212e89c"} Oct 11 07:55:40 crc kubenswrapper[5016]: I1011 07:55:40.400599 5016 generic.go:334] "Generic (PLEG): container finished" podID="c5718a79-8ed4-45db-bcc0-f11946055cc0" containerID="c0fc3d0a66d32bf0d78d52a2e85007d333a797aa0fec504d4d9603461f524393" exitCode=0 Oct 11 07:55:40 crc kubenswrapper[5016]: I1011 07:55:40.400687 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wnxjs" event={"ID":"c5718a79-8ed4-45db-bcc0-f11946055cc0","Type":"ContainerDied","Data":"c0fc3d0a66d32bf0d78d52a2e85007d333a797aa0fec504d4d9603461f524393"} Oct 11 07:55:40 crc kubenswrapper[5016]: I1011 07:55:40.800181 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-b7lpm" Oct 11 07:55:40 crc kubenswrapper[5016]: I1011 07:55:40.806511 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-zvgbq" Oct 11 07:55:40 crc kubenswrapper[5016]: I1011 07:55:40.811764 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-66h42" Oct 11 07:55:40 crc kubenswrapper[5016]: I1011 07:55:40.899382 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6g5g\" (UniqueName: \"kubernetes.io/projected/d8a3bd97-03dd-4dcb-9538-76eba3893f60-kube-api-access-p6g5g\") pod \"d8a3bd97-03dd-4dcb-9538-76eba3893f60\" (UID: \"d8a3bd97-03dd-4dcb-9538-76eba3893f60\") " Oct 11 07:55:40 crc kubenswrapper[5016]: I1011 07:55:40.899517 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmcvq\" (UniqueName: \"kubernetes.io/projected/cf48b893-4872-446b-9e65-d7f16bd21b40-kube-api-access-zmcvq\") pod \"cf48b893-4872-446b-9e65-d7f16bd21b40\" (UID: \"cf48b893-4872-446b-9e65-d7f16bd21b40\") " Oct 11 07:55:40 crc kubenswrapper[5016]: I1011 07:55:40.899625 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh6nq\" (UniqueName: \"kubernetes.io/projected/7c2cbade-3503-443a-93b5-17e53e532a6c-kube-api-access-bh6nq\") pod \"7c2cbade-3503-443a-93b5-17e53e532a6c\" (UID: \"7c2cbade-3503-443a-93b5-17e53e532a6c\") " Oct 11 07:55:40 crc kubenswrapper[5016]: I1011 07:55:40.905285 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf48b893-4872-446b-9e65-d7f16bd21b40-kube-api-access-zmcvq" (OuterVolumeSpecName: "kube-api-access-zmcvq") pod "cf48b893-4872-446b-9e65-d7f16bd21b40" (UID: "cf48b893-4872-446b-9e65-d7f16bd21b40"). InnerVolumeSpecName "kube-api-access-zmcvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:55:40 crc kubenswrapper[5016]: I1011 07:55:40.905375 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8a3bd97-03dd-4dcb-9538-76eba3893f60-kube-api-access-p6g5g" (OuterVolumeSpecName: "kube-api-access-p6g5g") pod "d8a3bd97-03dd-4dcb-9538-76eba3893f60" (UID: "d8a3bd97-03dd-4dcb-9538-76eba3893f60"). InnerVolumeSpecName "kube-api-access-p6g5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:55:40 crc kubenswrapper[5016]: I1011 07:55:40.908981 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c2cbade-3503-443a-93b5-17e53e532a6c-kube-api-access-bh6nq" (OuterVolumeSpecName: "kube-api-access-bh6nq") pod "7c2cbade-3503-443a-93b5-17e53e532a6c" (UID: "7c2cbade-3503-443a-93b5-17e53e532a6c"). InnerVolumeSpecName "kube-api-access-bh6nq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:55:41 crc kubenswrapper[5016]: I1011 07:55:41.001841 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh6nq\" (UniqueName: \"kubernetes.io/projected/7c2cbade-3503-443a-93b5-17e53e532a6c-kube-api-access-bh6nq\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:41 crc kubenswrapper[5016]: I1011 07:55:41.001906 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6g5g\" (UniqueName: \"kubernetes.io/projected/d8a3bd97-03dd-4dcb-9538-76eba3893f60-kube-api-access-p6g5g\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:41 crc kubenswrapper[5016]: I1011 07:55:41.001916 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmcvq\" (UniqueName: \"kubernetes.io/projected/cf48b893-4872-446b-9e65-d7f16bd21b40-kube-api-access-zmcvq\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:41 crc kubenswrapper[5016]: I1011 07:55:41.410462 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-b7lpm" Oct 11 07:55:41 crc kubenswrapper[5016]: I1011 07:55:41.411355 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-b7lpm" event={"ID":"7c2cbade-3503-443a-93b5-17e53e532a6c","Type":"ContainerDied","Data":"28d37e5ca5e1e475cad7f0a8c2288e4116ea449e26a8a034bd0056b461d4c590"} Oct 11 07:55:41 crc kubenswrapper[5016]: I1011 07:55:41.411381 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28d37e5ca5e1e475cad7f0a8c2288e4116ea449e26a8a034bd0056b461d4c590" Oct 11 07:55:41 crc kubenswrapper[5016]: I1011 07:55:41.416610 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-66h42" event={"ID":"cf48b893-4872-446b-9e65-d7f16bd21b40","Type":"ContainerDied","Data":"1387c2aa2c27373f4d5fa4c7e037c4ebdefb9648121b119f66014539c420be58"} Oct 11 07:55:41 crc kubenswrapper[5016]: I1011 07:55:41.416660 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1387c2aa2c27373f4d5fa4c7e037c4ebdefb9648121b119f66014539c420be58" Oct 11 07:55:41 crc kubenswrapper[5016]: I1011 07:55:41.416623 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-66h42" Oct 11 07:55:41 crc kubenswrapper[5016]: I1011 07:55:41.422037 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-zvgbq" Oct 11 07:55:41 crc kubenswrapper[5016]: I1011 07:55:41.422604 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-zvgbq" event={"ID":"d8a3bd97-03dd-4dcb-9538-76eba3893f60","Type":"ContainerDied","Data":"18abbeea608c529f418688e2e8730dda809096a8cd38cde9fc93bd5ca212e89c"} Oct 11 07:55:41 crc kubenswrapper[5016]: I1011 07:55:41.422633 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18abbeea608c529f418688e2e8730dda809096a8cd38cde9fc93bd5ca212e89c" Oct 11 07:55:45 crc kubenswrapper[5016]: I1011 07:55:45.467141 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wnxjs" event={"ID":"c5718a79-8ed4-45db-bcc0-f11946055cc0","Type":"ContainerDied","Data":"3a3d6f35ee81994fa56badddcd2bc2e98e618e5f6bd74ec2faad5d0298b359b9"} Oct 11 07:55:45 crc kubenswrapper[5016]: I1011 07:55:45.467767 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a3d6f35ee81994fa56badddcd2bc2e98e618e5f6bd74ec2faad5d0298b359b9" Oct 11 07:55:45 crc kubenswrapper[5016]: I1011 07:55:45.509005 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-wnxjs" Oct 11 07:55:45 crc kubenswrapper[5016]: I1011 07:55:45.706574 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5718a79-8ed4-45db-bcc0-f11946055cc0-db-sync-config-data\") pod \"c5718a79-8ed4-45db-bcc0-f11946055cc0\" (UID: \"c5718a79-8ed4-45db-bcc0-f11946055cc0\") " Oct 11 07:55:45 crc kubenswrapper[5016]: I1011 07:55:45.706614 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5718a79-8ed4-45db-bcc0-f11946055cc0-combined-ca-bundle\") pod \"c5718a79-8ed4-45db-bcc0-f11946055cc0\" (UID: \"c5718a79-8ed4-45db-bcc0-f11946055cc0\") " Oct 11 07:55:45 crc kubenswrapper[5016]: I1011 07:55:45.706734 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5718a79-8ed4-45db-bcc0-f11946055cc0-config-data\") pod \"c5718a79-8ed4-45db-bcc0-f11946055cc0\" (UID: \"c5718a79-8ed4-45db-bcc0-f11946055cc0\") " Oct 11 07:55:45 crc kubenswrapper[5016]: I1011 07:55:45.706790 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqmqd\" (UniqueName: \"kubernetes.io/projected/c5718a79-8ed4-45db-bcc0-f11946055cc0-kube-api-access-jqmqd\") pod \"c5718a79-8ed4-45db-bcc0-f11946055cc0\" (UID: \"c5718a79-8ed4-45db-bcc0-f11946055cc0\") " Oct 11 07:55:45 crc kubenswrapper[5016]: I1011 07:55:45.710610 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5718a79-8ed4-45db-bcc0-f11946055cc0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c5718a79-8ed4-45db-bcc0-f11946055cc0" (UID: "c5718a79-8ed4-45db-bcc0-f11946055cc0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:55:45 crc kubenswrapper[5016]: I1011 07:55:45.710643 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5718a79-8ed4-45db-bcc0-f11946055cc0-kube-api-access-jqmqd" (OuterVolumeSpecName: "kube-api-access-jqmqd") pod "c5718a79-8ed4-45db-bcc0-f11946055cc0" (UID: "c5718a79-8ed4-45db-bcc0-f11946055cc0"). InnerVolumeSpecName "kube-api-access-jqmqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:55:45 crc kubenswrapper[5016]: I1011 07:55:45.756617 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5718a79-8ed4-45db-bcc0-f11946055cc0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5718a79-8ed4-45db-bcc0-f11946055cc0" (UID: "c5718a79-8ed4-45db-bcc0-f11946055cc0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:55:45 crc kubenswrapper[5016]: I1011 07:55:45.767257 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5718a79-8ed4-45db-bcc0-f11946055cc0-config-data" (OuterVolumeSpecName: "config-data") pod "c5718a79-8ed4-45db-bcc0-f11946055cc0" (UID: "c5718a79-8ed4-45db-bcc0-f11946055cc0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:55:45 crc kubenswrapper[5016]: I1011 07:55:45.808832 5016 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5718a79-8ed4-45db-bcc0-f11946055cc0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:45 crc kubenswrapper[5016]: I1011 07:55:45.808899 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5718a79-8ed4-45db-bcc0-f11946055cc0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:45 crc kubenswrapper[5016]: I1011 07:55:45.808912 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5718a79-8ed4-45db-bcc0-f11946055cc0-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:45 crc kubenswrapper[5016]: I1011 07:55:45.808922 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqmqd\" (UniqueName: \"kubernetes.io/projected/c5718a79-8ed4-45db-bcc0-f11946055cc0-kube-api-access-jqmqd\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:46 crc kubenswrapper[5016]: I1011 07:55:46.476249 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wnxjs" Oct 11 07:55:46 crc kubenswrapper[5016]: I1011 07:55:46.478825 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qjqc4" event={"ID":"0af02720-0f53-4774-b530-4fb491f32429","Type":"ContainerStarted","Data":"896ef9b7444a5953067b8a8a09d44edce31f4219fb06b445ed88be3db8489e9c"} Oct 11 07:55:46 crc kubenswrapper[5016]: I1011 07:55:46.497029 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-qjqc4" podStartSLOduration=3.079508614 podStartE2EDuration="9.497012824s" podCreationTimestamp="2025-10-11 07:55:37 +0000 UTC" firstStartedPulling="2025-10-11 07:55:38.889846912 +0000 UTC m=+926.790302858" lastFinishedPulling="2025-10-11 07:55:45.307351112 +0000 UTC m=+933.207807068" observedRunningTime="2025-10-11 07:55:46.495604406 +0000 UTC m=+934.396060352" watchObservedRunningTime="2025-10-11 07:55:46.497012824 +0000 UTC m=+934.397468760" Oct 11 07:55:46 crc kubenswrapper[5016]: I1011 07:55:46.952084 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74b7749bc7-g72pk"] Oct 11 07:55:46 crc kubenswrapper[5016]: E1011 07:55:46.952722 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c2cbade-3503-443a-93b5-17e53e532a6c" containerName="mariadb-database-create" Oct 11 07:55:46 crc kubenswrapper[5016]: I1011 07:55:46.952737 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c2cbade-3503-443a-93b5-17e53e532a6c" containerName="mariadb-database-create" Oct 11 07:55:46 crc kubenswrapper[5016]: E1011 07:55:46.952752 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8a3bd97-03dd-4dcb-9538-76eba3893f60" containerName="mariadb-database-create" Oct 11 07:55:46 crc kubenswrapper[5016]: I1011 07:55:46.952758 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8a3bd97-03dd-4dcb-9538-76eba3893f60" containerName="mariadb-database-create" Oct 11 07:55:46 crc kubenswrapper[5016]: E1011 07:55:46.952778 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5718a79-8ed4-45db-bcc0-f11946055cc0" containerName="glance-db-sync" Oct 11 07:55:46 crc kubenswrapper[5016]: I1011 07:55:46.952784 5016 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c5718a79-8ed4-45db-bcc0-f11946055cc0" containerName="glance-db-sync" Oct 11 07:55:46 crc kubenswrapper[5016]: E1011 07:55:46.952789 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf48b893-4872-446b-9e65-d7f16bd21b40" containerName="mariadb-database-create" Oct 11 07:55:46 crc kubenswrapper[5016]: I1011 07:55:46.952795 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf48b893-4872-446b-9e65-d7f16bd21b40" containerName="mariadb-database-create" Oct 11 07:55:46 crc kubenswrapper[5016]: I1011 07:55:46.953007 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8a3bd97-03dd-4dcb-9538-76eba3893f60" containerName="mariadb-database-create" Oct 11 07:55:46 crc kubenswrapper[5016]: I1011 07:55:46.953019 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5718a79-8ed4-45db-bcc0-f11946055cc0" containerName="glance-db-sync" Oct 11 07:55:46 crc kubenswrapper[5016]: I1011 07:55:46.953037 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf48b893-4872-446b-9e65-d7f16bd21b40" containerName="mariadb-database-create" Oct 11 07:55:46 crc kubenswrapper[5016]: I1011 07:55:46.953049 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c2cbade-3503-443a-93b5-17e53e532a6c" containerName="mariadb-database-create" Oct 11 07:55:46 crc kubenswrapper[5016]: I1011 07:55:46.953839 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74b7749bc7-g72pk" Oct 11 07:55:46 crc kubenswrapper[5016]: I1011 07:55:46.967210 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74b7749bc7-g72pk"] Oct 11 07:55:47 crc kubenswrapper[5016]: I1011 07:55:47.133077 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fecd9648-bc55-4ee0-bc55-0044cc757300-ovsdbserver-sb\") pod \"dnsmasq-dns-74b7749bc7-g72pk\" (UID: \"fecd9648-bc55-4ee0-bc55-0044cc757300\") " pod="openstack/dnsmasq-dns-74b7749bc7-g72pk" Oct 11 07:55:47 crc kubenswrapper[5016]: I1011 07:55:47.133455 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fecd9648-bc55-4ee0-bc55-0044cc757300-dns-svc\") pod \"dnsmasq-dns-74b7749bc7-g72pk\" (UID: \"fecd9648-bc55-4ee0-bc55-0044cc757300\") " pod="openstack/dnsmasq-dns-74b7749bc7-g72pk" Oct 11 07:55:47 crc kubenswrapper[5016]: I1011 07:55:47.133543 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fecd9648-bc55-4ee0-bc55-0044cc757300-ovsdbserver-nb\") pod \"dnsmasq-dns-74b7749bc7-g72pk\" (UID: \"fecd9648-bc55-4ee0-bc55-0044cc757300\") " pod="openstack/dnsmasq-dns-74b7749bc7-g72pk" Oct 11 07:55:47 crc kubenswrapper[5016]: I1011 07:55:47.133693 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fecd9648-bc55-4ee0-bc55-0044cc757300-config\") pod \"dnsmasq-dns-74b7749bc7-g72pk\" (UID: \"fecd9648-bc55-4ee0-bc55-0044cc757300\") " pod="openstack/dnsmasq-dns-74b7749bc7-g72pk" Oct 11 07:55:47 crc kubenswrapper[5016]: I1011 07:55:47.133761 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz5fs\" (UniqueName: 
\"kubernetes.io/projected/fecd9648-bc55-4ee0-bc55-0044cc757300-kube-api-access-kz5fs\") pod \"dnsmasq-dns-74b7749bc7-g72pk\" (UID: \"fecd9648-bc55-4ee0-bc55-0044cc757300\") " pod="openstack/dnsmasq-dns-74b7749bc7-g72pk" Oct 11 07:55:47 crc kubenswrapper[5016]: I1011 07:55:47.235284 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fecd9648-bc55-4ee0-bc55-0044cc757300-ovsdbserver-nb\") pod \"dnsmasq-dns-74b7749bc7-g72pk\" (UID: \"fecd9648-bc55-4ee0-bc55-0044cc757300\") " pod="openstack/dnsmasq-dns-74b7749bc7-g72pk" Oct 11 07:55:47 crc kubenswrapper[5016]: I1011 07:55:47.235325 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fecd9648-bc55-4ee0-bc55-0044cc757300-config\") pod \"dnsmasq-dns-74b7749bc7-g72pk\" (UID: \"fecd9648-bc55-4ee0-bc55-0044cc757300\") " pod="openstack/dnsmasq-dns-74b7749bc7-g72pk" Oct 11 07:55:47 crc kubenswrapper[5016]: I1011 07:55:47.235417 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz5fs\" (UniqueName: \"kubernetes.io/projected/fecd9648-bc55-4ee0-bc55-0044cc757300-kube-api-access-kz5fs\") pod \"dnsmasq-dns-74b7749bc7-g72pk\" (UID: \"fecd9648-bc55-4ee0-bc55-0044cc757300\") " pod="openstack/dnsmasq-dns-74b7749bc7-g72pk" Oct 11 07:55:47 crc kubenswrapper[5016]: I1011 07:55:47.235519 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fecd9648-bc55-4ee0-bc55-0044cc757300-ovsdbserver-sb\") pod \"dnsmasq-dns-74b7749bc7-g72pk\" (UID: \"fecd9648-bc55-4ee0-bc55-0044cc757300\") " pod="openstack/dnsmasq-dns-74b7749bc7-g72pk" Oct 11 07:55:47 crc kubenswrapper[5016]: I1011 07:55:47.235548 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fecd9648-bc55-4ee0-bc55-0044cc757300-dns-svc\") pod \"dnsmasq-dns-74b7749bc7-g72pk\" (UID: \"fecd9648-bc55-4ee0-bc55-0044cc757300\") " pod="openstack/dnsmasq-dns-74b7749bc7-g72pk" Oct 11 07:55:47 crc kubenswrapper[5016]: I1011 07:55:47.236270 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fecd9648-bc55-4ee0-bc55-0044cc757300-config\") pod \"dnsmasq-dns-74b7749bc7-g72pk\" (UID: \"fecd9648-bc55-4ee0-bc55-0044cc757300\") " pod="openstack/dnsmasq-dns-74b7749bc7-g72pk" Oct 11 07:55:47 crc kubenswrapper[5016]: I1011 07:55:47.236458 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fecd9648-bc55-4ee0-bc55-0044cc757300-ovsdbserver-sb\") pod \"dnsmasq-dns-74b7749bc7-g72pk\" (UID: \"fecd9648-bc55-4ee0-bc55-0044cc757300\") " pod="openstack/dnsmasq-dns-74b7749bc7-g72pk" Oct 11 07:55:47 crc kubenswrapper[5016]: I1011 07:55:47.236461 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fecd9648-bc55-4ee0-bc55-0044cc757300-dns-svc\") pod \"dnsmasq-dns-74b7749bc7-g72pk\" (UID: \"fecd9648-bc55-4ee0-bc55-0044cc757300\") " pod="openstack/dnsmasq-dns-74b7749bc7-g72pk" Oct 11 07:55:47 crc kubenswrapper[5016]: I1011 07:55:47.237003 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fecd9648-bc55-4ee0-bc55-0044cc757300-ovsdbserver-nb\") pod \"dnsmasq-dns-74b7749bc7-g72pk\" (UID: 
\"fecd9648-bc55-4ee0-bc55-0044cc757300\") " pod="openstack/dnsmasq-dns-74b7749bc7-g72pk" Oct 11 07:55:47 crc kubenswrapper[5016]: I1011 07:55:47.259452 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz5fs\" (UniqueName: \"kubernetes.io/projected/fecd9648-bc55-4ee0-bc55-0044cc757300-kube-api-access-kz5fs\") pod \"dnsmasq-dns-74b7749bc7-g72pk\" (UID: \"fecd9648-bc55-4ee0-bc55-0044cc757300\") " pod="openstack/dnsmasq-dns-74b7749bc7-g72pk" Oct 11 07:55:47 crc kubenswrapper[5016]: I1011 07:55:47.267267 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74b7749bc7-g72pk" Oct 11 07:55:47 crc kubenswrapper[5016]: I1011 07:55:47.741684 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-5630-account-create-wbhsr"] Oct 11 07:55:47 crc kubenswrapper[5016]: I1011 07:55:47.742942 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5630-account-create-wbhsr" Oct 11 07:55:47 crc kubenswrapper[5016]: I1011 07:55:47.744939 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Oct 11 07:55:47 crc kubenswrapper[5016]: I1011 07:55:47.748923 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95nj7\" (UniqueName: \"kubernetes.io/projected/6c1b5c66-e73e-4029-b13e-dad61f734028-kube-api-access-95nj7\") pod \"cinder-5630-account-create-wbhsr\" (UID: \"6c1b5c66-e73e-4029-b13e-dad61f734028\") " pod="openstack/cinder-5630-account-create-wbhsr" Oct 11 07:55:47 crc kubenswrapper[5016]: I1011 07:55:47.763127 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5630-account-create-wbhsr"] Oct 11 07:55:47 crc kubenswrapper[5016]: I1011 07:55:47.784192 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74b7749bc7-g72pk"] Oct 11 07:55:47 crc kubenswrapper[5016]: W1011 07:55:47.792526 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfecd9648_bc55_4ee0_bc55_0044cc757300.slice/crio-e5bee8f23288efbb5c34d8c6d05643a84c809434f44cb685638ef40fd18eb4fb WatchSource:0}: Error finding container e5bee8f23288efbb5c34d8c6d05643a84c809434f44cb685638ef40fd18eb4fb: Status 404 returned error can't find the container with id e5bee8f23288efbb5c34d8c6d05643a84c809434f44cb685638ef40fd18eb4fb Oct 11 07:55:47 crc kubenswrapper[5016]: I1011 07:55:47.852959 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95nj7\" (UniqueName: \"kubernetes.io/projected/6c1b5c66-e73e-4029-b13e-dad61f734028-kube-api-access-95nj7\") pod \"cinder-5630-account-create-wbhsr\" (UID: \"6c1b5c66-e73e-4029-b13e-dad61f734028\") " pod="openstack/cinder-5630-account-create-wbhsr" Oct 11 07:55:47 crc kubenswrapper[5016]: I1011 07:55:47.872396 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95nj7\" (UniqueName: \"kubernetes.io/projected/6c1b5c66-e73e-4029-b13e-dad61f734028-kube-api-access-95nj7\") pod \"cinder-5630-account-create-wbhsr\" (UID: \"6c1b5c66-e73e-4029-b13e-dad61f734028\") " pod="openstack/cinder-5630-account-create-wbhsr" Oct 11 07:55:47 crc kubenswrapper[5016]: I1011 07:55:47.938406 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-139c-account-create-khk7x"] Oct 11 07:55:47 crc kubenswrapper[5016]: I1011 07:55:47.939758 5016 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-139c-account-create-khk7x" Oct 11 07:55:47 crc kubenswrapper[5016]: I1011 07:55:47.941787 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Oct 11 07:55:47 crc kubenswrapper[5016]: I1011 07:55:47.947340 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-139c-account-create-khk7x"] Oct 11 07:55:47 crc kubenswrapper[5016]: I1011 07:55:47.954107 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqlpx\" (UniqueName: \"kubernetes.io/projected/8b391075-226a-4652-998b-a896edf77c08-kube-api-access-dqlpx\") pod \"barbican-139c-account-create-khk7x\" (UID: \"8b391075-226a-4652-998b-a896edf77c08\") " pod="openstack/barbican-139c-account-create-khk7x" Oct 11 07:55:48 crc kubenswrapper[5016]: I1011 07:55:48.055574 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqlpx\" (UniqueName: \"kubernetes.io/projected/8b391075-226a-4652-998b-a896edf77c08-kube-api-access-dqlpx\") pod \"barbican-139c-account-create-khk7x\" (UID: \"8b391075-226a-4652-998b-a896edf77c08\") " pod="openstack/barbican-139c-account-create-khk7x" Oct 11 07:55:48 crc kubenswrapper[5016]: I1011 07:55:48.058398 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5630-account-create-wbhsr" Oct 11 07:55:48 crc kubenswrapper[5016]: I1011 07:55:48.074341 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqlpx\" (UniqueName: \"kubernetes.io/projected/8b391075-226a-4652-998b-a896edf77c08-kube-api-access-dqlpx\") pod \"barbican-139c-account-create-khk7x\" (UID: \"8b391075-226a-4652-998b-a896edf77c08\") " pod="openstack/barbican-139c-account-create-khk7x" Oct 11 07:55:48 crc kubenswrapper[5016]: I1011 07:55:48.148258 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-aa8f-account-create-xrlsz"] Oct 11 07:55:48 crc kubenswrapper[5016]: I1011 07:55:48.149919 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-aa8f-account-create-xrlsz" Oct 11 07:55:48 crc kubenswrapper[5016]: I1011 07:55:48.152517 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Oct 11 07:55:48 crc kubenswrapper[5016]: I1011 07:55:48.156888 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkww4\" (UniqueName: \"kubernetes.io/projected/20f17f09-937d-4e0c-8a19-a6d6770e6d89-kube-api-access-kkww4\") pod \"neutron-aa8f-account-create-xrlsz\" (UID: \"20f17f09-937d-4e0c-8a19-a6d6770e6d89\") " pod="openstack/neutron-aa8f-account-create-xrlsz" Oct 11 07:55:48 crc kubenswrapper[5016]: I1011 07:55:48.157180 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-aa8f-account-create-xrlsz"] Oct 11 07:55:48 crc kubenswrapper[5016]: I1011 07:55:48.273769 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkww4\" (UniqueName: \"kubernetes.io/projected/20f17f09-937d-4e0c-8a19-a6d6770e6d89-kube-api-access-kkww4\") pod \"neutron-aa8f-account-create-xrlsz\" (UID: \"20f17f09-937d-4e0c-8a19-a6d6770e6d89\") " pod="openstack/neutron-aa8f-account-create-xrlsz" Oct 11 07:55:48 crc kubenswrapper[5016]: I1011 07:55:48.297570 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkww4\" (UniqueName: \"kubernetes.io/projected/20f17f09-937d-4e0c-8a19-a6d6770e6d89-kube-api-access-kkww4\") pod \"neutron-aa8f-account-create-xrlsz\" (UID: \"20f17f09-937d-4e0c-8a19-a6d6770e6d89\") " pod="openstack/neutron-aa8f-account-create-xrlsz" Oct 11 07:55:48 crc kubenswrapper[5016]: I1011 07:55:48.314933 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-139c-account-create-khk7x" Oct 11 07:55:48 crc kubenswrapper[5016]: I1011 07:55:48.325376 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5630-account-create-wbhsr"] Oct 11 07:55:48 crc kubenswrapper[5016]: I1011 07:55:48.479808 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-aa8f-account-create-xrlsz" Oct 11 07:55:48 crc kubenswrapper[5016]: I1011 07:55:48.498611 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5630-account-create-wbhsr" event={"ID":"6c1b5c66-e73e-4029-b13e-dad61f734028","Type":"ContainerStarted","Data":"074d8138fd9c1539d7773e360f1fcf83d6c3f3c696cc1bf784fbbd63014fd3e5"} Oct 11 07:55:48 crc kubenswrapper[5016]: I1011 07:55:48.501351 5016 generic.go:334] "Generic (PLEG): container finished" podID="fecd9648-bc55-4ee0-bc55-0044cc757300" containerID="0d7d6813e515e8aad111b1edda50c9c6e592f524fe8152ecb796cea62becd98b" exitCode=0 Oct 11 07:55:48 crc kubenswrapper[5016]: I1011 07:55:48.501385 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74b7749bc7-g72pk" event={"ID":"fecd9648-bc55-4ee0-bc55-0044cc757300","Type":"ContainerDied","Data":"0d7d6813e515e8aad111b1edda50c9c6e592f524fe8152ecb796cea62becd98b"} Oct 11 07:55:48 crc kubenswrapper[5016]: I1011 07:55:48.501403 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74b7749bc7-g72pk" event={"ID":"fecd9648-bc55-4ee0-bc55-0044cc757300","Type":"ContainerStarted","Data":"e5bee8f23288efbb5c34d8c6d05643a84c809434f44cb685638ef40fd18eb4fb"} Oct 11 07:55:48 crc kubenswrapper[5016]: I1011 07:55:48.766326 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-139c-account-create-khk7x"] Oct 11 07:55:48 crc kubenswrapper[5016]: W1011 07:55:48.776031 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b391075_226a_4652_998b_a896edf77c08.slice/crio-9d6cad0d2fd09bdb8377d42730c1fbe6d6d52b8d503eccbaa7f7fcf2a3dc5c17 WatchSource:0}: Error finding container 9d6cad0d2fd09bdb8377d42730c1fbe6d6d52b8d503eccbaa7f7fcf2a3dc5c17: Status 404 returned error can't find the container with id 9d6cad0d2fd09bdb8377d42730c1fbe6d6d52b8d503eccbaa7f7fcf2a3dc5c17 Oct 11 07:55:48 crc kubenswrapper[5016]: I1011 07:55:48.957478 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-aa8f-account-create-xrlsz"] Oct 11 07:55:48 crc kubenswrapper[5016]: W1011 07:55:48.961318 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20f17f09_937d_4e0c_8a19_a6d6770e6d89.slice/crio-60627b8baf1541909e722f959a4c384ddba98feba7be2e64f1a0bd36c0bb9d68 WatchSource:0}: Error finding container 60627b8baf1541909e722f959a4c384ddba98feba7be2e64f1a0bd36c0bb9d68: Status 404 returned error can't find the container with id 60627b8baf1541909e722f959a4c384ddba98feba7be2e64f1a0bd36c0bb9d68 Oct 11 07:55:49 crc kubenswrapper[5016]: I1011 07:55:49.510826 5016 generic.go:334] "Generic (PLEG): container finished" podID="6c1b5c66-e73e-4029-b13e-dad61f734028" containerID="4c4e5b8562578503bd7b5f852f1b5cfe464a9ab52cb68cc3bc1518c7b647d721" exitCode=0 Oct 11 07:55:49 crc kubenswrapper[5016]: I1011 07:55:49.510873 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5630-account-create-wbhsr" event={"ID":"6c1b5c66-e73e-4029-b13e-dad61f734028","Type":"ContainerDied","Data":"4c4e5b8562578503bd7b5f852f1b5cfe464a9ab52cb68cc3bc1518c7b647d721"} Oct 11 07:55:49 crc kubenswrapper[5016]: I1011 07:55:49.513179 5016 generic.go:334] "Generic (PLEG): container finished" podID="8b391075-226a-4652-998b-a896edf77c08" containerID="1be38acd17148c7ae79f88913247b88f2de87f4b718f1524edc6861c556cbc9a" exitCode=0 Oct 11 07:55:49 crc 
kubenswrapper[5016]: I1011 07:55:49.513226 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-139c-account-create-khk7x" event={"ID":"8b391075-226a-4652-998b-a896edf77c08","Type":"ContainerDied","Data":"1be38acd17148c7ae79f88913247b88f2de87f4b718f1524edc6861c556cbc9a"} Oct 11 07:55:49 crc kubenswrapper[5016]: I1011 07:55:49.513283 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-139c-account-create-khk7x" event={"ID":"8b391075-226a-4652-998b-a896edf77c08","Type":"ContainerStarted","Data":"9d6cad0d2fd09bdb8377d42730c1fbe6d6d52b8d503eccbaa7f7fcf2a3dc5c17"} Oct 11 07:55:49 crc kubenswrapper[5016]: I1011 07:55:49.518047 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74b7749bc7-g72pk" event={"ID":"fecd9648-bc55-4ee0-bc55-0044cc757300","Type":"ContainerStarted","Data":"1f9093dcfc1222b0120a08927d00b243c23772aa6dc1c5a1db3bb77bf371dd6e"} Oct 11 07:55:49 crc kubenswrapper[5016]: I1011 07:55:49.518103 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74b7749bc7-g72pk" Oct 11 07:55:49 crc kubenswrapper[5016]: I1011 07:55:49.520311 5016 generic.go:334] "Generic (PLEG): container finished" podID="0af02720-0f53-4774-b530-4fb491f32429" containerID="896ef9b7444a5953067b8a8a09d44edce31f4219fb06b445ed88be3db8489e9c" exitCode=0 Oct 11 07:55:49 crc kubenswrapper[5016]: I1011 07:55:49.520389 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qjqc4" event={"ID":"0af02720-0f53-4774-b530-4fb491f32429","Type":"ContainerDied","Data":"896ef9b7444a5953067b8a8a09d44edce31f4219fb06b445ed88be3db8489e9c"} Oct 11 07:55:49 crc kubenswrapper[5016]: I1011 07:55:49.522120 5016 generic.go:334] "Generic (PLEG): container finished" podID="20f17f09-937d-4e0c-8a19-a6d6770e6d89" containerID="1283f78561336a5baea64fc5756d5200677eb384d3fbacfd5aad4fdec93d1f00" exitCode=0 Oct 11 07:55:49 crc kubenswrapper[5016]: I1011 07:55:49.522156 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-aa8f-account-create-xrlsz" event={"ID":"20f17f09-937d-4e0c-8a19-a6d6770e6d89","Type":"ContainerDied","Data":"1283f78561336a5baea64fc5756d5200677eb384d3fbacfd5aad4fdec93d1f00"} Oct 11 07:55:49 crc kubenswrapper[5016]: I1011 07:55:49.522195 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-aa8f-account-create-xrlsz" event={"ID":"20f17f09-937d-4e0c-8a19-a6d6770e6d89","Type":"ContainerStarted","Data":"60627b8baf1541909e722f959a4c384ddba98feba7be2e64f1a0bd36c0bb9d68"} Oct 11 07:55:49 crc kubenswrapper[5016]: I1011 07:55:49.555810 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74b7749bc7-g72pk" podStartSLOduration=3.55579342 podStartE2EDuration="3.55579342s" podCreationTimestamp="2025-10-11 07:55:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:55:49.547325243 +0000 UTC m=+937.447781199" watchObservedRunningTime="2025-10-11 07:55:49.55579342 +0000 UTC m=+937.456249376" Oct 11 07:55:50 crc kubenswrapper[5016]: I1011 07:55:50.948891 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qjqc4" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.041904 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-aa8f-account-create-xrlsz" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.049572 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5630-account-create-wbhsr" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.063837 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-139c-account-create-khk7x" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.135445 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0af02720-0f53-4774-b530-4fb491f32429-config-data\") pod \"0af02720-0f53-4774-b530-4fb491f32429\" (UID: \"0af02720-0f53-4774-b530-4fb491f32429\") " Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.135576 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0af02720-0f53-4774-b530-4fb491f32429-combined-ca-bundle\") pod \"0af02720-0f53-4774-b530-4fb491f32429\" (UID: \"0af02720-0f53-4774-b530-4fb491f32429\") " Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.135647 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnqvl\" (UniqueName: \"kubernetes.io/projected/0af02720-0f53-4774-b530-4fb491f32429-kube-api-access-jnqvl\") pod \"0af02720-0f53-4774-b530-4fb491f32429\" (UID: \"0af02720-0f53-4774-b530-4fb491f32429\") " Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.136194 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95nj7\" (UniqueName: \"kubernetes.io/projected/6c1b5c66-e73e-4029-b13e-dad61f734028-kube-api-access-95nj7\") pod \"6c1b5c66-e73e-4029-b13e-dad61f734028\" (UID: \"6c1b5c66-e73e-4029-b13e-dad61f734028\") " Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.140696 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0af02720-0f53-4774-b530-4fb491f32429-kube-api-access-jnqvl" (OuterVolumeSpecName: "kube-api-access-jnqvl") pod "0af02720-0f53-4774-b530-4fb491f32429" (UID: "0af02720-0f53-4774-b530-4fb491f32429"). InnerVolumeSpecName "kube-api-access-jnqvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.153849 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c1b5c66-e73e-4029-b13e-dad61f734028-kube-api-access-95nj7" (OuterVolumeSpecName: "kube-api-access-95nj7") pod "6c1b5c66-e73e-4029-b13e-dad61f734028" (UID: "6c1b5c66-e73e-4029-b13e-dad61f734028"). InnerVolumeSpecName "kube-api-access-95nj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.165763 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0af02720-0f53-4774-b530-4fb491f32429-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0af02720-0f53-4774-b530-4fb491f32429" (UID: "0af02720-0f53-4774-b530-4fb491f32429"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.182247 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0af02720-0f53-4774-b530-4fb491f32429-config-data" (OuterVolumeSpecName: "config-data") pod "0af02720-0f53-4774-b530-4fb491f32429" (UID: "0af02720-0f53-4774-b530-4fb491f32429"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.237115 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkww4\" (UniqueName: \"kubernetes.io/projected/20f17f09-937d-4e0c-8a19-a6d6770e6d89-kube-api-access-kkww4\") pod \"20f17f09-937d-4e0c-8a19-a6d6770e6d89\" (UID: \"20f17f09-937d-4e0c-8a19-a6d6770e6d89\") " Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.237245 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqlpx\" (UniqueName: \"kubernetes.io/projected/8b391075-226a-4652-998b-a896edf77c08-kube-api-access-dqlpx\") pod \"8b391075-226a-4652-998b-a896edf77c08\" (UID: \"8b391075-226a-4652-998b-a896edf77c08\") " Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.237599 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0af02720-0f53-4774-b530-4fb491f32429-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.237614 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnqvl\" (UniqueName: \"kubernetes.io/projected/0af02720-0f53-4774-b530-4fb491f32429-kube-api-access-jnqvl\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.237625 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95nj7\" (UniqueName: \"kubernetes.io/projected/6c1b5c66-e73e-4029-b13e-dad61f734028-kube-api-access-95nj7\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.237634 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0af02720-0f53-4774-b530-4fb491f32429-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.240906 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b391075-226a-4652-998b-a896edf77c08-kube-api-access-dqlpx" (OuterVolumeSpecName: "kube-api-access-dqlpx") pod "8b391075-226a-4652-998b-a896edf77c08" (UID: "8b391075-226a-4652-998b-a896edf77c08"). InnerVolumeSpecName "kube-api-access-dqlpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.241406 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20f17f09-937d-4e0c-8a19-a6d6770e6d89-kube-api-access-kkww4" (OuterVolumeSpecName: "kube-api-access-kkww4") pod "20f17f09-937d-4e0c-8a19-a6d6770e6d89" (UID: "20f17f09-937d-4e0c-8a19-a6d6770e6d89"). InnerVolumeSpecName "kube-api-access-kkww4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.339332 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqlpx\" (UniqueName: \"kubernetes.io/projected/8b391075-226a-4652-998b-a896edf77c08-kube-api-access-dqlpx\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.339364 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkww4\" (UniqueName: \"kubernetes.io/projected/20f17f09-937d-4e0c-8a19-a6d6770e6d89-kube-api-access-kkww4\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.539503 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5630-account-create-wbhsr" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.539477 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5630-account-create-wbhsr" event={"ID":"6c1b5c66-e73e-4029-b13e-dad61f734028","Type":"ContainerDied","Data":"074d8138fd9c1539d7773e360f1fcf83d6c3f3c696cc1bf784fbbd63014fd3e5"} Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.539920 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="074d8138fd9c1539d7773e360f1fcf83d6c3f3c696cc1bf784fbbd63014fd3e5" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.541007 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-139c-account-create-khk7x" event={"ID":"8b391075-226a-4652-998b-a896edf77c08","Type":"ContainerDied","Data":"9d6cad0d2fd09bdb8377d42730c1fbe6d6d52b8d503eccbaa7f7fcf2a3dc5c17"} Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.541044 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-139c-account-create-khk7x" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.541045 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d6cad0d2fd09bdb8377d42730c1fbe6d6d52b8d503eccbaa7f7fcf2a3dc5c17" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.542785 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qjqc4" event={"ID":"0af02720-0f53-4774-b530-4fb491f32429","Type":"ContainerDied","Data":"09a9b45bd672baab6f9a53490d3aed9ff633ea122a3006acaace1ec60f6da020"} Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.542813 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09a9b45bd672baab6f9a53490d3aed9ff633ea122a3006acaace1ec60f6da020" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.542792 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qjqc4" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.545153 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-aa8f-account-create-xrlsz" event={"ID":"20f17f09-937d-4e0c-8a19-a6d6770e6d89","Type":"ContainerDied","Data":"60627b8baf1541909e722f959a4c384ddba98feba7be2e64f1a0bd36c0bb9d68"} Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.545178 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60627b8baf1541909e722f959a4c384ddba98feba7be2e64f1a0bd36c0bb9d68" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.545213 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-aa8f-account-create-xrlsz" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.770918 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74b7749bc7-g72pk"] Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.771248 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74b7749bc7-g72pk" podUID="fecd9648-bc55-4ee0-bc55-0044cc757300" containerName="dnsmasq-dns" containerID="cri-o://1f9093dcfc1222b0120a08927d00b243c23772aa6dc1c5a1db3bb77bf371dd6e" gracePeriod=10 Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.792706 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-h6l55"] Oct 11 07:55:51 crc kubenswrapper[5016]: E1011 07:55:51.793015 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b391075-226a-4652-998b-a896edf77c08" containerName="mariadb-account-create" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.793030 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b391075-226a-4652-998b-a896edf77c08" containerName="mariadb-account-create" Oct 11 07:55:51 crc kubenswrapper[5016]: E1011 07:55:51.793051 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20f17f09-937d-4e0c-8a19-a6d6770e6d89" containerName="mariadb-account-create" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.793059 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="20f17f09-937d-4e0c-8a19-a6d6770e6d89" containerName="mariadb-account-create" Oct 11 07:55:51 crc kubenswrapper[5016]: E1011 07:55:51.793082 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0af02720-0f53-4774-b530-4fb491f32429" containerName="keystone-db-sync" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.793089 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="0af02720-0f53-4774-b530-4fb491f32429" containerName="keystone-db-sync" Oct 11 07:55:51 crc kubenswrapper[5016]: E1011 07:55:51.793098 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c1b5c66-e73e-4029-b13e-dad61f734028" containerName="mariadb-account-create" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.793105 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c1b5c66-e73e-4029-b13e-dad61f734028" containerName="mariadb-account-create" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.793731 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c1b5c66-e73e-4029-b13e-dad61f734028" containerName="mariadb-account-create" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.793746 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="20f17f09-937d-4e0c-8a19-a6d6770e6d89" containerName="mariadb-account-create" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.793760 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="0af02720-0f53-4774-b530-4fb491f32429" containerName="keystone-db-sync" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.793772 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b391075-226a-4652-998b-a896edf77c08" containerName="mariadb-account-create" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.794274 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-h6l55" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.796082 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.802372 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zk98n" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.802625 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.802890 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.808555 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-h6l55"] Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.819976 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67bcfd764f-k47q8"] Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.821731 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67bcfd764f-k47q8" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.827753 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67bcfd764f-k47q8"] Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.904958 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwfbm\" (UniqueName: \"kubernetes.io/projected/aea4901f-25a3-4b34-bfe0-a2885011e23d-kube-api-access-bwfbm\") pod \"keystone-bootstrap-h6l55\" (UID: \"aea4901f-25a3-4b34-bfe0-a2885011e23d\") " pod="openstack/keystone-bootstrap-h6l55" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.905010 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-config-data\") pod \"keystone-bootstrap-h6l55\" (UID: \"aea4901f-25a3-4b34-bfe0-a2885011e23d\") " pod="openstack/keystone-bootstrap-h6l55" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.905043 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-credential-keys\") pod \"keystone-bootstrap-h6l55\" (UID: \"aea4901f-25a3-4b34-bfe0-a2885011e23d\") " pod="openstack/keystone-bootstrap-h6l55" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.905063 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f60aeb37-a13d-4456-892c-492359750fc4-dns-svc\") pod \"dnsmasq-dns-67bcfd764f-k47q8\" (UID: \"f60aeb37-a13d-4456-892c-492359750fc4\") " pod="openstack/dnsmasq-dns-67bcfd764f-k47q8" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.905091 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f60aeb37-a13d-4456-892c-492359750fc4-ovsdbserver-nb\") pod \"dnsmasq-dns-67bcfd764f-k47q8\" (UID: \"f60aeb37-a13d-4456-892c-492359750fc4\") " pod="openstack/dnsmasq-dns-67bcfd764f-k47q8" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.905113 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f60aeb37-a13d-4456-892c-492359750fc4-ovsdbserver-sb\") pod \"dnsmasq-dns-67bcfd764f-k47q8\" (UID: \"f60aeb37-a13d-4456-892c-492359750fc4\") " pod="openstack/dnsmasq-dns-67bcfd764f-k47q8" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.905137 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-scripts\") pod \"keystone-bootstrap-h6l55\" (UID: \"aea4901f-25a3-4b34-bfe0-a2885011e23d\") " pod="openstack/keystone-bootstrap-h6l55" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.905159 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-fernet-keys\") pod \"keystone-bootstrap-h6l55\" (UID: \"aea4901f-25a3-4b34-bfe0-a2885011e23d\") " pod="openstack/keystone-bootstrap-h6l55" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.905212 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sw8c\" (UniqueName: \"kubernetes.io/projected/f60aeb37-a13d-4456-892c-492359750fc4-kube-api-access-6sw8c\") pod \"dnsmasq-dns-67bcfd764f-k47q8\" (UID: \"f60aeb37-a13d-4456-892c-492359750fc4\") " pod="openstack/dnsmasq-dns-67bcfd764f-k47q8" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.905230 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f60aeb37-a13d-4456-892c-492359750fc4-config\") pod \"dnsmasq-dns-67bcfd764f-k47q8\" (UID: \"f60aeb37-a13d-4456-892c-492359750fc4\") " pod="openstack/dnsmasq-dns-67bcfd764f-k47q8" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.905250 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-combined-ca-bundle\") pod \"keystone-bootstrap-h6l55\" (UID: \"aea4901f-25a3-4b34-bfe0-a2885011e23d\") " pod="openstack/keystone-bootstrap-h6l55" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.952761 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7bfdfffbc7-ft2dc"] Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.954122 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7bfdfffbc7-ft2dc" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.962807 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.969186 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.969398 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.969515 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-c2jg8" Oct 11 07:55:51 crc kubenswrapper[5016]: I1011 07:55:51.997322 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bfdfffbc7-ft2dc"] Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.006863 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sw8c\" (UniqueName: \"kubernetes.io/projected/f60aeb37-a13d-4456-892c-492359750fc4-kube-api-access-6sw8c\") pod \"dnsmasq-dns-67bcfd764f-k47q8\" (UID: \"f60aeb37-a13d-4456-892c-492359750fc4\") " pod="openstack/dnsmasq-dns-67bcfd764f-k47q8" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.006927 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ec7360b-0d93-4691-aaad-9f3994cc00d7-config-data\") pod \"horizon-7bfdfffbc7-ft2dc\" (UID: \"1ec7360b-0d93-4691-aaad-9f3994cc00d7\") " pod="openstack/horizon-7bfdfffbc7-ft2dc" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.006952 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f60aeb37-a13d-4456-892c-492359750fc4-config\") pod \"dnsmasq-dns-67bcfd764f-k47q8\" (UID: \"f60aeb37-a13d-4456-892c-492359750fc4\") " pod="openstack/dnsmasq-dns-67bcfd764f-k47q8" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.006991 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-combined-ca-bundle\") pod \"keystone-bootstrap-h6l55\" (UID: \"aea4901f-25a3-4b34-bfe0-a2885011e23d\") " pod="openstack/keystone-bootstrap-h6l55" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.007032 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ec7360b-0d93-4691-aaad-9f3994cc00d7-scripts\") pod \"horizon-7bfdfffbc7-ft2dc\" (UID: \"1ec7360b-0d93-4691-aaad-9f3994cc00d7\") " pod="openstack/horizon-7bfdfffbc7-ft2dc" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.007075 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwfbm\" (UniqueName: \"kubernetes.io/projected/aea4901f-25a3-4b34-bfe0-a2885011e23d-kube-api-access-bwfbm\") pod \"keystone-bootstrap-h6l55\" (UID: \"aea4901f-25a3-4b34-bfe0-a2885011e23d\") " pod="openstack/keystone-bootstrap-h6l55" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.007105 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-config-data\") pod \"keystone-bootstrap-h6l55\" (UID: \"aea4901f-25a3-4b34-bfe0-a2885011e23d\") " 
pod="openstack/keystone-bootstrap-h6l55" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.007151 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmh8m\" (UniqueName: \"kubernetes.io/projected/1ec7360b-0d93-4691-aaad-9f3994cc00d7-kube-api-access-mmh8m\") pod \"horizon-7bfdfffbc7-ft2dc\" (UID: \"1ec7360b-0d93-4691-aaad-9f3994cc00d7\") " pod="openstack/horizon-7bfdfffbc7-ft2dc" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.007175 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ec7360b-0d93-4691-aaad-9f3994cc00d7-logs\") pod \"horizon-7bfdfffbc7-ft2dc\" (UID: \"1ec7360b-0d93-4691-aaad-9f3994cc00d7\") " pod="openstack/horizon-7bfdfffbc7-ft2dc" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.007192 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-credential-keys\") pod \"keystone-bootstrap-h6l55\" (UID: \"aea4901f-25a3-4b34-bfe0-a2885011e23d\") " pod="openstack/keystone-bootstrap-h6l55" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.007228 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ec7360b-0d93-4691-aaad-9f3994cc00d7-horizon-secret-key\") pod \"horizon-7bfdfffbc7-ft2dc\" (UID: \"1ec7360b-0d93-4691-aaad-9f3994cc00d7\") " pod="openstack/horizon-7bfdfffbc7-ft2dc" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.007246 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f60aeb37-a13d-4456-892c-492359750fc4-dns-svc\") pod \"dnsmasq-dns-67bcfd764f-k47q8\" (UID: \"f60aeb37-a13d-4456-892c-492359750fc4\") " pod="openstack/dnsmasq-dns-67bcfd764f-k47q8" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.007301 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f60aeb37-a13d-4456-892c-492359750fc4-ovsdbserver-nb\") pod \"dnsmasq-dns-67bcfd764f-k47q8\" (UID: \"f60aeb37-a13d-4456-892c-492359750fc4\") " pod="openstack/dnsmasq-dns-67bcfd764f-k47q8" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.007323 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f60aeb37-a13d-4456-892c-492359750fc4-ovsdbserver-sb\") pod \"dnsmasq-dns-67bcfd764f-k47q8\" (UID: \"f60aeb37-a13d-4456-892c-492359750fc4\") " pod="openstack/dnsmasq-dns-67bcfd764f-k47q8" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.007354 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-scripts\") pod \"keystone-bootstrap-h6l55\" (UID: \"aea4901f-25a3-4b34-bfe0-a2885011e23d\") " pod="openstack/keystone-bootstrap-h6l55" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.007401 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-fernet-keys\") pod \"keystone-bootstrap-h6l55\" (UID: \"aea4901f-25a3-4b34-bfe0-a2885011e23d\") " pod="openstack/keystone-bootstrap-h6l55" Oct 11 07:55:52 crc kubenswrapper[5016]: 
I1011 07:55:52.008988 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f60aeb37-a13d-4456-892c-492359750fc4-config\") pod \"dnsmasq-dns-67bcfd764f-k47q8\" (UID: \"f60aeb37-a13d-4456-892c-492359750fc4\") " pod="openstack/dnsmasq-dns-67bcfd764f-k47q8" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.012563 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-combined-ca-bundle\") pod \"keystone-bootstrap-h6l55\" (UID: \"aea4901f-25a3-4b34-bfe0-a2885011e23d\") " pod="openstack/keystone-bootstrap-h6l55" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.016427 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f60aeb37-a13d-4456-892c-492359750fc4-ovsdbserver-sb\") pod \"dnsmasq-dns-67bcfd764f-k47q8\" (UID: \"f60aeb37-a13d-4456-892c-492359750fc4\") " pod="openstack/dnsmasq-dns-67bcfd764f-k47q8" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.016993 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f60aeb37-a13d-4456-892c-492359750fc4-ovsdbserver-nb\") pod \"dnsmasq-dns-67bcfd764f-k47q8\" (UID: \"f60aeb37-a13d-4456-892c-492359750fc4\") " pod="openstack/dnsmasq-dns-67bcfd764f-k47q8" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.017323 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f60aeb37-a13d-4456-892c-492359750fc4-dns-svc\") pod \"dnsmasq-dns-67bcfd764f-k47q8\" (UID: \"f60aeb37-a13d-4456-892c-492359750fc4\") " pod="openstack/dnsmasq-dns-67bcfd764f-k47q8" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.022455 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-fernet-keys\") pod \"keystone-bootstrap-h6l55\" (UID: \"aea4901f-25a3-4b34-bfe0-a2885011e23d\") " pod="openstack/keystone-bootstrap-h6l55" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.025341 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-config-data\") pod \"keystone-bootstrap-h6l55\" (UID: \"aea4901f-25a3-4b34-bfe0-a2885011e23d\") " pod="openstack/keystone-bootstrap-h6l55" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.035020 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-scripts\") pod \"keystone-bootstrap-h6l55\" (UID: \"aea4901f-25a3-4b34-bfe0-a2885011e23d\") " pod="openstack/keystone-bootstrap-h6l55" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.054198 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-credential-keys\") pod \"keystone-bootstrap-h6l55\" (UID: \"aea4901f-25a3-4b34-bfe0-a2885011e23d\") " pod="openstack/keystone-bootstrap-h6l55" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.055016 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sw8c\" (UniqueName: \"kubernetes.io/projected/f60aeb37-a13d-4456-892c-492359750fc4-kube-api-access-6sw8c\") pod 
\"dnsmasq-dns-67bcfd764f-k47q8\" (UID: \"f60aeb37-a13d-4456-892c-492359750fc4\") " pod="openstack/dnsmasq-dns-67bcfd764f-k47q8" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.057932 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwfbm\" (UniqueName: \"kubernetes.io/projected/aea4901f-25a3-4b34-bfe0-a2885011e23d-kube-api-access-bwfbm\") pod \"keystone-bootstrap-h6l55\" (UID: \"aea4901f-25a3-4b34-bfe0-a2885011e23d\") " pod="openstack/keystone-bootstrap-h6l55" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.080184 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.082329 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.121631 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.121858 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.153644 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67bcfd764f-k47q8" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.153824 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ec7360b-0d93-4691-aaad-9f3994cc00d7-config-data\") pod \"horizon-7bfdfffbc7-ft2dc\" (UID: \"1ec7360b-0d93-4691-aaad-9f3994cc00d7\") " pod="openstack/horizon-7bfdfffbc7-ft2dc" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.153869 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/353d22c0-bfdb-4599-a97c-9000eda08e3d-config-data\") pod \"ceilometer-0\" (UID: \"353d22c0-bfdb-4599-a97c-9000eda08e3d\") " pod="openstack/ceilometer-0" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.153927 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ec7360b-0d93-4691-aaad-9f3994cc00d7-scripts\") pod \"horizon-7bfdfffbc7-ft2dc\" (UID: \"1ec7360b-0d93-4691-aaad-9f3994cc00d7\") " pod="openstack/horizon-7bfdfffbc7-ft2dc" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.154008 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/353d22c0-bfdb-4599-a97c-9000eda08e3d-scripts\") pod \"ceilometer-0\" (UID: \"353d22c0-bfdb-4599-a97c-9000eda08e3d\") " pod="openstack/ceilometer-0" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.154029 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmh8m\" (UniqueName: \"kubernetes.io/projected/1ec7360b-0d93-4691-aaad-9f3994cc00d7-kube-api-access-mmh8m\") pod \"horizon-7bfdfffbc7-ft2dc\" (UID: \"1ec7360b-0d93-4691-aaad-9f3994cc00d7\") " pod="openstack/horizon-7bfdfffbc7-ft2dc" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.154049 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbjv7\" (UniqueName: \"kubernetes.io/projected/353d22c0-bfdb-4599-a97c-9000eda08e3d-kube-api-access-hbjv7\") pod \"ceilometer-0\" (UID: 
\"353d22c0-bfdb-4599-a97c-9000eda08e3d\") " pod="openstack/ceilometer-0" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.154068 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/353d22c0-bfdb-4599-a97c-9000eda08e3d-run-httpd\") pod \"ceilometer-0\" (UID: \"353d22c0-bfdb-4599-a97c-9000eda08e3d\") " pod="openstack/ceilometer-0" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.154082 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ec7360b-0d93-4691-aaad-9f3994cc00d7-logs\") pod \"horizon-7bfdfffbc7-ft2dc\" (UID: \"1ec7360b-0d93-4691-aaad-9f3994cc00d7\") " pod="openstack/horizon-7bfdfffbc7-ft2dc" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.154103 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/353d22c0-bfdb-4599-a97c-9000eda08e3d-log-httpd\") pod \"ceilometer-0\" (UID: \"353d22c0-bfdb-4599-a97c-9000eda08e3d\") " pod="openstack/ceilometer-0" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.154129 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ec7360b-0d93-4691-aaad-9f3994cc00d7-horizon-secret-key\") pod \"horizon-7bfdfffbc7-ft2dc\" (UID: \"1ec7360b-0d93-4691-aaad-9f3994cc00d7\") " pod="openstack/horizon-7bfdfffbc7-ft2dc" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.154144 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/353d22c0-bfdb-4599-a97c-9000eda08e3d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"353d22c0-bfdb-4599-a97c-9000eda08e3d\") " pod="openstack/ceilometer-0" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.154208 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353d22c0-bfdb-4599-a97c-9000eda08e3d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"353d22c0-bfdb-4599-a97c-9000eda08e3d\") " pod="openstack/ceilometer-0" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.155645 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ec7360b-0d93-4691-aaad-9f3994cc00d7-scripts\") pod \"horizon-7bfdfffbc7-ft2dc\" (UID: \"1ec7360b-0d93-4691-aaad-9f3994cc00d7\") " pod="openstack/horizon-7bfdfffbc7-ft2dc" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.156132 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ec7360b-0d93-4691-aaad-9f3994cc00d7-logs\") pod \"horizon-7bfdfffbc7-ft2dc\" (UID: \"1ec7360b-0d93-4691-aaad-9f3994cc00d7\") " pod="openstack/horizon-7bfdfffbc7-ft2dc" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.157072 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ec7360b-0d93-4691-aaad-9f3994cc00d7-config-data\") pod \"horizon-7bfdfffbc7-ft2dc\" (UID: \"1ec7360b-0d93-4691-aaad-9f3994cc00d7\") " pod="openstack/horizon-7bfdfffbc7-ft2dc" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.170946 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 11 07:55:52 
crc kubenswrapper[5016]: I1011 07:55:52.186745 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmh8m\" (UniqueName: \"kubernetes.io/projected/1ec7360b-0d93-4691-aaad-9f3994cc00d7-kube-api-access-mmh8m\") pod \"horizon-7bfdfffbc7-ft2dc\" (UID: \"1ec7360b-0d93-4691-aaad-9f3994cc00d7\") " pod="openstack/horizon-7bfdfffbc7-ft2dc" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.210059 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ec7360b-0d93-4691-aaad-9f3994cc00d7-horizon-secret-key\") pod \"horizon-7bfdfffbc7-ft2dc\" (UID: \"1ec7360b-0d93-4691-aaad-9f3994cc00d7\") " pod="openstack/horizon-7bfdfffbc7-ft2dc" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.252835 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bfdfffbc7-ft2dc" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.255881 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/353d22c0-bfdb-4599-a97c-9000eda08e3d-scripts\") pod \"ceilometer-0\" (UID: \"353d22c0-bfdb-4599-a97c-9000eda08e3d\") " pod="openstack/ceilometer-0" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.255929 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbjv7\" (UniqueName: \"kubernetes.io/projected/353d22c0-bfdb-4599-a97c-9000eda08e3d-kube-api-access-hbjv7\") pod \"ceilometer-0\" (UID: \"353d22c0-bfdb-4599-a97c-9000eda08e3d\") " pod="openstack/ceilometer-0" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.255947 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/353d22c0-bfdb-4599-a97c-9000eda08e3d-run-httpd\") pod \"ceilometer-0\" (UID: \"353d22c0-bfdb-4599-a97c-9000eda08e3d\") " pod="openstack/ceilometer-0" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.255963 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/353d22c0-bfdb-4599-a97c-9000eda08e3d-log-httpd\") pod \"ceilometer-0\" (UID: \"353d22c0-bfdb-4599-a97c-9000eda08e3d\") " pod="openstack/ceilometer-0" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.255995 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/353d22c0-bfdb-4599-a97c-9000eda08e3d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"353d22c0-bfdb-4599-a97c-9000eda08e3d\") " pod="openstack/ceilometer-0" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.256037 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353d22c0-bfdb-4599-a97c-9000eda08e3d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"353d22c0-bfdb-4599-a97c-9000eda08e3d\") " pod="openstack/ceilometer-0" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.256104 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/353d22c0-bfdb-4599-a97c-9000eda08e3d-config-data\") pod \"ceilometer-0\" (UID: \"353d22c0-bfdb-4599-a97c-9000eda08e3d\") " pod="openstack/ceilometer-0" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.259724 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/353d22c0-bfdb-4599-a97c-9000eda08e3d-run-httpd\") pod \"ceilometer-0\" (UID: \"353d22c0-bfdb-4599-a97c-9000eda08e3d\") " pod="openstack/ceilometer-0" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.268958 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/353d22c0-bfdb-4599-a97c-9000eda08e3d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"353d22c0-bfdb-4599-a97c-9000eda08e3d\") " pod="openstack/ceilometer-0" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.269358 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353d22c0-bfdb-4599-a97c-9000eda08e3d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"353d22c0-bfdb-4599-a97c-9000eda08e3d\") " pod="openstack/ceilometer-0" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.270041 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/353d22c0-bfdb-4599-a97c-9000eda08e3d-log-httpd\") pod \"ceilometer-0\" (UID: \"353d22c0-bfdb-4599-a97c-9000eda08e3d\") " pod="openstack/ceilometer-0" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.270621 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/353d22c0-bfdb-4599-a97c-9000eda08e3d-scripts\") pod \"ceilometer-0\" (UID: \"353d22c0-bfdb-4599-a97c-9000eda08e3d\") " pod="openstack/ceilometer-0" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.276425 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6679b846c9-4jxzp"] Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.287463 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6679b846c9-4jxzp" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.300239 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-h6l55" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.312268 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/353d22c0-bfdb-4599-a97c-9000eda08e3d-config-data\") pod \"ceilometer-0\" (UID: \"353d22c0-bfdb-4599-a97c-9000eda08e3d\") " pod="openstack/ceilometer-0" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.326462 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbjv7\" (UniqueName: \"kubernetes.io/projected/353d22c0-bfdb-4599-a97c-9000eda08e3d-kube-api-access-hbjv7\") pod \"ceilometer-0\" (UID: \"353d22c0-bfdb-4599-a97c-9000eda08e3d\") " pod="openstack/ceilometer-0" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.327568 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.369825 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4acf875d-ca40-47ff-a2e9-cdf09c447232-config-data\") pod \"horizon-6679b846c9-4jxzp\" (UID: \"4acf875d-ca40-47ff-a2e9-cdf09c447232\") " pod="openstack/horizon-6679b846c9-4jxzp" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.369914 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7zhw\" (UniqueName: \"kubernetes.io/projected/4acf875d-ca40-47ff-a2e9-cdf09c447232-kube-api-access-g7zhw\") pod \"horizon-6679b846c9-4jxzp\" (UID: \"4acf875d-ca40-47ff-a2e9-cdf09c447232\") " pod="openstack/horizon-6679b846c9-4jxzp" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.369934 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4acf875d-ca40-47ff-a2e9-cdf09c447232-horizon-secret-key\") pod \"horizon-6679b846c9-4jxzp\" (UID: \"4acf875d-ca40-47ff-a2e9-cdf09c447232\") " pod="openstack/horizon-6679b846c9-4jxzp" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.369965 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4acf875d-ca40-47ff-a2e9-cdf09c447232-scripts\") pod \"horizon-6679b846c9-4jxzp\" (UID: \"4acf875d-ca40-47ff-a2e9-cdf09c447232\") " pod="openstack/horizon-6679b846c9-4jxzp" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.370048 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4acf875d-ca40-47ff-a2e9-cdf09c447232-logs\") pod \"horizon-6679b846c9-4jxzp\" (UID: \"4acf875d-ca40-47ff-a2e9-cdf09c447232\") " pod="openstack/horizon-6679b846c9-4jxzp" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.374345 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6679b846c9-4jxzp"] Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.392320 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-w78ms"] Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.393397 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-w78ms" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.403324 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.404683 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-btndg" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.413629 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.471614 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b78f643d-d3c2-4cf1-8bb3-ee749e569273-scripts\") pod \"placement-db-sync-w78ms\" (UID: \"b78f643d-d3c2-4cf1-8bb3-ee749e569273\") " pod="openstack/placement-db-sync-w78ms" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.471674 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4acf875d-ca40-47ff-a2e9-cdf09c447232-config-data\") pod \"horizon-6679b846c9-4jxzp\" (UID: \"4acf875d-ca40-47ff-a2e9-cdf09c447232\") " pod="openstack/horizon-6679b846c9-4jxzp" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.471697 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b78f643d-d3c2-4cf1-8bb3-ee749e569273-logs\") pod \"placement-db-sync-w78ms\" (UID: \"b78f643d-d3c2-4cf1-8bb3-ee749e569273\") " pod="openstack/placement-db-sync-w78ms" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.471715 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9n4m\" (UniqueName: \"kubernetes.io/projected/b78f643d-d3c2-4cf1-8bb3-ee749e569273-kube-api-access-f9n4m\") pod \"placement-db-sync-w78ms\" (UID: \"b78f643d-d3c2-4cf1-8bb3-ee749e569273\") " pod="openstack/placement-db-sync-w78ms" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.471762 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4acf875d-ca40-47ff-a2e9-cdf09c447232-horizon-secret-key\") pod \"horizon-6679b846c9-4jxzp\" (UID: \"4acf875d-ca40-47ff-a2e9-cdf09c447232\") " pod="openstack/horizon-6679b846c9-4jxzp" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.471779 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7zhw\" (UniqueName: \"kubernetes.io/projected/4acf875d-ca40-47ff-a2e9-cdf09c447232-kube-api-access-g7zhw\") pod \"horizon-6679b846c9-4jxzp\" (UID: \"4acf875d-ca40-47ff-a2e9-cdf09c447232\") " pod="openstack/horizon-6679b846c9-4jxzp" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.471793 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b78f643d-d3c2-4cf1-8bb3-ee749e569273-config-data\") pod \"placement-db-sync-w78ms\" (UID: \"b78f643d-d3c2-4cf1-8bb3-ee749e569273\") " pod="openstack/placement-db-sync-w78ms" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.471821 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b78f643d-d3c2-4cf1-8bb3-ee749e569273-combined-ca-bundle\") pod \"placement-db-sync-w78ms\" (UID: \"b78f643d-d3c2-4cf1-8bb3-ee749e569273\") " pod="openstack/placement-db-sync-w78ms" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.471842 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4acf875d-ca40-47ff-a2e9-cdf09c447232-scripts\") pod \"horizon-6679b846c9-4jxzp\" (UID: \"4acf875d-ca40-47ff-a2e9-cdf09c447232\") " pod="openstack/horizon-6679b846c9-4jxzp" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.471895 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4acf875d-ca40-47ff-a2e9-cdf09c447232-logs\") pod \"horizon-6679b846c9-4jxzp\" (UID: \"4acf875d-ca40-47ff-a2e9-cdf09c447232\") " pod="openstack/horizon-6679b846c9-4jxzp" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.472976 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4acf875d-ca40-47ff-a2e9-cdf09c447232-logs\") pod \"horizon-6679b846c9-4jxzp\" (UID: \"4acf875d-ca40-47ff-a2e9-cdf09c447232\") " pod="openstack/horizon-6679b846c9-4jxzp" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.474077 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4acf875d-ca40-47ff-a2e9-cdf09c447232-config-data\") pod \"horizon-6679b846c9-4jxzp\" (UID: \"4acf875d-ca40-47ff-a2e9-cdf09c447232\") " pod="openstack/horizon-6679b846c9-4jxzp" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.475643 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4acf875d-ca40-47ff-a2e9-cdf09c447232-scripts\") pod \"horizon-6679b846c9-4jxzp\" (UID: \"4acf875d-ca40-47ff-a2e9-cdf09c447232\") " pod="openstack/horizon-6679b846c9-4jxzp" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.482551 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4acf875d-ca40-47ff-a2e9-cdf09c447232-horizon-secret-key\") pod \"horizon-6679b846c9-4jxzp\" (UID: \"4acf875d-ca40-47ff-a2e9-cdf09c447232\") " pod="openstack/horizon-6679b846c9-4jxzp" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.502643 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7zhw\" (UniqueName: \"kubernetes.io/projected/4acf875d-ca40-47ff-a2e9-cdf09c447232-kube-api-access-g7zhw\") pod \"horizon-6679b846c9-4jxzp\" (UID: \"4acf875d-ca40-47ff-a2e9-cdf09c447232\") " pod="openstack/horizon-6679b846c9-4jxzp" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.502795 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-w78ms"] Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.522538 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67bcfd764f-k47q8"] Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.544585 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b99bccc6c-868d2"] Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.545910 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b99bccc6c-868d2" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.572324 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74b7749bc7-g72pk" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.572510 5016 generic.go:334] "Generic (PLEG): container finished" podID="fecd9648-bc55-4ee0-bc55-0044cc757300" containerID="1f9093dcfc1222b0120a08927d00b243c23772aa6dc1c5a1db3bb77bf371dd6e" exitCode=0 Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.572532 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fecd9648-bc55-4ee0-bc55-0044cc757300-ovsdbserver-sb\") pod \"fecd9648-bc55-4ee0-bc55-0044cc757300\" (UID: \"fecd9648-bc55-4ee0-bc55-0044cc757300\") " Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.572538 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74b7749bc7-g72pk" event={"ID":"fecd9648-bc55-4ee0-bc55-0044cc757300","Type":"ContainerDied","Data":"1f9093dcfc1222b0120a08927d00b243c23772aa6dc1c5a1db3bb77bf371dd6e"} Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.572558 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74b7749bc7-g72pk" event={"ID":"fecd9648-bc55-4ee0-bc55-0044cc757300","Type":"ContainerDied","Data":"e5bee8f23288efbb5c34d8c6d05643a84c809434f44cb685638ef40fd18eb4fb"} Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.572575 5016 scope.go:117] "RemoveContainer" containerID="1f9093dcfc1222b0120a08927d00b243c23772aa6dc1c5a1db3bb77bf371dd6e" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.572739 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz5fs\" (UniqueName: \"kubernetes.io/projected/fecd9648-bc55-4ee0-bc55-0044cc757300-kube-api-access-kz5fs\") pod \"fecd9648-bc55-4ee0-bc55-0044cc757300\" (UID: \"fecd9648-bc55-4ee0-bc55-0044cc757300\") " Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.572850 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fecd9648-bc55-4ee0-bc55-0044cc757300-dns-svc\") pod \"fecd9648-bc55-4ee0-bc55-0044cc757300\" (UID: \"fecd9648-bc55-4ee0-bc55-0044cc757300\") " Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.573028 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b78f643d-d3c2-4cf1-8bb3-ee749e569273-scripts\") pod \"placement-db-sync-w78ms\" (UID: \"b78f643d-d3c2-4cf1-8bb3-ee749e569273\") " pod="openstack/placement-db-sync-w78ms" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.573067 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b78f643d-d3c2-4cf1-8bb3-ee749e569273-logs\") pod \"placement-db-sync-w78ms\" (UID: \"b78f643d-d3c2-4cf1-8bb3-ee749e569273\") " pod="openstack/placement-db-sync-w78ms" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.573083 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9n4m\" (UniqueName: \"kubernetes.io/projected/b78f643d-d3c2-4cf1-8bb3-ee749e569273-kube-api-access-f9n4m\") pod \"placement-db-sync-w78ms\" (UID: \"b78f643d-d3c2-4cf1-8bb3-ee749e569273\") " pod="openstack/placement-db-sync-w78ms" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 
07:55:52.573125 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/088fb899-923b-4215-9d9a-22bef9a6891b-ovsdbserver-sb\") pod \"dnsmasq-dns-7b99bccc6c-868d2\" (UID: \"088fb899-923b-4215-9d9a-22bef9a6891b\") " pod="openstack/dnsmasq-dns-7b99bccc6c-868d2" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.573191 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b78f643d-d3c2-4cf1-8bb3-ee749e569273-config-data\") pod \"placement-db-sync-w78ms\" (UID: \"b78f643d-d3c2-4cf1-8bb3-ee749e569273\") " pod="openstack/placement-db-sync-w78ms" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.573230 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/088fb899-923b-4215-9d9a-22bef9a6891b-ovsdbserver-nb\") pod \"dnsmasq-dns-7b99bccc6c-868d2\" (UID: \"088fb899-923b-4215-9d9a-22bef9a6891b\") " pod="openstack/dnsmasq-dns-7b99bccc6c-868d2" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.573253 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b78f643d-d3c2-4cf1-8bb3-ee749e569273-combined-ca-bundle\") pod \"placement-db-sync-w78ms\" (UID: \"b78f643d-d3c2-4cf1-8bb3-ee749e569273\") " pod="openstack/placement-db-sync-w78ms" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.573373 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/088fb899-923b-4215-9d9a-22bef9a6891b-config\") pod \"dnsmasq-dns-7b99bccc6c-868d2\" (UID: \"088fb899-923b-4215-9d9a-22bef9a6891b\") " pod="openstack/dnsmasq-dns-7b99bccc6c-868d2" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.573390 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hq75\" (UniqueName: \"kubernetes.io/projected/088fb899-923b-4215-9d9a-22bef9a6891b-kube-api-access-5hq75\") pod \"dnsmasq-dns-7b99bccc6c-868d2\" (UID: \"088fb899-923b-4215-9d9a-22bef9a6891b\") " pod="openstack/dnsmasq-dns-7b99bccc6c-868d2" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.573429 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/088fb899-923b-4215-9d9a-22bef9a6891b-dns-svc\") pod \"dnsmasq-dns-7b99bccc6c-868d2\" (UID: \"088fb899-923b-4215-9d9a-22bef9a6891b\") " pod="openstack/dnsmasq-dns-7b99bccc6c-868d2" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.573593 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b78f643d-d3c2-4cf1-8bb3-ee749e569273-logs\") pod \"placement-db-sync-w78ms\" (UID: \"b78f643d-d3c2-4cf1-8bb3-ee749e569273\") " pod="openstack/placement-db-sync-w78ms" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.579224 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b78f643d-d3c2-4cf1-8bb3-ee749e569273-config-data\") pod \"placement-db-sync-w78ms\" (UID: \"b78f643d-d3c2-4cf1-8bb3-ee749e569273\") " pod="openstack/placement-db-sync-w78ms" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.579722 5016 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b78f643d-d3c2-4cf1-8bb3-ee749e569273-scripts\") pod \"placement-db-sync-w78ms\" (UID: \"b78f643d-d3c2-4cf1-8bb3-ee749e569273\") " pod="openstack/placement-db-sync-w78ms" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.589049 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b99bccc6c-868d2"] Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.589046 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fecd9648-bc55-4ee0-bc55-0044cc757300-kube-api-access-kz5fs" (OuterVolumeSpecName: "kube-api-access-kz5fs") pod "fecd9648-bc55-4ee0-bc55-0044cc757300" (UID: "fecd9648-bc55-4ee0-bc55-0044cc757300"). InnerVolumeSpecName "kube-api-access-kz5fs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.594320 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b78f643d-d3c2-4cf1-8bb3-ee749e569273-combined-ca-bundle\") pod \"placement-db-sync-w78ms\" (UID: \"b78f643d-d3c2-4cf1-8bb3-ee749e569273\") " pod="openstack/placement-db-sync-w78ms" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.616757 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9n4m\" (UniqueName: \"kubernetes.io/projected/b78f643d-d3c2-4cf1-8bb3-ee749e569273-kube-api-access-f9n4m\") pod \"placement-db-sync-w78ms\" (UID: \"b78f643d-d3c2-4cf1-8bb3-ee749e569273\") " pod="openstack/placement-db-sync-w78ms" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.627940 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fecd9648-bc55-4ee0-bc55-0044cc757300-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fecd9648-bc55-4ee0-bc55-0044cc757300" (UID: "fecd9648-bc55-4ee0-bc55-0044cc757300"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.643567 5016 scope.go:117] "RemoveContainer" containerID="0d7d6813e515e8aad111b1edda50c9c6e592f524fe8152ecb796cea62becd98b" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.655150 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6679b846c9-4jxzp" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.668387 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fecd9648-bc55-4ee0-bc55-0044cc757300-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fecd9648-bc55-4ee0-bc55-0044cc757300" (UID: "fecd9648-bc55-4ee0-bc55-0044cc757300"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.672316 5016 scope.go:117] "RemoveContainer" containerID="1f9093dcfc1222b0120a08927d00b243c23772aa6dc1c5a1db3bb77bf371dd6e" Oct 11 07:55:52 crc kubenswrapper[5016]: E1011 07:55:52.672729 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f9093dcfc1222b0120a08927d00b243c23772aa6dc1c5a1db3bb77bf371dd6e\": container with ID starting with 1f9093dcfc1222b0120a08927d00b243c23772aa6dc1c5a1db3bb77bf371dd6e not found: ID does not exist" containerID="1f9093dcfc1222b0120a08927d00b243c23772aa6dc1c5a1db3bb77bf371dd6e" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.672762 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f9093dcfc1222b0120a08927d00b243c23772aa6dc1c5a1db3bb77bf371dd6e"} err="failed to get container status \"1f9093dcfc1222b0120a08927d00b243c23772aa6dc1c5a1db3bb77bf371dd6e\": rpc error: code = NotFound desc = could not find container \"1f9093dcfc1222b0120a08927d00b243c23772aa6dc1c5a1db3bb77bf371dd6e\": container with ID starting with 1f9093dcfc1222b0120a08927d00b243c23772aa6dc1c5a1db3bb77bf371dd6e not found: ID does not exist" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.672782 5016 scope.go:117] "RemoveContainer" containerID="0d7d6813e515e8aad111b1edda50c9c6e592f524fe8152ecb796cea62becd98b" Oct 11 07:55:52 crc kubenswrapper[5016]: E1011 07:55:52.673052 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d7d6813e515e8aad111b1edda50c9c6e592f524fe8152ecb796cea62becd98b\": container with ID starting with 0d7d6813e515e8aad111b1edda50c9c6e592f524fe8152ecb796cea62becd98b not found: ID does not exist" containerID="0d7d6813e515e8aad111b1edda50c9c6e592f524fe8152ecb796cea62becd98b" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.673079 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d7d6813e515e8aad111b1edda50c9c6e592f524fe8152ecb796cea62becd98b"} err="failed to get container status \"0d7d6813e515e8aad111b1edda50c9c6e592f524fe8152ecb796cea62becd98b\": rpc error: code = NotFound desc = could not find container \"0d7d6813e515e8aad111b1edda50c9c6e592f524fe8152ecb796cea62becd98b\": container with ID starting with 0d7d6813e515e8aad111b1edda50c9c6e592f524fe8152ecb796cea62becd98b not found: ID does not exist" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.674884 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/088fb899-923b-4215-9d9a-22bef9a6891b-dns-svc\") pod \"dnsmasq-dns-7b99bccc6c-868d2\" (UID: \"088fb899-923b-4215-9d9a-22bef9a6891b\") " pod="openstack/dnsmasq-dns-7b99bccc6c-868d2" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.674950 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/088fb899-923b-4215-9d9a-22bef9a6891b-ovsdbserver-sb\") pod \"dnsmasq-dns-7b99bccc6c-868d2\" (UID: \"088fb899-923b-4215-9d9a-22bef9a6891b\") " pod="openstack/dnsmasq-dns-7b99bccc6c-868d2" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.674996 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/088fb899-923b-4215-9d9a-22bef9a6891b-ovsdbserver-nb\") pod 
\"dnsmasq-dns-7b99bccc6c-868d2\" (UID: \"088fb899-923b-4215-9d9a-22bef9a6891b\") " pod="openstack/dnsmasq-dns-7b99bccc6c-868d2" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.676109 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/088fb899-923b-4215-9d9a-22bef9a6891b-dns-svc\") pod \"dnsmasq-dns-7b99bccc6c-868d2\" (UID: \"088fb899-923b-4215-9d9a-22bef9a6891b\") " pod="openstack/dnsmasq-dns-7b99bccc6c-868d2" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.676432 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/088fb899-923b-4215-9d9a-22bef9a6891b-config\") pod \"dnsmasq-dns-7b99bccc6c-868d2\" (UID: \"088fb899-923b-4215-9d9a-22bef9a6891b\") " pod="openstack/dnsmasq-dns-7b99bccc6c-868d2" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.676461 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hq75\" (UniqueName: \"kubernetes.io/projected/088fb899-923b-4215-9d9a-22bef9a6891b-kube-api-access-5hq75\") pod \"dnsmasq-dns-7b99bccc6c-868d2\" (UID: \"088fb899-923b-4215-9d9a-22bef9a6891b\") " pod="openstack/dnsmasq-dns-7b99bccc6c-868d2" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.677817 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/088fb899-923b-4215-9d9a-22bef9a6891b-ovsdbserver-sb\") pod \"dnsmasq-dns-7b99bccc6c-868d2\" (UID: \"088fb899-923b-4215-9d9a-22bef9a6891b\") " pod="openstack/dnsmasq-dns-7b99bccc6c-868d2" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.677844 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz5fs\" (UniqueName: \"kubernetes.io/projected/fecd9648-bc55-4ee0-bc55-0044cc757300-kube-api-access-kz5fs\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.678254 5016 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fecd9648-bc55-4ee0-bc55-0044cc757300-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.678297 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/088fb899-923b-4215-9d9a-22bef9a6891b-ovsdbserver-nb\") pod \"dnsmasq-dns-7b99bccc6c-868d2\" (UID: \"088fb899-923b-4215-9d9a-22bef9a6891b\") " pod="openstack/dnsmasq-dns-7b99bccc6c-868d2" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.678373 5016 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fecd9648-bc55-4ee0-bc55-0044cc757300-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.678935 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/088fb899-923b-4215-9d9a-22bef9a6891b-config\") pod \"dnsmasq-dns-7b99bccc6c-868d2\" (UID: \"088fb899-923b-4215-9d9a-22bef9a6891b\") " pod="openstack/dnsmasq-dns-7b99bccc6c-868d2" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.691242 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hq75\" (UniqueName: \"kubernetes.io/projected/088fb899-923b-4215-9d9a-22bef9a6891b-kube-api-access-5hq75\") pod \"dnsmasq-dns-7b99bccc6c-868d2\" (UID: \"088fb899-923b-4215-9d9a-22bef9a6891b\") " 
pod="openstack/dnsmasq-dns-7b99bccc6c-868d2" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.719505 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-w78ms" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.780518 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fecd9648-bc55-4ee0-bc55-0044cc757300-ovsdbserver-nb\") pod \"fecd9648-bc55-4ee0-bc55-0044cc757300\" (UID: \"fecd9648-bc55-4ee0-bc55-0044cc757300\") " Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.780650 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fecd9648-bc55-4ee0-bc55-0044cc757300-config\") pod \"fecd9648-bc55-4ee0-bc55-0044cc757300\" (UID: \"fecd9648-bc55-4ee0-bc55-0044cc757300\") " Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.837264 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fecd9648-bc55-4ee0-bc55-0044cc757300-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fecd9648-bc55-4ee0-bc55-0044cc757300" (UID: "fecd9648-bc55-4ee0-bc55-0044cc757300"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.841962 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fecd9648-bc55-4ee0-bc55-0044cc757300-config" (OuterVolumeSpecName: "config") pod "fecd9648-bc55-4ee0-bc55-0044cc757300" (UID: "fecd9648-bc55-4ee0-bc55-0044cc757300"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.848446 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67bcfd764f-k47q8"] Oct 11 07:55:52 crc kubenswrapper[5016]: W1011 07:55:52.851967 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf60aeb37_a13d_4456_892c_492359750fc4.slice/crio-e6222a2c06fad2dd37205df1ad69877c3d89be2e110a0a7737e063b003331114 WatchSource:0}: Error finding container e6222a2c06fad2dd37205df1ad69877c3d89be2e110a0a7737e063b003331114: Status 404 returned error can't find the container with id e6222a2c06fad2dd37205df1ad69877c3d89be2e110a0a7737e063b003331114 Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.878144 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b99bccc6c-868d2" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.883528 5016 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fecd9648-bc55-4ee0-bc55-0044cc757300-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.883553 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fecd9648-bc55-4ee0-bc55-0044cc757300-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.951752 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-xqmfx"] Oct 11 07:55:52 crc kubenswrapper[5016]: E1011 07:55:52.956735 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fecd9648-bc55-4ee0-bc55-0044cc757300" containerName="init" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.956771 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="fecd9648-bc55-4ee0-bc55-0044cc757300" containerName="init" Oct 11 07:55:52 crc kubenswrapper[5016]: E1011 07:55:52.956793 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fecd9648-bc55-4ee0-bc55-0044cc757300" containerName="dnsmasq-dns" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.956800 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="fecd9648-bc55-4ee0-bc55-0044cc757300" containerName="dnsmasq-dns" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.957219 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="fecd9648-bc55-4ee0-bc55-0044cc757300" containerName="dnsmasq-dns" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.958031 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-xqmfx" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.962091 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-b2hmf" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.962318 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 11 07:55:52 crc kubenswrapper[5016]: I1011 07:55:52.962977 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.001785 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-xqmfx"] Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.043406 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.074358 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bfdfffbc7-ft2dc"] Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.082296 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-h6l55"] Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.091390 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-config-data\") pod \"cinder-db-sync-xqmfx\" (UID: \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\") " pod="openstack/cinder-db-sync-xqmfx" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.091449 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-scripts\") pod \"cinder-db-sync-xqmfx\" (UID: \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\") " pod="openstack/cinder-db-sync-xqmfx" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.091474 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-db-sync-config-data\") pod \"cinder-db-sync-xqmfx\" (UID: \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\") " pod="openstack/cinder-db-sync-xqmfx" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.091497 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7p95\" (UniqueName: \"kubernetes.io/projected/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-kube-api-access-k7p95\") pod \"cinder-db-sync-xqmfx\" (UID: \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\") " pod="openstack/cinder-db-sync-xqmfx" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.091532 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-etc-machine-id\") pod \"cinder-db-sync-xqmfx\" (UID: \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\") " pod="openstack/cinder-db-sync-xqmfx" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.091577 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-combined-ca-bundle\") pod \"cinder-db-sync-xqmfx\" (UID: \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\") " pod="openstack/cinder-db-sync-xqmfx" Oct 11 07:55:53 crc 
kubenswrapper[5016]: I1011 07:55:53.192633 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-db-sync-config-data\") pod \"cinder-db-sync-xqmfx\" (UID: \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\") " pod="openstack/cinder-db-sync-xqmfx" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.192705 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7p95\" (UniqueName: \"kubernetes.io/projected/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-kube-api-access-k7p95\") pod \"cinder-db-sync-xqmfx\" (UID: \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\") " pod="openstack/cinder-db-sync-xqmfx" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.192760 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-etc-machine-id\") pod \"cinder-db-sync-xqmfx\" (UID: \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\") " pod="openstack/cinder-db-sync-xqmfx" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.192832 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-combined-ca-bundle\") pod \"cinder-db-sync-xqmfx\" (UID: \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\") " pod="openstack/cinder-db-sync-xqmfx" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.192892 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-config-data\") pod \"cinder-db-sync-xqmfx\" (UID: \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\") " pod="openstack/cinder-db-sync-xqmfx" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.192940 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-scripts\") pod \"cinder-db-sync-xqmfx\" (UID: \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\") " pod="openstack/cinder-db-sync-xqmfx" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.193558 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-etc-machine-id\") pod \"cinder-db-sync-xqmfx\" (UID: \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\") " pod="openstack/cinder-db-sync-xqmfx" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.199445 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-combined-ca-bundle\") pod \"cinder-db-sync-xqmfx\" (UID: \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\") " pod="openstack/cinder-db-sync-xqmfx" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.201959 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-db-sync-config-data\") pod \"cinder-db-sync-xqmfx\" (UID: \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\") " pod="openstack/cinder-db-sync-xqmfx" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.204030 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-scripts\") pod \"cinder-db-sync-xqmfx\" (UID: \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\") " pod="openstack/cinder-db-sync-xqmfx" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.204323 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-config-data\") pod \"cinder-db-sync-xqmfx\" (UID: \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\") " pod="openstack/cinder-db-sync-xqmfx" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.216488 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7p95\" (UniqueName: \"kubernetes.io/projected/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-kube-api-access-k7p95\") pod \"cinder-db-sync-xqmfx\" (UID: \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\") " pod="openstack/cinder-db-sync-xqmfx" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.250883 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6679b846c9-4jxzp"] Oct 11 07:55:53 crc kubenswrapper[5016]: W1011 07:55:53.252071 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4acf875d_ca40_47ff_a2e9_cdf09c447232.slice/crio-a7682a86a7d767e73eb1d0f63ca927f0fc82b8a33f9fca28fb269d8c33dac32e WatchSource:0}: Error finding container a7682a86a7d767e73eb1d0f63ca927f0fc82b8a33f9fca28fb269d8c33dac32e: Status 404 returned error can't find the container with id a7682a86a7d767e73eb1d0f63ca927f0fc82b8a33f9fca28fb269d8c33dac32e Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.304914 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-92qrj"] Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.309628 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-92qrj" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.317883 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.318110 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-gz92h" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.323517 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-92qrj"] Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.323541 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-xqmfx" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.366252 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-w78ms"] Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.436814 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-rfhnk"] Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.438123 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-rfhnk" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.440603 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-g2sfr" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.440828 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.441005 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.452995 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-rfhnk"] Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.491841 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b99bccc6c-868d2"] Oct 11 07:55:53 crc kubenswrapper[5016]: W1011 07:55:53.495245 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod088fb899_923b_4215_9d9a_22bef9a6891b.slice/crio-df4305c8158e526982c43dc85018d37c1359d884b26c79b18a3c975f9c0f953a WatchSource:0}: Error finding container df4305c8158e526982c43dc85018d37c1359d884b26c79b18a3c975f9c0f953a: Status 404 returned error can't find the container with id df4305c8158e526982c43dc85018d37c1359d884b26c79b18a3c975f9c0f953a Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.501404 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d426ddd3-5eae-4816-a141-32b614642d39-db-sync-config-data\") pod \"barbican-db-sync-92qrj\" (UID: \"d426ddd3-5eae-4816-a141-32b614642d39\") " pod="openstack/barbican-db-sync-92qrj" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.501501 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d426ddd3-5eae-4816-a141-32b614642d39-combined-ca-bundle\") pod \"barbican-db-sync-92qrj\" (UID: \"d426ddd3-5eae-4816-a141-32b614642d39\") " pod="openstack/barbican-db-sync-92qrj" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.501578 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfpf4\" (UniqueName: \"kubernetes.io/projected/d426ddd3-5eae-4816-a141-32b614642d39-kube-api-access-qfpf4\") pod \"barbican-db-sync-92qrj\" (UID: \"d426ddd3-5eae-4816-a141-32b614642d39\") " pod="openstack/barbican-db-sync-92qrj" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.604086 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/492cebf0-6a35-4ce7-8c85-2298fd8ae390-config\") pod \"neutron-db-sync-rfhnk\" (UID: \"492cebf0-6a35-4ce7-8c85-2298fd8ae390\") " pod="openstack/neutron-db-sync-rfhnk" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.604128 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cc6x\" (UniqueName: \"kubernetes.io/projected/492cebf0-6a35-4ce7-8c85-2298fd8ae390-kube-api-access-6cc6x\") pod \"neutron-db-sync-rfhnk\" (UID: \"492cebf0-6a35-4ce7-8c85-2298fd8ae390\") " pod="openstack/neutron-db-sync-rfhnk" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.604165 5016 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492cebf0-6a35-4ce7-8c85-2298fd8ae390-combined-ca-bundle\") pod \"neutron-db-sync-rfhnk\" (UID: \"492cebf0-6a35-4ce7-8c85-2298fd8ae390\") " pod="openstack/neutron-db-sync-rfhnk" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.604279 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d426ddd3-5eae-4816-a141-32b614642d39-db-sync-config-data\") pod \"barbican-db-sync-92qrj\" (UID: \"d426ddd3-5eae-4816-a141-32b614642d39\") " pod="openstack/barbican-db-sync-92qrj" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.604349 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d426ddd3-5eae-4816-a141-32b614642d39-combined-ca-bundle\") pod \"barbican-db-sync-92qrj\" (UID: \"d426ddd3-5eae-4816-a141-32b614642d39\") " pod="openstack/barbican-db-sync-92qrj" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.604378 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfpf4\" (UniqueName: \"kubernetes.io/projected/d426ddd3-5eae-4816-a141-32b614642d39-kube-api-access-qfpf4\") pod \"barbican-db-sync-92qrj\" (UID: \"d426ddd3-5eae-4816-a141-32b614642d39\") " pod="openstack/barbican-db-sync-92qrj" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.609370 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d426ddd3-5eae-4816-a141-32b614642d39-combined-ca-bundle\") pod \"barbican-db-sync-92qrj\" (UID: \"d426ddd3-5eae-4816-a141-32b614642d39\") " pod="openstack/barbican-db-sync-92qrj" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.609815 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d426ddd3-5eae-4816-a141-32b614642d39-db-sync-config-data\") pod \"barbican-db-sync-92qrj\" (UID: \"d426ddd3-5eae-4816-a141-32b614642d39\") " pod="openstack/barbican-db-sync-92qrj" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.640254 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfpf4\" (UniqueName: \"kubernetes.io/projected/d426ddd3-5eae-4816-a141-32b614642d39-kube-api-access-qfpf4\") pod \"barbican-db-sync-92qrj\" (UID: \"d426ddd3-5eae-4816-a141-32b614642d39\") " pod="openstack/barbican-db-sync-92qrj" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.645044 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-92qrj" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.682666 5016 generic.go:334] "Generic (PLEG): container finished" podID="f60aeb37-a13d-4456-892c-492359750fc4" containerID="b992d70b5d5f77b324f6facd44db5b8c17cc5d26f4b3bde0bf8fe69b63712b0e" exitCode=0 Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.682730 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bcfd764f-k47q8" event={"ID":"f60aeb37-a13d-4456-892c-492359750fc4","Type":"ContainerDied","Data":"b992d70b5d5f77b324f6facd44db5b8c17cc5d26f4b3bde0bf8fe69b63712b0e"} Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.682761 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bcfd764f-k47q8" event={"ID":"f60aeb37-a13d-4456-892c-492359750fc4","Type":"ContainerStarted","Data":"e6222a2c06fad2dd37205df1ad69877c3d89be2e110a0a7737e063b003331114"} Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.684478 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"353d22c0-bfdb-4599-a97c-9000eda08e3d","Type":"ContainerStarted","Data":"2be896df43fc723fb43eea0f267bd5b5734d2712aed2bedcd96431d28354fd0a"} Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.694811 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w78ms" event={"ID":"b78f643d-d3c2-4cf1-8bb3-ee749e569273","Type":"ContainerStarted","Data":"401a198e5c06d9f5f32c01c71240cb433806691e164ab67553204b262d03223d"} Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.697173 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bfdfffbc7-ft2dc" event={"ID":"1ec7360b-0d93-4691-aaad-9f3994cc00d7","Type":"ContainerStarted","Data":"fb6ed142ad962d5a2d040b1db7fb659a0fa4f928e5eb68bb3ea8d88ee62eeb8f"} Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.699310 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6679b846c9-4jxzp" event={"ID":"4acf875d-ca40-47ff-a2e9-cdf09c447232","Type":"ContainerStarted","Data":"a7682a86a7d767e73eb1d0f63ca927f0fc82b8a33f9fca28fb269d8c33dac32e"} Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.706623 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/492cebf0-6a35-4ce7-8c85-2298fd8ae390-config\") pod \"neutron-db-sync-rfhnk\" (UID: \"492cebf0-6a35-4ce7-8c85-2298fd8ae390\") " pod="openstack/neutron-db-sync-rfhnk" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.706687 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cc6x\" (UniqueName: \"kubernetes.io/projected/492cebf0-6a35-4ce7-8c85-2298fd8ae390-kube-api-access-6cc6x\") pod \"neutron-db-sync-rfhnk\" (UID: \"492cebf0-6a35-4ce7-8c85-2298fd8ae390\") " pod="openstack/neutron-db-sync-rfhnk" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.706714 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492cebf0-6a35-4ce7-8c85-2298fd8ae390-combined-ca-bundle\") pod \"neutron-db-sync-rfhnk\" (UID: \"492cebf0-6a35-4ce7-8c85-2298fd8ae390\") " pod="openstack/neutron-db-sync-rfhnk" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.706774 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74b7749bc7-g72pk" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.716974 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h6l55" event={"ID":"aea4901f-25a3-4b34-bfe0-a2885011e23d","Type":"ContainerStarted","Data":"ab81df700d429fccd65853a35c8ef3e51e5a83e1ea88d2a2880ed7c7c1be3b95"} Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.717022 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h6l55" event={"ID":"aea4901f-25a3-4b34-bfe0-a2885011e23d","Type":"ContainerStarted","Data":"2689d9c97faef0b31f37117c51b83556803f49826de3fe232c0e4f2ee9bb2bbe"} Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.719751 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/492cebf0-6a35-4ce7-8c85-2298fd8ae390-config\") pod \"neutron-db-sync-rfhnk\" (UID: \"492cebf0-6a35-4ce7-8c85-2298fd8ae390\") " pod="openstack/neutron-db-sync-rfhnk" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.722959 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b99bccc6c-868d2" event={"ID":"088fb899-923b-4215-9d9a-22bef9a6891b","Type":"ContainerStarted","Data":"df4305c8158e526982c43dc85018d37c1359d884b26c79b18a3c975f9c0f953a"} Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.725416 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492cebf0-6a35-4ce7-8c85-2298fd8ae390-combined-ca-bundle\") pod \"neutron-db-sync-rfhnk\" (UID: \"492cebf0-6a35-4ce7-8c85-2298fd8ae390\") " pod="openstack/neutron-db-sync-rfhnk" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.726454 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cc6x\" (UniqueName: \"kubernetes.io/projected/492cebf0-6a35-4ce7-8c85-2298fd8ae390-kube-api-access-6cc6x\") pod \"neutron-db-sync-rfhnk\" (UID: \"492cebf0-6a35-4ce7-8c85-2298fd8ae390\") " pod="openstack/neutron-db-sync-rfhnk" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.741757 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74b7749bc7-g72pk"] Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.750144 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74b7749bc7-g72pk"] Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.752359 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-h6l55" podStartSLOduration=2.752339928 podStartE2EDuration="2.752339928s" podCreationTimestamp="2025-10-11 07:55:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:55:53.74904774 +0000 UTC m=+941.649503686" watchObservedRunningTime="2025-10-11 07:55:53.752339928 +0000 UTC m=+941.652795874" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.756538 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-rfhnk" Oct 11 07:55:53 crc kubenswrapper[5016]: I1011 07:55:53.928057 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-xqmfx"] Oct 11 07:55:53 crc kubenswrapper[5016]: W1011 07:55:53.942416 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ebaa0ef_dce1_4ff4_a51c_69435ca86699.slice/crio-0c6780c75657dfff6550f3d876c78c64357afd709f394e482ca5615edb8cf69c WatchSource:0}: Error finding container 0c6780c75657dfff6550f3d876c78c64357afd709f394e482ca5615edb8cf69c: Status 404 returned error can't find the container with id 0c6780c75657dfff6550f3d876c78c64357afd709f394e482ca5615edb8cf69c Oct 11 07:55:54 crc kubenswrapper[5016]: I1011 07:55:54.077182 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67bcfd764f-k47q8" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.228826 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f60aeb37-a13d-4456-892c-492359750fc4-dns-svc\") pod \"f60aeb37-a13d-4456-892c-492359750fc4\" (UID: \"f60aeb37-a13d-4456-892c-492359750fc4\") " Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.228984 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f60aeb37-a13d-4456-892c-492359750fc4-ovsdbserver-nb\") pod \"f60aeb37-a13d-4456-892c-492359750fc4\" (UID: \"f60aeb37-a13d-4456-892c-492359750fc4\") " Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.229040 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f60aeb37-a13d-4456-892c-492359750fc4-ovsdbserver-sb\") pod \"f60aeb37-a13d-4456-892c-492359750fc4\" (UID: \"f60aeb37-a13d-4456-892c-492359750fc4\") " Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.229097 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f60aeb37-a13d-4456-892c-492359750fc4-config\") pod \"f60aeb37-a13d-4456-892c-492359750fc4\" (UID: \"f60aeb37-a13d-4456-892c-492359750fc4\") " Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.229190 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sw8c\" (UniqueName: \"kubernetes.io/projected/f60aeb37-a13d-4456-892c-492359750fc4-kube-api-access-6sw8c\") pod \"f60aeb37-a13d-4456-892c-492359750fc4\" (UID: \"f60aeb37-a13d-4456-892c-492359750fc4\") " Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.267975 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f60aeb37-a13d-4456-892c-492359750fc4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f60aeb37-a13d-4456-892c-492359750fc4" (UID: "f60aeb37-a13d-4456-892c-492359750fc4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.273387 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f60aeb37-a13d-4456-892c-492359750fc4-config" (OuterVolumeSpecName: "config") pod "f60aeb37-a13d-4456-892c-492359750fc4" (UID: "f60aeb37-a13d-4456-892c-492359750fc4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.273548 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f60aeb37-a13d-4456-892c-492359750fc4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f60aeb37-a13d-4456-892c-492359750fc4" (UID: "f60aeb37-a13d-4456-892c-492359750fc4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.275340 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f60aeb37-a13d-4456-892c-492359750fc4-kube-api-access-6sw8c" (OuterVolumeSpecName: "kube-api-access-6sw8c") pod "f60aeb37-a13d-4456-892c-492359750fc4" (UID: "f60aeb37-a13d-4456-892c-492359750fc4"). InnerVolumeSpecName "kube-api-access-6sw8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.296162 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f60aeb37-a13d-4456-892c-492359750fc4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f60aeb37-a13d-4456-892c-492359750fc4" (UID: "f60aeb37-a13d-4456-892c-492359750fc4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.297975 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-92qrj"] Oct 11 07:55:55 crc kubenswrapper[5016]: W1011 07:55:54.317919 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd426ddd3_5eae_4816_a141_32b614642d39.slice/crio-e9522503a62238dafbf67087e083926c5291f566a421db4807f33c0d26e7f91c WatchSource:0}: Error finding container e9522503a62238dafbf67087e083926c5291f566a421db4807f33c0d26e7f91c: Status 404 returned error can't find the container with id e9522503a62238dafbf67087e083926c5291f566a421db4807f33c0d26e7f91c Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.330884 5016 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f60aeb37-a13d-4456-892c-492359750fc4-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.330913 5016 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f60aeb37-a13d-4456-892c-492359750fc4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.330925 5016 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f60aeb37-a13d-4456-892c-492359750fc4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.330935 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f60aeb37-a13d-4456-892c-492359750fc4-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.330946 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sw8c\" (UniqueName: \"kubernetes.io/projected/f60aeb37-a13d-4456-892c-492359750fc4-kube-api-access-6sw8c\") on node \"crc\" DevicePath \"\"" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.364736 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/horizon-7bfdfffbc7-ft2dc"] Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.419628 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.440001 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-76ff89bf89-rvq2g"] Oct 11 07:55:55 crc kubenswrapper[5016]: E1011 07:55:54.441039 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f60aeb37-a13d-4456-892c-492359750fc4" containerName="init" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.441059 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="f60aeb37-a13d-4456-892c-492359750fc4" containerName="init" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.441582 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="f60aeb37-a13d-4456-892c-492359750fc4" containerName="init" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.470689 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76ff89bf89-rvq2g" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.488388 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-rfhnk"] Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.521915 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-76ff89bf89-rvq2g"] Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.645233 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-config-data\") pod \"horizon-76ff89bf89-rvq2g\" (UID: \"cf835037-4e3d-4b3f-80bc-7629cfd8da5c\") " pod="openstack/horizon-76ff89bf89-rvq2g" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.645383 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-horizon-secret-key\") pod \"horizon-76ff89bf89-rvq2g\" (UID: \"cf835037-4e3d-4b3f-80bc-7629cfd8da5c\") " pod="openstack/horizon-76ff89bf89-rvq2g" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.645437 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-logs\") pod \"horizon-76ff89bf89-rvq2g\" (UID: \"cf835037-4e3d-4b3f-80bc-7629cfd8da5c\") " pod="openstack/horizon-76ff89bf89-rvq2g" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.645455 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-scripts\") pod \"horizon-76ff89bf89-rvq2g\" (UID: \"cf835037-4e3d-4b3f-80bc-7629cfd8da5c\") " pod="openstack/horizon-76ff89bf89-rvq2g" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.645548 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csbkt\" (UniqueName: \"kubernetes.io/projected/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-kube-api-access-csbkt\") pod \"horizon-76ff89bf89-rvq2g\" (UID: \"cf835037-4e3d-4b3f-80bc-7629cfd8da5c\") " pod="openstack/horizon-76ff89bf89-rvq2g" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.736952 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xqmfx" 
event={"ID":"8ebaa0ef-dce1-4ff4-a51c-69435ca86699","Type":"ContainerStarted","Data":"0c6780c75657dfff6550f3d876c78c64357afd709f394e482ca5615edb8cf69c"} Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.740556 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-92qrj" event={"ID":"d426ddd3-5eae-4816-a141-32b614642d39","Type":"ContainerStarted","Data":"e9522503a62238dafbf67087e083926c5291f566a421db4807f33c0d26e7f91c"} Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.745625 5016 generic.go:334] "Generic (PLEG): container finished" podID="088fb899-923b-4215-9d9a-22bef9a6891b" containerID="1b3c52a99d399be20d25ab80f71bcc70e4fcc5171cf6249741193c27bb362312" exitCode=0 Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.746921 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-horizon-secret-key\") pod \"horizon-76ff89bf89-rvq2g\" (UID: \"cf835037-4e3d-4b3f-80bc-7629cfd8da5c\") " pod="openstack/horizon-76ff89bf89-rvq2g" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.746958 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-logs\") pod \"horizon-76ff89bf89-rvq2g\" (UID: \"cf835037-4e3d-4b3f-80bc-7629cfd8da5c\") " pod="openstack/horizon-76ff89bf89-rvq2g" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.747054 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-scripts\") pod \"horizon-76ff89bf89-rvq2g\" (UID: \"cf835037-4e3d-4b3f-80bc-7629cfd8da5c\") " pod="openstack/horizon-76ff89bf89-rvq2g" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.747086 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csbkt\" (UniqueName: \"kubernetes.io/projected/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-kube-api-access-csbkt\") pod \"horizon-76ff89bf89-rvq2g\" (UID: \"cf835037-4e3d-4b3f-80bc-7629cfd8da5c\") " pod="openstack/horizon-76ff89bf89-rvq2g" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.747289 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-config-data\") pod \"horizon-76ff89bf89-rvq2g\" (UID: \"cf835037-4e3d-4b3f-80bc-7629cfd8da5c\") " pod="openstack/horizon-76ff89bf89-rvq2g" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.748587 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b99bccc6c-868d2" event={"ID":"088fb899-923b-4215-9d9a-22bef9a6891b","Type":"ContainerDied","Data":"1b3c52a99d399be20d25ab80f71bcc70e4fcc5171cf6249741193c27bb362312"} Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.748759 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-scripts\") pod \"horizon-76ff89bf89-rvq2g\" (UID: \"cf835037-4e3d-4b3f-80bc-7629cfd8da5c\") " pod="openstack/horizon-76ff89bf89-rvq2g" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.748881 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-logs\") pod \"horizon-76ff89bf89-rvq2g\" (UID: 
\"cf835037-4e3d-4b3f-80bc-7629cfd8da5c\") " pod="openstack/horizon-76ff89bf89-rvq2g" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.749951 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-config-data\") pod \"horizon-76ff89bf89-rvq2g\" (UID: \"cf835037-4e3d-4b3f-80bc-7629cfd8da5c\") " pod="openstack/horizon-76ff89bf89-rvq2g" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.758782 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bcfd764f-k47q8" event={"ID":"f60aeb37-a13d-4456-892c-492359750fc4","Type":"ContainerDied","Data":"e6222a2c06fad2dd37205df1ad69877c3d89be2e110a0a7737e063b003331114"} Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.758841 5016 scope.go:117] "RemoveContainer" containerID="b992d70b5d5f77b324f6facd44db5b8c17cc5d26f4b3bde0bf8fe69b63712b0e" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.760670 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67bcfd764f-k47q8" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.762072 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-horizon-secret-key\") pod \"horizon-76ff89bf89-rvq2g\" (UID: \"cf835037-4e3d-4b3f-80bc-7629cfd8da5c\") " pod="openstack/horizon-76ff89bf89-rvq2g" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.772788 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csbkt\" (UniqueName: \"kubernetes.io/projected/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-kube-api-access-csbkt\") pod \"horizon-76ff89bf89-rvq2g\" (UID: \"cf835037-4e3d-4b3f-80bc-7629cfd8da5c\") " pod="openstack/horizon-76ff89bf89-rvq2g" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.780463 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rfhnk" event={"ID":"492cebf0-6a35-4ce7-8c85-2298fd8ae390","Type":"ContainerStarted","Data":"35b602e1aea260977edbecf212f1396d57d7d31fa3e39bf7613fe3d0bad17e0e"} Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.824087 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-76ff89bf89-rvq2g" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.899726 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67bcfd764f-k47q8"] Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:54.901146 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67bcfd764f-k47q8"] Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:55.151985 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f60aeb37-a13d-4456-892c-492359750fc4" path="/var/lib/kubelet/pods/f60aeb37-a13d-4456-892c-492359750fc4/volumes" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:55.153025 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fecd9648-bc55-4ee0-bc55-0044cc757300" path="/var/lib/kubelet/pods/fecd9648-bc55-4ee0-bc55-0044cc757300/volumes" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:55.481968 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-76ff89bf89-rvq2g"] Oct 11 07:55:55 crc kubenswrapper[5016]: W1011 07:55:55.501600 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf835037_4e3d_4b3f_80bc_7629cfd8da5c.slice/crio-e52a1b59ce7bfe2f7dbaa10ed8f663f4bfcdeef376816a96a4cd9db0e90a6dd7 WatchSource:0}: Error finding container e52a1b59ce7bfe2f7dbaa10ed8f663f4bfcdeef376816a96a4cd9db0e90a6dd7: Status 404 returned error can't find the container with id e52a1b59ce7bfe2f7dbaa10ed8f663f4bfcdeef376816a96a4cd9db0e90a6dd7 Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:55.799514 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rfhnk" event={"ID":"492cebf0-6a35-4ce7-8c85-2298fd8ae390","Type":"ContainerStarted","Data":"4e292029c9be340a8aa5bfb997745320e4060a28e6ade7a360225c2ba9aa8f75"} Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:55.805205 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76ff89bf89-rvq2g" event={"ID":"cf835037-4e3d-4b3f-80bc-7629cfd8da5c","Type":"ContainerStarted","Data":"e52a1b59ce7bfe2f7dbaa10ed8f663f4bfcdeef376816a96a4cd9db0e90a6dd7"} Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:55.816727 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b99bccc6c-868d2" event={"ID":"088fb899-923b-4215-9d9a-22bef9a6891b","Type":"ContainerStarted","Data":"a0256d917e8a0866c9f26abee6b9dbb368b46db21791e06f3e766df0b183095b"} Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:55.818011 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b99bccc6c-868d2" Oct 11 07:55:55 crc kubenswrapper[5016]: I1011 07:55:55.830590 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-rfhnk" podStartSLOduration=2.830573854 podStartE2EDuration="2.830573854s" podCreationTimestamp="2025-10-11 07:55:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:55:55.815885721 +0000 UTC m=+943.716341677" watchObservedRunningTime="2025-10-11 07:55:55.830573854 +0000 UTC m=+943.731029800" Oct 11 07:55:56 crc kubenswrapper[5016]: I1011 07:55:56.830300 5016 generic.go:334] "Generic (PLEG): container finished" podID="aea4901f-25a3-4b34-bfe0-a2885011e23d" containerID="ab81df700d429fccd65853a35c8ef3e51e5a83e1ea88d2a2880ed7c7c1be3b95" exitCode=0 Oct 11 07:55:56 crc 
kubenswrapper[5016]: I1011 07:55:56.830385 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h6l55" event={"ID":"aea4901f-25a3-4b34-bfe0-a2885011e23d","Type":"ContainerDied","Data":"ab81df700d429fccd65853a35c8ef3e51e5a83e1ea88d2a2880ed7c7c1be3b95"} Oct 11 07:55:56 crc kubenswrapper[5016]: I1011 07:55:56.867346 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b99bccc6c-868d2" podStartSLOduration=4.86732842 podStartE2EDuration="4.86732842s" podCreationTimestamp="2025-10-11 07:55:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:55:55.844063986 +0000 UTC m=+943.744519932" watchObservedRunningTime="2025-10-11 07:55:56.86732842 +0000 UTC m=+944.767784366" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.238716 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6679b846c9-4jxzp"] Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.302211 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-df49866-g5nkl"] Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.309051 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-df49866-g5nkl" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.314318 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.315306 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-df49866-g5nkl"] Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.347804 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-76ff89bf89-rvq2g"] Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.370506 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-65987df486-lvrh6"] Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.372258 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-65987df486-lvrh6" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.400596 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65987df486-lvrh6"] Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.402114 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/771aebc7-25b0-45ef-bbd4-ed6c367b998b-logs\") pod \"horizon-df49866-g5nkl\" (UID: \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\") " pod="openstack/horizon-df49866-g5nkl" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.402405 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/771aebc7-25b0-45ef-bbd4-ed6c367b998b-horizon-secret-key\") pod \"horizon-df49866-g5nkl\" (UID: \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\") " pod="openstack/horizon-df49866-g5nkl" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.402551 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/771aebc7-25b0-45ef-bbd4-ed6c367b998b-config-data\") pod \"horizon-df49866-g5nkl\" (UID: \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\") " pod="openstack/horizon-df49866-g5nkl" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.402842 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/771aebc7-25b0-45ef-bbd4-ed6c367b998b-combined-ca-bundle\") pod \"horizon-df49866-g5nkl\" (UID: \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\") " pod="openstack/horizon-df49866-g5nkl" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.402963 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zrhr\" (UniqueName: \"kubernetes.io/projected/771aebc7-25b0-45ef-bbd4-ed6c367b998b-kube-api-access-6zrhr\") pod \"horizon-df49866-g5nkl\" (UID: \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\") " pod="openstack/horizon-df49866-g5nkl" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.403059 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/771aebc7-25b0-45ef-bbd4-ed6c367b998b-horizon-tls-certs\") pod \"horizon-df49866-g5nkl\" (UID: \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\") " pod="openstack/horizon-df49866-g5nkl" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.403156 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/771aebc7-25b0-45ef-bbd4-ed6c367b998b-scripts\") pod \"horizon-df49866-g5nkl\" (UID: \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\") " pod="openstack/horizon-df49866-g5nkl" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.505089 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3e9db46-849a-4957-a6ff-5a05cb5c9744-horizon-tls-certs\") pod \"horizon-65987df486-lvrh6\" (UID: \"e3e9db46-849a-4957-a6ff-5a05cb5c9744\") " pod="openstack/horizon-65987df486-lvrh6" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.505147 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/e3e9db46-849a-4957-a6ff-5a05cb5c9744-config-data\") pod \"horizon-65987df486-lvrh6\" (UID: \"e3e9db46-849a-4957-a6ff-5a05cb5c9744\") " pod="openstack/horizon-65987df486-lvrh6" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.505172 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/771aebc7-25b0-45ef-bbd4-ed6c367b998b-logs\") pod \"horizon-df49866-g5nkl\" (UID: \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\") " pod="openstack/horizon-df49866-g5nkl" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.505288 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs5lt\" (UniqueName: \"kubernetes.io/projected/e3e9db46-849a-4957-a6ff-5a05cb5c9744-kube-api-access-zs5lt\") pod \"horizon-65987df486-lvrh6\" (UID: \"e3e9db46-849a-4957-a6ff-5a05cb5c9744\") " pod="openstack/horizon-65987df486-lvrh6" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.505399 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e9db46-849a-4957-a6ff-5a05cb5c9744-combined-ca-bundle\") pod \"horizon-65987df486-lvrh6\" (UID: \"e3e9db46-849a-4957-a6ff-5a05cb5c9744\") " pod="openstack/horizon-65987df486-lvrh6" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.505469 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/771aebc7-25b0-45ef-bbd4-ed6c367b998b-logs\") pod \"horizon-df49866-g5nkl\" (UID: \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\") " pod="openstack/horizon-df49866-g5nkl" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.505732 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e3e9db46-849a-4957-a6ff-5a05cb5c9744-horizon-secret-key\") pod \"horizon-65987df486-lvrh6\" (UID: \"e3e9db46-849a-4957-a6ff-5a05cb5c9744\") " pod="openstack/horizon-65987df486-lvrh6" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.505832 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3e9db46-849a-4957-a6ff-5a05cb5c9744-scripts\") pod \"horizon-65987df486-lvrh6\" (UID: \"e3e9db46-849a-4957-a6ff-5a05cb5c9744\") " pod="openstack/horizon-65987df486-lvrh6" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.505865 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/771aebc7-25b0-45ef-bbd4-ed6c367b998b-horizon-secret-key\") pod \"horizon-df49866-g5nkl\" (UID: \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\") " pod="openstack/horizon-df49866-g5nkl" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.505894 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/771aebc7-25b0-45ef-bbd4-ed6c367b998b-config-data\") pod \"horizon-df49866-g5nkl\" (UID: \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\") " pod="openstack/horizon-df49866-g5nkl" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.505920 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3e9db46-849a-4957-a6ff-5a05cb5c9744-logs\") pod 
\"horizon-65987df486-lvrh6\" (UID: \"e3e9db46-849a-4957-a6ff-5a05cb5c9744\") " pod="openstack/horizon-65987df486-lvrh6" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.505938 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/771aebc7-25b0-45ef-bbd4-ed6c367b998b-combined-ca-bundle\") pod \"horizon-df49866-g5nkl\" (UID: \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\") " pod="openstack/horizon-df49866-g5nkl" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.505963 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zrhr\" (UniqueName: \"kubernetes.io/projected/771aebc7-25b0-45ef-bbd4-ed6c367b998b-kube-api-access-6zrhr\") pod \"horizon-df49866-g5nkl\" (UID: \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\") " pod="openstack/horizon-df49866-g5nkl" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.505992 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/771aebc7-25b0-45ef-bbd4-ed6c367b998b-horizon-tls-certs\") pod \"horizon-df49866-g5nkl\" (UID: \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\") " pod="openstack/horizon-df49866-g5nkl" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.506020 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/771aebc7-25b0-45ef-bbd4-ed6c367b998b-scripts\") pod \"horizon-df49866-g5nkl\" (UID: \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\") " pod="openstack/horizon-df49866-g5nkl" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.506757 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/771aebc7-25b0-45ef-bbd4-ed6c367b998b-scripts\") pod \"horizon-df49866-g5nkl\" (UID: \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\") " pod="openstack/horizon-df49866-g5nkl" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.509417 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/771aebc7-25b0-45ef-bbd4-ed6c367b998b-config-data\") pod \"horizon-df49866-g5nkl\" (UID: \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\") " pod="openstack/horizon-df49866-g5nkl" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.515513 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/771aebc7-25b0-45ef-bbd4-ed6c367b998b-horizon-secret-key\") pod \"horizon-df49866-g5nkl\" (UID: \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\") " pod="openstack/horizon-df49866-g5nkl" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.515531 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/771aebc7-25b0-45ef-bbd4-ed6c367b998b-combined-ca-bundle\") pod \"horizon-df49866-g5nkl\" (UID: \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\") " pod="openstack/horizon-df49866-g5nkl" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.519140 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/771aebc7-25b0-45ef-bbd4-ed6c367b998b-horizon-tls-certs\") pod \"horizon-df49866-g5nkl\" (UID: \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\") " pod="openstack/horizon-df49866-g5nkl" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.551519 5016 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6zrhr\" (UniqueName: \"kubernetes.io/projected/771aebc7-25b0-45ef-bbd4-ed6c367b998b-kube-api-access-6zrhr\") pod \"horizon-df49866-g5nkl\" (UID: \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\") " pod="openstack/horizon-df49866-g5nkl" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.608036 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3e9db46-849a-4957-a6ff-5a05cb5c9744-horizon-tls-certs\") pod \"horizon-65987df486-lvrh6\" (UID: \"e3e9db46-849a-4957-a6ff-5a05cb5c9744\") " pod="openstack/horizon-65987df486-lvrh6" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.608113 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3e9db46-849a-4957-a6ff-5a05cb5c9744-config-data\") pod \"horizon-65987df486-lvrh6\" (UID: \"e3e9db46-849a-4957-a6ff-5a05cb5c9744\") " pod="openstack/horizon-65987df486-lvrh6" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.608151 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs5lt\" (UniqueName: \"kubernetes.io/projected/e3e9db46-849a-4957-a6ff-5a05cb5c9744-kube-api-access-zs5lt\") pod \"horizon-65987df486-lvrh6\" (UID: \"e3e9db46-849a-4957-a6ff-5a05cb5c9744\") " pod="openstack/horizon-65987df486-lvrh6" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.608196 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e9db46-849a-4957-a6ff-5a05cb5c9744-combined-ca-bundle\") pod \"horizon-65987df486-lvrh6\" (UID: \"e3e9db46-849a-4957-a6ff-5a05cb5c9744\") " pod="openstack/horizon-65987df486-lvrh6" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.608260 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e3e9db46-849a-4957-a6ff-5a05cb5c9744-horizon-secret-key\") pod \"horizon-65987df486-lvrh6\" (UID: \"e3e9db46-849a-4957-a6ff-5a05cb5c9744\") " pod="openstack/horizon-65987df486-lvrh6" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.608289 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3e9db46-849a-4957-a6ff-5a05cb5c9744-scripts\") pod \"horizon-65987df486-lvrh6\" (UID: \"e3e9db46-849a-4957-a6ff-5a05cb5c9744\") " pod="openstack/horizon-65987df486-lvrh6" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.608339 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3e9db46-849a-4957-a6ff-5a05cb5c9744-logs\") pod \"horizon-65987df486-lvrh6\" (UID: \"e3e9db46-849a-4957-a6ff-5a05cb5c9744\") " pod="openstack/horizon-65987df486-lvrh6" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.608806 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3e9db46-849a-4957-a6ff-5a05cb5c9744-logs\") pod \"horizon-65987df486-lvrh6\" (UID: \"e3e9db46-849a-4957-a6ff-5a05cb5c9744\") " pod="openstack/horizon-65987df486-lvrh6" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.609570 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3e9db46-849a-4957-a6ff-5a05cb5c9744-scripts\") pod 
\"horizon-65987df486-lvrh6\" (UID: \"e3e9db46-849a-4957-a6ff-5a05cb5c9744\") " pod="openstack/horizon-65987df486-lvrh6" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.611386 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3e9db46-849a-4957-a6ff-5a05cb5c9744-config-data\") pod \"horizon-65987df486-lvrh6\" (UID: \"e3e9db46-849a-4957-a6ff-5a05cb5c9744\") " pod="openstack/horizon-65987df486-lvrh6" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.611615 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e3e9db46-849a-4957-a6ff-5a05cb5c9744-horizon-secret-key\") pod \"horizon-65987df486-lvrh6\" (UID: \"e3e9db46-849a-4957-a6ff-5a05cb5c9744\") " pod="openstack/horizon-65987df486-lvrh6" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.611789 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3e9db46-849a-4957-a6ff-5a05cb5c9744-horizon-tls-certs\") pod \"horizon-65987df486-lvrh6\" (UID: \"e3e9db46-849a-4957-a6ff-5a05cb5c9744\") " pod="openstack/horizon-65987df486-lvrh6" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.613162 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e9db46-849a-4957-a6ff-5a05cb5c9744-combined-ca-bundle\") pod \"horizon-65987df486-lvrh6\" (UID: \"e3e9db46-849a-4957-a6ff-5a05cb5c9744\") " pod="openstack/horizon-65987df486-lvrh6" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.630404 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs5lt\" (UniqueName: \"kubernetes.io/projected/e3e9db46-849a-4957-a6ff-5a05cb5c9744-kube-api-access-zs5lt\") pod \"horizon-65987df486-lvrh6\" (UID: \"e3e9db46-849a-4957-a6ff-5a05cb5c9744\") " pod="openstack/horizon-65987df486-lvrh6" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.664236 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-df49866-g5nkl" Oct 11 07:56:01 crc kubenswrapper[5016]: I1011 07:56:01.700073 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-65987df486-lvrh6" Oct 11 07:56:02 crc kubenswrapper[5016]: I1011 07:56:02.880370 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b99bccc6c-868d2" Oct 11 07:56:02 crc kubenswrapper[5016]: I1011 07:56:02.952354 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc9d58d7-pp672"] Oct 11 07:56:02 crc kubenswrapper[5016]: I1011 07:56:02.952852 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-dc9d58d7-pp672" podUID="c213bf50-5935-48bb-be54-2e1396bc6e06" containerName="dnsmasq-dns" containerID="cri-o://771deb6c67292b4198ad4ea96f2b0f16331d4e77e89e4cadcd7a8338abfe354f" gracePeriod=10 Oct 11 07:56:03 crc kubenswrapper[5016]: I1011 07:56:03.917390 5016 generic.go:334] "Generic (PLEG): container finished" podID="c213bf50-5935-48bb-be54-2e1396bc6e06" containerID="771deb6c67292b4198ad4ea96f2b0f16331d4e77e89e4cadcd7a8338abfe354f" exitCode=0 Oct 11 07:56:03 crc kubenswrapper[5016]: I1011 07:56:03.917435 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc9d58d7-pp672" event={"ID":"c213bf50-5935-48bb-be54-2e1396bc6e06","Type":"ContainerDied","Data":"771deb6c67292b4198ad4ea96f2b0f16331d4e77e89e4cadcd7a8338abfe354f"} Oct 11 07:56:10 crc kubenswrapper[5016]: I1011 07:56:10.724030 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-dc9d58d7-pp672" podUID="c213bf50-5935-48bb-be54-2e1396bc6e06" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Oct 11 07:56:15 crc kubenswrapper[5016]: I1011 07:56:15.724940 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-dc9d58d7-pp672" podUID="c213bf50-5935-48bb-be54-2e1396bc6e06" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Oct 11 07:56:20 crc kubenswrapper[5016]: I1011 07:56:20.726054 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-dc9d58d7-pp672" podUID="c213bf50-5935-48bb-be54-2e1396bc6e06" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Oct 11 07:56:20 crc kubenswrapper[5016]: I1011 07:56:20.726926 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-dc9d58d7-pp672" Oct 11 07:56:25 crc kubenswrapper[5016]: I1011 07:56:25.706150 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-h6l55" Oct 11 07:56:25 crc kubenswrapper[5016]: I1011 07:56:25.727417 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-dc9d58d7-pp672" podUID="c213bf50-5935-48bb-be54-2e1396bc6e06" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Oct 11 07:56:25 crc kubenswrapper[5016]: I1011 07:56:25.783051 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-config-data\") pod \"aea4901f-25a3-4b34-bfe0-a2885011e23d\" (UID: \"aea4901f-25a3-4b34-bfe0-a2885011e23d\") " Oct 11 07:56:25 crc kubenswrapper[5016]: I1011 07:56:25.783275 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwfbm\" (UniqueName: \"kubernetes.io/projected/aea4901f-25a3-4b34-bfe0-a2885011e23d-kube-api-access-bwfbm\") pod \"aea4901f-25a3-4b34-bfe0-a2885011e23d\" (UID: \"aea4901f-25a3-4b34-bfe0-a2885011e23d\") " Oct 11 07:56:25 crc kubenswrapper[5016]: I1011 07:56:25.783379 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-combined-ca-bundle\") pod \"aea4901f-25a3-4b34-bfe0-a2885011e23d\" (UID: \"aea4901f-25a3-4b34-bfe0-a2885011e23d\") " Oct 11 07:56:25 crc kubenswrapper[5016]: I1011 07:56:25.783422 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-fernet-keys\") pod \"aea4901f-25a3-4b34-bfe0-a2885011e23d\" (UID: \"aea4901f-25a3-4b34-bfe0-a2885011e23d\") " Oct 11 07:56:25 crc kubenswrapper[5016]: I1011 07:56:25.783486 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-credential-keys\") pod \"aea4901f-25a3-4b34-bfe0-a2885011e23d\" (UID: \"aea4901f-25a3-4b34-bfe0-a2885011e23d\") " Oct 11 07:56:25 crc kubenswrapper[5016]: I1011 07:56:25.783602 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-scripts\") pod \"aea4901f-25a3-4b34-bfe0-a2885011e23d\" (UID: \"aea4901f-25a3-4b34-bfe0-a2885011e23d\") " Oct 11 07:56:25 crc kubenswrapper[5016]: I1011 07:56:25.789424 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-scripts" (OuterVolumeSpecName: "scripts") pod "aea4901f-25a3-4b34-bfe0-a2885011e23d" (UID: "aea4901f-25a3-4b34-bfe0-a2885011e23d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:56:25 crc kubenswrapper[5016]: I1011 07:56:25.790248 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "aea4901f-25a3-4b34-bfe0-a2885011e23d" (UID: "aea4901f-25a3-4b34-bfe0-a2885011e23d"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:56:25 crc kubenswrapper[5016]: I1011 07:56:25.790297 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "aea4901f-25a3-4b34-bfe0-a2885011e23d" (UID: "aea4901f-25a3-4b34-bfe0-a2885011e23d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:56:25 crc kubenswrapper[5016]: I1011 07:56:25.807343 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aea4901f-25a3-4b34-bfe0-a2885011e23d-kube-api-access-bwfbm" (OuterVolumeSpecName: "kube-api-access-bwfbm") pod "aea4901f-25a3-4b34-bfe0-a2885011e23d" (UID: "aea4901f-25a3-4b34-bfe0-a2885011e23d"). InnerVolumeSpecName "kube-api-access-bwfbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:56:25 crc kubenswrapper[5016]: I1011 07:56:25.809055 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-config-data" (OuterVolumeSpecName: "config-data") pod "aea4901f-25a3-4b34-bfe0-a2885011e23d" (UID: "aea4901f-25a3-4b34-bfe0-a2885011e23d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:56:25 crc kubenswrapper[5016]: I1011 07:56:25.815939 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aea4901f-25a3-4b34-bfe0-a2885011e23d" (UID: "aea4901f-25a3-4b34-bfe0-a2885011e23d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:56:25 crc kubenswrapper[5016]: E1011 07:56:25.845825 5016 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon@sha256:03b4bb79b71d5ca7792d19c4c0ee08a5e5a407ad844c087305c42dd909ee7490" Oct 11 07:56:25 crc kubenswrapper[5016]: E1011 07:56:25.846010 5016 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon@sha256:03b4bb79b71d5ca7792d19c4c0ee08a5e5a407ad844c087305c42dd909ee7490,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n65fh578h5c8h64dh5f8h5c9h567hf8hfh649h5bch67h65bh5c8h57dh6bh564h644h9ch5d7hb4h64h56ch565h586h646h74h67dh688h67dh59dh666q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-csbkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-76ff89bf89-rvq2g_openstack(cf835037-4e3d-4b3f-80bc-7629cfd8da5c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 11 07:56:25 crc kubenswrapper[5016]: E1011 07:56:25.848672 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon@sha256:03b4bb79b71d5ca7792d19c4c0ee08a5e5a407ad844c087305c42dd909ee7490\\\"\"]" pod="openstack/horizon-76ff89bf89-rvq2g" podUID="cf835037-4e3d-4b3f-80bc-7629cfd8da5c" Oct 11 07:56:25 crc kubenswrapper[5016]: I1011 07:56:25.886726 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwfbm\" (UniqueName: \"kubernetes.io/projected/aea4901f-25a3-4b34-bfe0-a2885011e23d-kube-api-access-bwfbm\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:25 crc kubenswrapper[5016]: I1011 07:56:25.887223 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:25 crc kubenswrapper[5016]: I1011 07:56:25.887238 5016 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:25 crc kubenswrapper[5016]: I1011 07:56:25.887250 5016 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:25 crc kubenswrapper[5016]: I1011 07:56:25.887263 5016 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-scripts\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:25 crc kubenswrapper[5016]: I1011 07:56:25.887275 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aea4901f-25a3-4b34-bfe0-a2885011e23d-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:26 crc kubenswrapper[5016]: E1011 07:56:26.107325 5016 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon@sha256:03b4bb79b71d5ca7792d19c4c0ee08a5e5a407ad844c087305c42dd909ee7490" Oct 11 07:56:26 crc kubenswrapper[5016]: E1011 07:56:26.107462 5016 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon@sha256:03b4bb79b71d5ca7792d19c4c0ee08a5e5a407ad844c087305c42dd909ee7490,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n579h6chfbhbbh5b6h95h59bh554h659h675h56bhd7h95h54ch585h8dh54dh697h666h56bh7fhffh4h58ch5cbh68dh54bh6ch574h5dfh6fh646q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mmh8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7bfdfffbc7-ft2dc_openstack(1ec7360b-0d93-4691-aaad-9f3994cc00d7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 11 07:56:26 crc kubenswrapper[5016]: E1011 07:56:26.109472 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon@sha256:03b4bb79b71d5ca7792d19c4c0ee08a5e5a407ad844c087305c42dd909ee7490\\\"\"]" pod="openstack/horizon-7bfdfffbc7-ft2dc" podUID="1ec7360b-0d93-4691-aaad-9f3994cc00d7" Oct 11 07:56:26 crc kubenswrapper[5016]: I1011 07:56:26.112975 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h6l55" event={"ID":"aea4901f-25a3-4b34-bfe0-a2885011e23d","Type":"ContainerDied","Data":"2689d9c97faef0b31f37117c51b83556803f49826de3fe232c0e4f2ee9bb2bbe"} Oct 11 07:56:26 crc kubenswrapper[5016]: I1011 07:56:26.113040 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2689d9c97faef0b31f37117c51b83556803f49826de3fe232c0e4f2ee9bb2bbe" Oct 11 07:56:26 crc kubenswrapper[5016]: I1011 07:56:26.112999 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-h6l55" Oct 11 07:56:26 crc kubenswrapper[5016]: I1011 07:56:26.885717 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-h6l55"] Oct 11 07:56:26 crc kubenswrapper[5016]: I1011 07:56:26.891792 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-h6l55"] Oct 11 07:56:26 crc kubenswrapper[5016]: I1011 07:56:26.987400 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-nldfs"] Oct 11 07:56:26 crc kubenswrapper[5016]: E1011 07:56:26.987835 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aea4901f-25a3-4b34-bfe0-a2885011e23d" containerName="keystone-bootstrap" Oct 11 07:56:26 crc kubenswrapper[5016]: I1011 07:56:26.987856 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="aea4901f-25a3-4b34-bfe0-a2885011e23d" containerName="keystone-bootstrap" Oct 11 07:56:26 crc kubenswrapper[5016]: I1011 07:56:26.988027 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="aea4901f-25a3-4b34-bfe0-a2885011e23d" containerName="keystone-bootstrap" Oct 11 07:56:26 crc kubenswrapper[5016]: I1011 07:56:26.988622 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nldfs" Oct 11 07:56:26 crc kubenswrapper[5016]: I1011 07:56:26.990458 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 11 07:56:26 crc kubenswrapper[5016]: I1011 07:56:26.990709 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 11 07:56:26 crc kubenswrapper[5016]: I1011 07:56:26.990873 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 11 07:56:26 crc kubenswrapper[5016]: I1011 07:56:26.991350 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zk98n" Oct 11 07:56:27 crc kubenswrapper[5016]: I1011 07:56:26.997606 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nldfs"] Oct 11 07:56:27 crc kubenswrapper[5016]: I1011 07:56:27.004475 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-combined-ca-bundle\") pod \"keystone-bootstrap-nldfs\" (UID: \"823dcaff-824b-4313-93d8-91967861aeca\") " pod="openstack/keystone-bootstrap-nldfs" Oct 11 07:56:27 crc kubenswrapper[5016]: I1011 07:56:27.004524 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-fernet-keys\") pod \"keystone-bootstrap-nldfs\" (UID: \"823dcaff-824b-4313-93d8-91967861aeca\") " pod="openstack/keystone-bootstrap-nldfs" Oct 11 07:56:27 crc kubenswrapper[5016]: I1011 07:56:27.004549 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-credential-keys\") pod \"keystone-bootstrap-nldfs\" (UID: \"823dcaff-824b-4313-93d8-91967861aeca\") " pod="openstack/keystone-bootstrap-nldfs" Oct 11 07:56:27 crc kubenswrapper[5016]: I1011 07:56:27.004602 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-scripts\") pod \"keystone-bootstrap-nldfs\" (UID: \"823dcaff-824b-4313-93d8-91967861aeca\") " pod="openstack/keystone-bootstrap-nldfs" Oct 11 07:56:27 crc kubenswrapper[5016]: I1011 07:56:27.004642 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-config-data\") pod \"keystone-bootstrap-nldfs\" (UID: \"823dcaff-824b-4313-93d8-91967861aeca\") " pod="openstack/keystone-bootstrap-nldfs" Oct 11 07:56:27 crc kubenswrapper[5016]: I1011 07:56:27.004685 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmh6v\" (UniqueName: \"kubernetes.io/projected/823dcaff-824b-4313-93d8-91967861aeca-kube-api-access-pmh6v\") pod \"keystone-bootstrap-nldfs\" (UID: \"823dcaff-824b-4313-93d8-91967861aeca\") " pod="openstack/keystone-bootstrap-nldfs" Oct 11 07:56:27 crc kubenswrapper[5016]: I1011 07:56:27.105568 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-scripts\") pod \"keystone-bootstrap-nldfs\" (UID: 
\"823dcaff-824b-4313-93d8-91967861aeca\") " pod="openstack/keystone-bootstrap-nldfs" Oct 11 07:56:27 crc kubenswrapper[5016]: I1011 07:56:27.105634 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-config-data\") pod \"keystone-bootstrap-nldfs\" (UID: \"823dcaff-824b-4313-93d8-91967861aeca\") " pod="openstack/keystone-bootstrap-nldfs" Oct 11 07:56:27 crc kubenswrapper[5016]: I1011 07:56:27.105675 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmh6v\" (UniqueName: \"kubernetes.io/projected/823dcaff-824b-4313-93d8-91967861aeca-kube-api-access-pmh6v\") pod \"keystone-bootstrap-nldfs\" (UID: \"823dcaff-824b-4313-93d8-91967861aeca\") " pod="openstack/keystone-bootstrap-nldfs" Oct 11 07:56:27 crc kubenswrapper[5016]: I1011 07:56:27.105744 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-combined-ca-bundle\") pod \"keystone-bootstrap-nldfs\" (UID: \"823dcaff-824b-4313-93d8-91967861aeca\") " pod="openstack/keystone-bootstrap-nldfs" Oct 11 07:56:27 crc kubenswrapper[5016]: I1011 07:56:27.105767 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-fernet-keys\") pod \"keystone-bootstrap-nldfs\" (UID: \"823dcaff-824b-4313-93d8-91967861aeca\") " pod="openstack/keystone-bootstrap-nldfs" Oct 11 07:56:27 crc kubenswrapper[5016]: I1011 07:56:27.105785 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-credential-keys\") pod \"keystone-bootstrap-nldfs\" (UID: \"823dcaff-824b-4313-93d8-91967861aeca\") " pod="openstack/keystone-bootstrap-nldfs" Oct 11 07:56:27 crc kubenswrapper[5016]: I1011 07:56:27.112432 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-credential-keys\") pod \"keystone-bootstrap-nldfs\" (UID: \"823dcaff-824b-4313-93d8-91967861aeca\") " pod="openstack/keystone-bootstrap-nldfs" Oct 11 07:56:27 crc kubenswrapper[5016]: I1011 07:56:27.112519 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-fernet-keys\") pod \"keystone-bootstrap-nldfs\" (UID: \"823dcaff-824b-4313-93d8-91967861aeca\") " pod="openstack/keystone-bootstrap-nldfs" Oct 11 07:56:27 crc kubenswrapper[5016]: I1011 07:56:27.112713 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-scripts\") pod \"keystone-bootstrap-nldfs\" (UID: \"823dcaff-824b-4313-93d8-91967861aeca\") " pod="openstack/keystone-bootstrap-nldfs" Oct 11 07:56:27 crc kubenswrapper[5016]: I1011 07:56:27.113303 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-combined-ca-bundle\") pod \"keystone-bootstrap-nldfs\" (UID: \"823dcaff-824b-4313-93d8-91967861aeca\") " pod="openstack/keystone-bootstrap-nldfs" Oct 11 07:56:27 crc kubenswrapper[5016]: I1011 07:56:27.114132 5016 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-config-data\") pod \"keystone-bootstrap-nldfs\" (UID: \"823dcaff-824b-4313-93d8-91967861aeca\") " pod="openstack/keystone-bootstrap-nldfs" Oct 11 07:56:27 crc kubenswrapper[5016]: I1011 07:56:27.130822 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmh6v\" (UniqueName: \"kubernetes.io/projected/823dcaff-824b-4313-93d8-91967861aeca-kube-api-access-pmh6v\") pod \"keystone-bootstrap-nldfs\" (UID: \"823dcaff-824b-4313-93d8-91967861aeca\") " pod="openstack/keystone-bootstrap-nldfs" Oct 11 07:56:27 crc kubenswrapper[5016]: I1011 07:56:27.145163 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aea4901f-25a3-4b34-bfe0-a2885011e23d" path="/var/lib/kubelet/pods/aea4901f-25a3-4b34-bfe0-a2885011e23d/volumes" Oct 11 07:56:27 crc kubenswrapper[5016]: I1011 07:56:27.310515 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nldfs" Oct 11 07:56:27 crc kubenswrapper[5016]: E1011 07:56:27.381069 5016 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api@sha256:59448516174fc3bab679b9a8dd62cb9a9d16b5734aadbeb98e960e3b7c79bd22" Oct 11 07:56:27 crc kubenswrapper[5016]: E1011 07:56:27.381236 5016 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api@sha256:59448516174fc3bab679b9a8dd62cb9a9d16b5734aadbeb98e960e3b7c79bd22,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f9n4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvF
rom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-w78ms_openstack(b78f643d-d3c2-4cf1-8bb3-ee749e569273): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 11 07:56:27 crc kubenswrapper[5016]: E1011 07:56:27.382461 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-w78ms" podUID="b78f643d-d3c2-4cf1-8bb3-ee749e569273" Oct 11 07:56:28 crc kubenswrapper[5016]: E1011 07:56:28.157684 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api@sha256:59448516174fc3bab679b9a8dd62cb9a9d16b5734aadbeb98e960e3b7c79bd22\\\"\"" pod="openstack/placement-db-sync-w78ms" podUID="b78f643d-d3c2-4cf1-8bb3-ee749e569273" Oct 11 07:56:29 crc kubenswrapper[5016]: I1011 07:56:29.088229 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc9d58d7-pp672" Oct 11 07:56:29 crc kubenswrapper[5016]: I1011 07:56:29.152324 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c213bf50-5935-48bb-be54-2e1396bc6e06-dns-svc\") pod \"c213bf50-5935-48bb-be54-2e1396bc6e06\" (UID: \"c213bf50-5935-48bb-be54-2e1396bc6e06\") " Oct 11 07:56:29 crc kubenswrapper[5016]: I1011 07:56:29.152482 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c213bf50-5935-48bb-be54-2e1396bc6e06-ovsdbserver-sb\") pod \"c213bf50-5935-48bb-be54-2e1396bc6e06\" (UID: \"c213bf50-5935-48bb-be54-2e1396bc6e06\") " Oct 11 07:56:29 crc kubenswrapper[5016]: I1011 07:56:29.152599 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c213bf50-5935-48bb-be54-2e1396bc6e06-ovsdbserver-nb\") pod \"c213bf50-5935-48bb-be54-2e1396bc6e06\" (UID: \"c213bf50-5935-48bb-be54-2e1396bc6e06\") " Oct 11 07:56:29 crc kubenswrapper[5016]: I1011 07:56:29.152629 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c213bf50-5935-48bb-be54-2e1396bc6e06-config\") pod \"c213bf50-5935-48bb-be54-2e1396bc6e06\" (UID: \"c213bf50-5935-48bb-be54-2e1396bc6e06\") " Oct 11 07:56:29 crc kubenswrapper[5016]: I1011 07:56:29.152673 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72nj4\" (UniqueName: \"kubernetes.io/projected/c213bf50-5935-48bb-be54-2e1396bc6e06-kube-api-access-72nj4\") pod \"c213bf50-5935-48bb-be54-2e1396bc6e06\" (UID: \"c213bf50-5935-48bb-be54-2e1396bc6e06\") " Oct 11 07:56:29 crc kubenswrapper[5016]: I1011 07:56:29.175910 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c213bf50-5935-48bb-be54-2e1396bc6e06-kube-api-access-72nj4" (OuterVolumeSpecName: "kube-api-access-72nj4") pod "c213bf50-5935-48bb-be54-2e1396bc6e06" (UID: "c213bf50-5935-48bb-be54-2e1396bc6e06"). InnerVolumeSpecName "kube-api-access-72nj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:56:29 crc kubenswrapper[5016]: I1011 07:56:29.176630 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc9d58d7-pp672" event={"ID":"c213bf50-5935-48bb-be54-2e1396bc6e06","Type":"ContainerDied","Data":"e6200759d2e02f1dbdb77d389f685bb9c944258bb001ef891bfc962680c27bda"} Oct 11 07:56:29 crc kubenswrapper[5016]: I1011 07:56:29.176702 5016 scope.go:117] "RemoveContainer" containerID="771deb6c67292b4198ad4ea96f2b0f16331d4e77e89e4cadcd7a8338abfe354f" Oct 11 07:56:29 crc kubenswrapper[5016]: I1011 07:56:29.176798 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc9d58d7-pp672" Oct 11 07:56:29 crc kubenswrapper[5016]: I1011 07:56:29.195700 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c213bf50-5935-48bb-be54-2e1396bc6e06-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c213bf50-5935-48bb-be54-2e1396bc6e06" (UID: "c213bf50-5935-48bb-be54-2e1396bc6e06"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:56:29 crc kubenswrapper[5016]: I1011 07:56:29.201154 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c213bf50-5935-48bb-be54-2e1396bc6e06-config" (OuterVolumeSpecName: "config") pod "c213bf50-5935-48bb-be54-2e1396bc6e06" (UID: "c213bf50-5935-48bb-be54-2e1396bc6e06"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:56:29 crc kubenswrapper[5016]: I1011 07:56:29.205375 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c213bf50-5935-48bb-be54-2e1396bc6e06-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c213bf50-5935-48bb-be54-2e1396bc6e06" (UID: "c213bf50-5935-48bb-be54-2e1396bc6e06"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:56:29 crc kubenswrapper[5016]: I1011 07:56:29.206918 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c213bf50-5935-48bb-be54-2e1396bc6e06-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c213bf50-5935-48bb-be54-2e1396bc6e06" (UID: "c213bf50-5935-48bb-be54-2e1396bc6e06"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:56:29 crc kubenswrapper[5016]: I1011 07:56:29.255307 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72nj4\" (UniqueName: \"kubernetes.io/projected/c213bf50-5935-48bb-be54-2e1396bc6e06-kube-api-access-72nj4\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:29 crc kubenswrapper[5016]: I1011 07:56:29.255357 5016 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c213bf50-5935-48bb-be54-2e1396bc6e06-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:29 crc kubenswrapper[5016]: I1011 07:56:29.255369 5016 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c213bf50-5935-48bb-be54-2e1396bc6e06-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:29 crc kubenswrapper[5016]: I1011 07:56:29.255379 5016 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c213bf50-5935-48bb-be54-2e1396bc6e06-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:29 crc kubenswrapper[5016]: I1011 07:56:29.255389 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c213bf50-5935-48bb-be54-2e1396bc6e06-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:29 crc kubenswrapper[5016]: I1011 07:56:29.429278 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-df49866-g5nkl"] Oct 11 07:56:29 crc kubenswrapper[5016]: I1011 07:56:29.511751 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc9d58d7-pp672"] Oct 11 07:56:29 crc kubenswrapper[5016]: I1011 07:56:29.521132 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dc9d58d7-pp672"] Oct 11 07:56:30 crc kubenswrapper[5016]: E1011 07:56:30.112499 5016 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:85c75d60e1bd2f8a9ea0a2bb21a8df64c0a6f7b504cc1a05a355981d4b90e92f" Oct 11 07:56:30 crc kubenswrapper[5016]: E1011 07:56:30.112674 5016 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:85c75d60e1bd2f8a9ea0a2bb21a8df64c0a6f7b504cc1a05a355981d4b90e92f,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k7p95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-xqmfx_openstack(8ebaa0ef-dce1-4ff4-a51c-69435ca86699): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 11 07:56:30 crc kubenswrapper[5016]: E1011 07:56:30.113881 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-xqmfx" podUID="8ebaa0ef-dce1-4ff4-a51c-69435ca86699" Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.186380 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76ff89bf89-rvq2g" event={"ID":"cf835037-4e3d-4b3f-80bc-7629cfd8da5c","Type":"ContainerDied","Data":"e52a1b59ce7bfe2f7dbaa10ed8f663f4bfcdeef376816a96a4cd9db0e90a6dd7"} Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.186719 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e52a1b59ce7bfe2f7dbaa10ed8f663f4bfcdeef376816a96a4cd9db0e90a6dd7" Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.188501 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bfdfffbc7-ft2dc" 
event={"ID":"1ec7360b-0d93-4691-aaad-9f3994cc00d7","Type":"ContainerDied","Data":"fb6ed142ad962d5a2d040b1db7fb659a0fa4f928e5eb68bb3ea8d88ee62eeb8f"} Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.188529 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb6ed142ad962d5a2d040b1db7fb659a0fa4f928e5eb68bb3ea8d88ee62eeb8f" Oct 11 07:56:30 crc kubenswrapper[5016]: E1011 07:56:30.193969 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:85c75d60e1bd2f8a9ea0a2bb21a8df64c0a6f7b504cc1a05a355981d4b90e92f\\\"\"" pod="openstack/cinder-db-sync-xqmfx" podUID="8ebaa0ef-dce1-4ff4-a51c-69435ca86699" Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.194173 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bfdfffbc7-ft2dc" Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.198145 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76ff89bf89-rvq2g" Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.271968 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ec7360b-0d93-4691-aaad-9f3994cc00d7-scripts\") pod \"1ec7360b-0d93-4691-aaad-9f3994cc00d7\" (UID: \"1ec7360b-0d93-4691-aaad-9f3994cc00d7\") " Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.272316 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ec7360b-0d93-4691-aaad-9f3994cc00d7-logs\") pod \"1ec7360b-0d93-4691-aaad-9f3994cc00d7\" (UID: \"1ec7360b-0d93-4691-aaad-9f3994cc00d7\") " Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.272455 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ec7360b-0d93-4691-aaad-9f3994cc00d7-config-data\") pod \"1ec7360b-0d93-4691-aaad-9f3994cc00d7\" (UID: \"1ec7360b-0d93-4691-aaad-9f3994cc00d7\") " Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.272553 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmh8m\" (UniqueName: \"kubernetes.io/projected/1ec7360b-0d93-4691-aaad-9f3994cc00d7-kube-api-access-mmh8m\") pod \"1ec7360b-0d93-4691-aaad-9f3994cc00d7\" (UID: \"1ec7360b-0d93-4691-aaad-9f3994cc00d7\") " Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.272683 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csbkt\" (UniqueName: \"kubernetes.io/projected/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-kube-api-access-csbkt\") pod \"cf835037-4e3d-4b3f-80bc-7629cfd8da5c\" (UID: \"cf835037-4e3d-4b3f-80bc-7629cfd8da5c\") " Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.272787 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-config-data\") pod \"cf835037-4e3d-4b3f-80bc-7629cfd8da5c\" (UID: \"cf835037-4e3d-4b3f-80bc-7629cfd8da5c\") " Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.272932 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ec7360b-0d93-4691-aaad-9f3994cc00d7-horizon-secret-key\") 
pod \"1ec7360b-0d93-4691-aaad-9f3994cc00d7\" (UID: \"1ec7360b-0d93-4691-aaad-9f3994cc00d7\") " Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.273157 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-logs\") pod \"cf835037-4e3d-4b3f-80bc-7629cfd8da5c\" (UID: \"cf835037-4e3d-4b3f-80bc-7629cfd8da5c\") " Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.273406 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-horizon-secret-key\") pod \"cf835037-4e3d-4b3f-80bc-7629cfd8da5c\" (UID: \"cf835037-4e3d-4b3f-80bc-7629cfd8da5c\") " Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.275093 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-scripts\") pod \"cf835037-4e3d-4b3f-80bc-7629cfd8da5c\" (UID: \"cf835037-4e3d-4b3f-80bc-7629cfd8da5c\") " Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.277452 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ec7360b-0d93-4691-aaad-9f3994cc00d7-scripts" (OuterVolumeSpecName: "scripts") pod "1ec7360b-0d93-4691-aaad-9f3994cc00d7" (UID: "1ec7360b-0d93-4691-aaad-9f3994cc00d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.278229 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ec7360b-0d93-4691-aaad-9f3994cc00d7-logs" (OuterVolumeSpecName: "logs") pod "1ec7360b-0d93-4691-aaad-9f3994cc00d7" (UID: "1ec7360b-0d93-4691-aaad-9f3994cc00d7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.279943 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ec7360b-0d93-4691-aaad-9f3994cc00d7-config-data" (OuterVolumeSpecName: "config-data") pod "1ec7360b-0d93-4691-aaad-9f3994cc00d7" (UID: "1ec7360b-0d93-4691-aaad-9f3994cc00d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.281331 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-scripts" (OuterVolumeSpecName: "scripts") pod "cf835037-4e3d-4b3f-80bc-7629cfd8da5c" (UID: "cf835037-4e3d-4b3f-80bc-7629cfd8da5c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.281551 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-config-data" (OuterVolumeSpecName: "config-data") pod "cf835037-4e3d-4b3f-80bc-7629cfd8da5c" (UID: "cf835037-4e3d-4b3f-80bc-7629cfd8da5c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.282394 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-logs" (OuterVolumeSpecName: "logs") pod "cf835037-4e3d-4b3f-80bc-7629cfd8da5c" (UID: "cf835037-4e3d-4b3f-80bc-7629cfd8da5c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.286498 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ec7360b-0d93-4691-aaad-9f3994cc00d7-kube-api-access-mmh8m" (OuterVolumeSpecName: "kube-api-access-mmh8m") pod "1ec7360b-0d93-4691-aaad-9f3994cc00d7" (UID: "1ec7360b-0d93-4691-aaad-9f3994cc00d7"). InnerVolumeSpecName "kube-api-access-mmh8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.286707 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-kube-api-access-csbkt" (OuterVolumeSpecName: "kube-api-access-csbkt") pod "cf835037-4e3d-4b3f-80bc-7629cfd8da5c" (UID: "cf835037-4e3d-4b3f-80bc-7629cfd8da5c"). InnerVolumeSpecName "kube-api-access-csbkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.288524 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ec7360b-0d93-4691-aaad-9f3994cc00d7-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1ec7360b-0d93-4691-aaad-9f3994cc00d7" (UID: "1ec7360b-0d93-4691-aaad-9f3994cc00d7"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.291690 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "cf835037-4e3d-4b3f-80bc-7629cfd8da5c" (UID: "cf835037-4e3d-4b3f-80bc-7629cfd8da5c"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.377692 5016 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ec7360b-0d93-4691-aaad-9f3994cc00d7-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.377724 5016 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-logs\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.377735 5016 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.377744 5016 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-scripts\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.377752 5016 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ec7360b-0d93-4691-aaad-9f3994cc00d7-scripts\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.377759 5016 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ec7360b-0d93-4691-aaad-9f3994cc00d7-logs\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.377767 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ec7360b-0d93-4691-aaad-9f3994cc00d7-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.377775 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmh8m\" (UniqueName: \"kubernetes.io/projected/1ec7360b-0d93-4691-aaad-9f3994cc00d7-kube-api-access-mmh8m\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.377784 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csbkt\" (UniqueName: \"kubernetes.io/projected/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-kube-api-access-csbkt\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.377792 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf835037-4e3d-4b3f-80bc-7629cfd8da5c-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.727561 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-dc9d58d7-pp672" podUID="c213bf50-5935-48bb-be54-2e1396bc6e06" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Oct 11 07:56:30 crc kubenswrapper[5016]: E1011 07:56:30.868863 5016 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:cbe345acb37e57986ecf6685d28c72d0e639bdb493a18e9d3ba947d6c3a16384" Oct 11 07:56:30 crc kubenswrapper[5016]: E1011 07:56:30.869017 5016 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:cbe345acb37e57986ecf6685d28c72d0e639bdb493a18e9d3ba947d6c3a16384,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qfpf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-92qrj_openstack(d426ddd3-5eae-4816-a141-32b614642d39): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 11 07:56:30 crc kubenswrapper[5016]: E1011 07:56:30.871773 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-92qrj" podUID="d426ddd3-5eae-4816-a141-32b614642d39" Oct 11 07:56:30 crc kubenswrapper[5016]: W1011 07:56:30.881429 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod771aebc7_25b0_45ef_bbd4_ed6c367b998b.slice/crio-c326779a89292a9eae3d03eff337c85b93a2a045888e097239d42014506ce0fd WatchSource:0}: Error finding container c326779a89292a9eae3d03eff337c85b93a2a045888e097239d42014506ce0fd: Status 404 returned error can't find the container with id c326779a89292a9eae3d03eff337c85b93a2a045888e097239d42014506ce0fd Oct 11 07:56:30 crc kubenswrapper[5016]: I1011 07:56:30.923455 5016 scope.go:117] "RemoveContainer" containerID="59c5fc7717bcb2fe3c87521ee9e80c198fad56496e34e9507f8d58d3ea5bd065" Oct 11 07:56:31 crc kubenswrapper[5016]: I1011 07:56:31.149084 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c213bf50-5935-48bb-be54-2e1396bc6e06" path="/var/lib/kubelet/pods/c213bf50-5935-48bb-be54-2e1396bc6e06/volumes" Oct 11 07:56:31 crc kubenswrapper[5016]: I1011 07:56:31.160872 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65987df486-lvrh6"] Oct 11 07:56:31 crc kubenswrapper[5016]: I1011 07:56:31.204446 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65987df486-lvrh6" 
event={"ID":"e3e9db46-849a-4957-a6ff-5a05cb5c9744","Type":"ContainerStarted","Data":"19bc8d40c8970bfafa0e2ee89966db93f486ae7bd01d5cd0ab124995d4701498"} Oct 11 07:56:31 crc kubenswrapper[5016]: I1011 07:56:31.214971 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-df49866-g5nkl" event={"ID":"771aebc7-25b0-45ef-bbd4-ed6c367b998b","Type":"ContainerStarted","Data":"c326779a89292a9eae3d03eff337c85b93a2a045888e097239d42014506ce0fd"} Oct 11 07:56:31 crc kubenswrapper[5016]: I1011 07:56:31.217610 5016 generic.go:334] "Generic (PLEG): container finished" podID="492cebf0-6a35-4ce7-8c85-2298fd8ae390" containerID="4e292029c9be340a8aa5bfb997745320e4060a28e6ade7a360225c2ba9aa8f75" exitCode=0 Oct 11 07:56:31 crc kubenswrapper[5016]: I1011 07:56:31.217683 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rfhnk" event={"ID":"492cebf0-6a35-4ce7-8c85-2298fd8ae390","Type":"ContainerDied","Data":"4e292029c9be340a8aa5bfb997745320e4060a28e6ade7a360225c2ba9aa8f75"} Oct 11 07:56:31 crc kubenswrapper[5016]: I1011 07:56:31.222145 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bfdfffbc7-ft2dc" Oct 11 07:56:31 crc kubenswrapper[5016]: I1011 07:56:31.222610 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76ff89bf89-rvq2g" Oct 11 07:56:31 crc kubenswrapper[5016]: E1011 07:56:31.223257 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:cbe345acb37e57986ecf6685d28c72d0e639bdb493a18e9d3ba947d6c3a16384\\\"\"" pod="openstack/barbican-db-sync-92qrj" podUID="d426ddd3-5eae-4816-a141-32b614642d39" Oct 11 07:56:31 crc kubenswrapper[5016]: I1011 07:56:31.456079 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nldfs"] Oct 11 07:56:31 crc kubenswrapper[5016]: I1011 07:56:31.463464 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-76ff89bf89-rvq2g"] Oct 11 07:56:31 crc kubenswrapper[5016]: I1011 07:56:31.469854 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-76ff89bf89-rvq2g"] Oct 11 07:56:31 crc kubenswrapper[5016]: I1011 07:56:31.537997 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7bfdfffbc7-ft2dc"] Oct 11 07:56:31 crc kubenswrapper[5016]: I1011 07:56:31.545978 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7bfdfffbc7-ft2dc"] Oct 11 07:56:32 crc kubenswrapper[5016]: I1011 07:56:32.234674 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65987df486-lvrh6" event={"ID":"e3e9db46-849a-4957-a6ff-5a05cb5c9744","Type":"ContainerStarted","Data":"fd83719aa2cd4a93bd705b5be1e6e4707c56f0f495919c994dff856fb9d11acd"} Oct 11 07:56:32 crc kubenswrapper[5016]: I1011 07:56:32.236259 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65987df486-lvrh6" event={"ID":"e3e9db46-849a-4957-a6ff-5a05cb5c9744","Type":"ContainerStarted","Data":"c84493a76cf4dc44d54b17a6bf4ebb1f82e709e725aec8d4a03c943c323dae45"} Oct 11 07:56:32 crc kubenswrapper[5016]: I1011 07:56:32.244848 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-df49866-g5nkl" 
event={"ID":"771aebc7-25b0-45ef-bbd4-ed6c367b998b","Type":"ContainerStarted","Data":"0fcda72eca430ed7251bbba4b88112df6de9debb48a9a93bd9030d94bd84e9f6"} Oct 11 07:56:32 crc kubenswrapper[5016]: I1011 07:56:32.244909 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-df49866-g5nkl" event={"ID":"771aebc7-25b0-45ef-bbd4-ed6c367b998b","Type":"ContainerStarted","Data":"88a3fa4b2d02d92df9c385aedaee1f94d3f98a5a70e070da060f674d8839c7cf"} Oct 11 07:56:32 crc kubenswrapper[5016]: I1011 07:56:32.250418 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"353d22c0-bfdb-4599-a97c-9000eda08e3d","Type":"ContainerStarted","Data":"60d3ffb0056dcca585f6f4dbcc8321cc96da604e2a4c1ea74fbf53d9ad7cc790"} Oct 11 07:56:32 crc kubenswrapper[5016]: I1011 07:56:32.252091 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nldfs" event={"ID":"823dcaff-824b-4313-93d8-91967861aeca","Type":"ContainerStarted","Data":"1e204051cd4f6c35d0f123c72f0c8b312ba7a9350b78d47e0dd1cee367fc8615"} Oct 11 07:56:32 crc kubenswrapper[5016]: I1011 07:56:32.252119 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nldfs" event={"ID":"823dcaff-824b-4313-93d8-91967861aeca","Type":"ContainerStarted","Data":"8a03d852fe52acf4da283ddcec9437367f9bedc9ba7f1e0f33d29cebb24f1b6e"} Oct 11 07:56:32 crc kubenswrapper[5016]: I1011 07:56:32.254845 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6679b846c9-4jxzp" event={"ID":"4acf875d-ca40-47ff-a2e9-cdf09c447232","Type":"ContainerStarted","Data":"5d72718116db8e8695ba990f7751d940018b593fdc6a0ceb16b21dee3f0ae9d7"} Oct 11 07:56:32 crc kubenswrapper[5016]: I1011 07:56:32.254877 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6679b846c9-4jxzp" event={"ID":"4acf875d-ca40-47ff-a2e9-cdf09c447232","Type":"ContainerStarted","Data":"a3d2d19a0256e30a37d14fdf3f91c36918a32758082be313bc1a5b4b3eb95faf"} Oct 11 07:56:32 crc kubenswrapper[5016]: I1011 07:56:32.255112 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6679b846c9-4jxzp" podUID="4acf875d-ca40-47ff-a2e9-cdf09c447232" containerName="horizon" containerID="cri-o://a3d2d19a0256e30a37d14fdf3f91c36918a32758082be313bc1a5b4b3eb95faf" gracePeriod=30 Oct 11 07:56:32 crc kubenswrapper[5016]: I1011 07:56:32.255121 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6679b846c9-4jxzp" podUID="4acf875d-ca40-47ff-a2e9-cdf09c447232" containerName="horizon-log" containerID="cri-o://5d72718116db8e8695ba990f7751d940018b593fdc6a0ceb16b21dee3f0ae9d7" gracePeriod=30 Oct 11 07:56:32 crc kubenswrapper[5016]: I1011 07:56:32.264994 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-65987df486-lvrh6" podStartSLOduration=31.264979428 podStartE2EDuration="31.264979428s" podCreationTimestamp="2025-10-11 07:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:56:32.260304443 +0000 UTC m=+980.160760389" watchObservedRunningTime="2025-10-11 07:56:32.264979428 +0000 UTC m=+980.165435374" Oct 11 07:56:32 crc kubenswrapper[5016]: I1011 07:56:32.287478 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-nldfs" podStartSLOduration=6.287464813 podStartE2EDuration="6.287464813s" podCreationTimestamp="2025-10-11 
07:56:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:56:32.286847337 +0000 UTC m=+980.187303283" watchObservedRunningTime="2025-10-11 07:56:32.287464813 +0000 UTC m=+980.187920759" Oct 11 07:56:32 crc kubenswrapper[5016]: I1011 07:56:32.304343 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6679b846c9-4jxzp" podStartSLOduration=2.671839984 podStartE2EDuration="40.304326166s" podCreationTimestamp="2025-10-11 07:55:52 +0000 UTC" firstStartedPulling="2025-10-11 07:55:53.256191671 +0000 UTC m=+941.156647617" lastFinishedPulling="2025-10-11 07:56:30.888677853 +0000 UTC m=+978.789133799" observedRunningTime="2025-10-11 07:56:32.301803729 +0000 UTC m=+980.202259675" watchObservedRunningTime="2025-10-11 07:56:32.304326166 +0000 UTC m=+980.204782102" Oct 11 07:56:32 crc kubenswrapper[5016]: I1011 07:56:32.331984 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-df49866-g5nkl" podStartSLOduration=31.33196897 podStartE2EDuration="31.33196897s" podCreationTimestamp="2025-10-11 07:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:56:32.330414188 +0000 UTC m=+980.230870134" watchObservedRunningTime="2025-10-11 07:56:32.33196897 +0000 UTC m=+980.232424916" Oct 11 07:56:32 crc kubenswrapper[5016]: I1011 07:56:32.632576 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-rfhnk" Oct 11 07:56:32 crc kubenswrapper[5016]: I1011 07:56:32.656543 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6679b846c9-4jxzp" Oct 11 07:56:32 crc kubenswrapper[5016]: I1011 07:56:32.724072 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cc6x\" (UniqueName: \"kubernetes.io/projected/492cebf0-6a35-4ce7-8c85-2298fd8ae390-kube-api-access-6cc6x\") pod \"492cebf0-6a35-4ce7-8c85-2298fd8ae390\" (UID: \"492cebf0-6a35-4ce7-8c85-2298fd8ae390\") " Oct 11 07:56:32 crc kubenswrapper[5016]: I1011 07:56:32.724306 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/492cebf0-6a35-4ce7-8c85-2298fd8ae390-config\") pod \"492cebf0-6a35-4ce7-8c85-2298fd8ae390\" (UID: \"492cebf0-6a35-4ce7-8c85-2298fd8ae390\") " Oct 11 07:56:32 crc kubenswrapper[5016]: I1011 07:56:32.724352 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492cebf0-6a35-4ce7-8c85-2298fd8ae390-combined-ca-bundle\") pod \"492cebf0-6a35-4ce7-8c85-2298fd8ae390\" (UID: \"492cebf0-6a35-4ce7-8c85-2298fd8ae390\") " Oct 11 07:56:32 crc kubenswrapper[5016]: I1011 07:56:32.729852 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/492cebf0-6a35-4ce7-8c85-2298fd8ae390-kube-api-access-6cc6x" (OuterVolumeSpecName: "kube-api-access-6cc6x") pod "492cebf0-6a35-4ce7-8c85-2298fd8ae390" (UID: "492cebf0-6a35-4ce7-8c85-2298fd8ae390"). InnerVolumeSpecName "kube-api-access-6cc6x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:56:32 crc kubenswrapper[5016]: I1011 07:56:32.766254 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492cebf0-6a35-4ce7-8c85-2298fd8ae390-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "492cebf0-6a35-4ce7-8c85-2298fd8ae390" (UID: "492cebf0-6a35-4ce7-8c85-2298fd8ae390"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:56:32 crc kubenswrapper[5016]: I1011 07:56:32.773777 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/492cebf0-6a35-4ce7-8c85-2298fd8ae390-config" (OuterVolumeSpecName: "config") pod "492cebf0-6a35-4ce7-8c85-2298fd8ae390" (UID: "492cebf0-6a35-4ce7-8c85-2298fd8ae390"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:56:32 crc kubenswrapper[5016]: I1011 07:56:32.825592 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cc6x\" (UniqueName: \"kubernetes.io/projected/492cebf0-6a35-4ce7-8c85-2298fd8ae390-kube-api-access-6cc6x\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:32 crc kubenswrapper[5016]: I1011 07:56:32.825770 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/492cebf0-6a35-4ce7-8c85-2298fd8ae390-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:32 crc kubenswrapper[5016]: I1011 07:56:32.825823 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/492cebf0-6a35-4ce7-8c85-2298fd8ae390-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.150774 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ec7360b-0d93-4691-aaad-9f3994cc00d7" path="/var/lib/kubelet/pods/1ec7360b-0d93-4691-aaad-9f3994cc00d7/volumes" Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.151733 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf835037-4e3d-4b3f-80bc-7629cfd8da5c" path="/var/lib/kubelet/pods/cf835037-4e3d-4b3f-80bc-7629cfd8da5c/volumes" Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.270986 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-rfhnk" Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.270999 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rfhnk" event={"ID":"492cebf0-6a35-4ce7-8c85-2298fd8ae390","Type":"ContainerDied","Data":"35b602e1aea260977edbecf212f1396d57d7d31fa3e39bf7613fe3d0bad17e0e"} Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.271054 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35b602e1aea260977edbecf212f1396d57d7d31fa3e39bf7613fe3d0bad17e0e" Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.276465 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"353d22c0-bfdb-4599-a97c-9000eda08e3d","Type":"ContainerStarted","Data":"0a94574dbcd6be5b2e4251e34cfd6305e007b08fe4785f06b109089774d9900a"} Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.519975 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f7d8dc7ff-cxklp"] Oct 11 07:56:33 crc kubenswrapper[5016]: E1011 07:56:33.520301 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="492cebf0-6a35-4ce7-8c85-2298fd8ae390" containerName="neutron-db-sync" Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.520313 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="492cebf0-6a35-4ce7-8c85-2298fd8ae390" containerName="neutron-db-sync" Oct 11 07:56:33 crc kubenswrapper[5016]: E1011 07:56:33.520333 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c213bf50-5935-48bb-be54-2e1396bc6e06" containerName="init" Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.520340 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="c213bf50-5935-48bb-be54-2e1396bc6e06" containerName="init" Oct 11 07:56:33 crc kubenswrapper[5016]: E1011 07:56:33.520350 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c213bf50-5935-48bb-be54-2e1396bc6e06" containerName="dnsmasq-dns" Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.520357 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="c213bf50-5935-48bb-be54-2e1396bc6e06" containerName="dnsmasq-dns" Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.520499 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="c213bf50-5935-48bb-be54-2e1396bc6e06" containerName="dnsmasq-dns" Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.520548 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="492cebf0-6a35-4ce7-8c85-2298fd8ae390" containerName="neutron-db-sync" Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.533443 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f7d8dc7ff-cxklp" Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.545519 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f7d8dc7ff-cxklp"] Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.602072 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-59dc8c6b68-jd4p4"] Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.613173 5016 util.go:30] "No sandbox for pod can be found. 
Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.635629 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.638059 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-g2sfr"
Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.639471 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.641312 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.646276 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-config\") pod \"dnsmasq-dns-7f7d8dc7ff-cxklp\" (UID: \"94fe6cdc-7b3c-4643-bc8a-664a9c12590e\") " pod="openstack/dnsmasq-dns-7f7d8dc7ff-cxklp"
Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.646416 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kvrn\" (UniqueName: \"kubernetes.io/projected/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-kube-api-access-4kvrn\") pod \"dnsmasq-dns-7f7d8dc7ff-cxklp\" (UID: \"94fe6cdc-7b3c-4643-bc8a-664a9c12590e\") " pod="openstack/dnsmasq-dns-7f7d8dc7ff-cxklp"
Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.646515 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-ovsdbserver-sb\") pod \"dnsmasq-dns-7f7d8dc7ff-cxklp\" (UID: \"94fe6cdc-7b3c-4643-bc8a-664a9c12590e\") " pod="openstack/dnsmasq-dns-7f7d8dc7ff-cxklp"
Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.646590 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-dns-svc\") pod \"dnsmasq-dns-7f7d8dc7ff-cxklp\" (UID: \"94fe6cdc-7b3c-4643-bc8a-664a9c12590e\") " pod="openstack/dnsmasq-dns-7f7d8dc7ff-cxklp"
Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.646754 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-ovsdbserver-nb\") pod \"dnsmasq-dns-7f7d8dc7ff-cxklp\" (UID: \"94fe6cdc-7b3c-4643-bc8a-664a9c12590e\") " pod="openstack/dnsmasq-dns-7f7d8dc7ff-cxklp"
Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.678865 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59dc8c6b68-jd4p4"]
Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.749551 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-config\") pod \"dnsmasq-dns-7f7d8dc7ff-cxklp\" (UID: \"94fe6cdc-7b3c-4643-bc8a-664a9c12590e\") " pod="openstack/dnsmasq-dns-7f7d8dc7ff-cxklp"
Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.749819 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8g97\" (UniqueName: \"kubernetes.io/projected/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-kube-api-access-s8g97\") pod \"neutron-59dc8c6b68-jd4p4\" (UID: \"4f6cfa91-27d5-4f55-ba9f-9d61367584ca\") " pod="openstack/neutron-59dc8c6b68-jd4p4"
Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.749874 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kvrn\" (UniqueName: \"kubernetes.io/projected/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-kube-api-access-4kvrn\") pod \"dnsmasq-dns-7f7d8dc7ff-cxklp\" (UID: \"94fe6cdc-7b3c-4643-bc8a-664a9c12590e\") " pod="openstack/dnsmasq-dns-7f7d8dc7ff-cxklp"
Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.749911 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-ovsdbserver-sb\") pod \"dnsmasq-dns-7f7d8dc7ff-cxklp\" (UID: \"94fe6cdc-7b3c-4643-bc8a-664a9c12590e\") " pod="openstack/dnsmasq-dns-7f7d8dc7ff-cxklp"
Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.750153 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-ovndb-tls-certs\") pod \"neutron-59dc8c6b68-jd4p4\" (UID: \"4f6cfa91-27d5-4f55-ba9f-9d61367584ca\") " pod="openstack/neutron-59dc8c6b68-jd4p4"
Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.750196 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-httpd-config\") pod \"neutron-59dc8c6b68-jd4p4\" (UID: \"4f6cfa91-27d5-4f55-ba9f-9d61367584ca\") " pod="openstack/neutron-59dc8c6b68-jd4p4"
Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.750432 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-dns-svc\") pod \"dnsmasq-dns-7f7d8dc7ff-cxklp\" (UID: \"94fe6cdc-7b3c-4643-bc8a-664a9c12590e\") " pod="openstack/dnsmasq-dns-7f7d8dc7ff-cxklp"
Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.750469 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-config\") pod \"neutron-59dc8c6b68-jd4p4\" (UID: \"4f6cfa91-27d5-4f55-ba9f-9d61367584ca\") " pod="openstack/neutron-59dc8c6b68-jd4p4"
Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.750545 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-combined-ca-bundle\") pod \"neutron-59dc8c6b68-jd4p4\" (UID: \"4f6cfa91-27d5-4f55-ba9f-9d61367584ca\") " pod="openstack/neutron-59dc8c6b68-jd4p4"
Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.750597 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-ovsdbserver-nb\") pod \"dnsmasq-dns-7f7d8dc7ff-cxklp\" (UID: \"94fe6cdc-7b3c-4643-bc8a-664a9c12590e\") " pod="openstack/dnsmasq-dns-7f7d8dc7ff-cxklp"
Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.751471 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-ovsdbserver-nb\") pod \"dnsmasq-dns-7f7d8dc7ff-cxklp\" (UID: \"94fe6cdc-7b3c-4643-bc8a-664a9c12590e\") " pod="openstack/dnsmasq-dns-7f7d8dc7ff-cxklp"
\"kubernetes.io/configmap/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-ovsdbserver-nb\") pod \"dnsmasq-dns-7f7d8dc7ff-cxklp\" (UID: \"94fe6cdc-7b3c-4643-bc8a-664a9c12590e\") " pod="openstack/dnsmasq-dns-7f7d8dc7ff-cxklp" Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.751638 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-ovsdbserver-sb\") pod \"dnsmasq-dns-7f7d8dc7ff-cxklp\" (UID: \"94fe6cdc-7b3c-4643-bc8a-664a9c12590e\") " pod="openstack/dnsmasq-dns-7f7d8dc7ff-cxklp" Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.763779 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-config\") pod \"dnsmasq-dns-7f7d8dc7ff-cxklp\" (UID: \"94fe6cdc-7b3c-4643-bc8a-664a9c12590e\") " pod="openstack/dnsmasq-dns-7f7d8dc7ff-cxklp" Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.766105 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-dns-svc\") pod \"dnsmasq-dns-7f7d8dc7ff-cxklp\" (UID: \"94fe6cdc-7b3c-4643-bc8a-664a9c12590e\") " pod="openstack/dnsmasq-dns-7f7d8dc7ff-cxklp" Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.811328 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kvrn\" (UniqueName: \"kubernetes.io/projected/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-kube-api-access-4kvrn\") pod \"dnsmasq-dns-7f7d8dc7ff-cxklp\" (UID: \"94fe6cdc-7b3c-4643-bc8a-664a9c12590e\") " pod="openstack/dnsmasq-dns-7f7d8dc7ff-cxklp" Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.860543 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f7d8dc7ff-cxklp" Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.866784 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8g97\" (UniqueName: \"kubernetes.io/projected/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-kube-api-access-s8g97\") pod \"neutron-59dc8c6b68-jd4p4\" (UID: \"4f6cfa91-27d5-4f55-ba9f-9d61367584ca\") " pod="openstack/neutron-59dc8c6b68-jd4p4" Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.866846 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-ovndb-tls-certs\") pod \"neutron-59dc8c6b68-jd4p4\" (UID: \"4f6cfa91-27d5-4f55-ba9f-9d61367584ca\") " pod="openstack/neutron-59dc8c6b68-jd4p4" Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.866870 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-httpd-config\") pod \"neutron-59dc8c6b68-jd4p4\" (UID: \"4f6cfa91-27d5-4f55-ba9f-9d61367584ca\") " pod="openstack/neutron-59dc8c6b68-jd4p4" Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.866886 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-config\") pod \"neutron-59dc8c6b68-jd4p4\" (UID: \"4f6cfa91-27d5-4f55-ba9f-9d61367584ca\") " pod="openstack/neutron-59dc8c6b68-jd4p4" Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.866930 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-combined-ca-bundle\") pod \"neutron-59dc8c6b68-jd4p4\" (UID: \"4f6cfa91-27d5-4f55-ba9f-9d61367584ca\") " pod="openstack/neutron-59dc8c6b68-jd4p4" Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.876564 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-httpd-config\") pod \"neutron-59dc8c6b68-jd4p4\" (UID: \"4f6cfa91-27d5-4f55-ba9f-9d61367584ca\") " pod="openstack/neutron-59dc8c6b68-jd4p4" Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.877515 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-config\") pod \"neutron-59dc8c6b68-jd4p4\" (UID: \"4f6cfa91-27d5-4f55-ba9f-9d61367584ca\") " pod="openstack/neutron-59dc8c6b68-jd4p4" Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.891847 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8g97\" (UniqueName: \"kubernetes.io/projected/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-kube-api-access-s8g97\") pod \"neutron-59dc8c6b68-jd4p4\" (UID: \"4f6cfa91-27d5-4f55-ba9f-9d61367584ca\") " pod="openstack/neutron-59dc8c6b68-jd4p4" Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.896174 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-ovndb-tls-certs\") pod \"neutron-59dc8c6b68-jd4p4\" (UID: \"4f6cfa91-27d5-4f55-ba9f-9d61367584ca\") " pod="openstack/neutron-59dc8c6b68-jd4p4" Oct 11 07:56:33 crc kubenswrapper[5016]: I1011 07:56:33.951567 5016 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-combined-ca-bundle\") pod \"neutron-59dc8c6b68-jd4p4\" (UID: \"4f6cfa91-27d5-4f55-ba9f-9d61367584ca\") " pod="openstack/neutron-59dc8c6b68-jd4p4" Oct 11 07:56:34 crc kubenswrapper[5016]: I1011 07:56:34.234234 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59dc8c6b68-jd4p4" Oct 11 07:56:34 crc kubenswrapper[5016]: I1011 07:56:34.236945 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f7d8dc7ff-cxklp"] Oct 11 07:56:34 crc kubenswrapper[5016]: I1011 07:56:34.319582 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f7d8dc7ff-cxklp" event={"ID":"94fe6cdc-7b3c-4643-bc8a-664a9c12590e","Type":"ContainerStarted","Data":"f45197384b64137989d8e2a25c7f1a5b885afe007a1abd74382ba9498fe7e7f8"} Oct 11 07:56:34 crc kubenswrapper[5016]: I1011 07:56:34.958032 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59dc8c6b68-jd4p4"] Oct 11 07:56:35 crc kubenswrapper[5016]: I1011 07:56:35.333118 5016 generic.go:334] "Generic (PLEG): container finished" podID="94fe6cdc-7b3c-4643-bc8a-664a9c12590e" containerID="662558b3e8f1aadabf84dad245794e1b311b36f52504bc3be5c31b4384838884" exitCode=0 Oct 11 07:56:35 crc kubenswrapper[5016]: I1011 07:56:35.333489 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f7d8dc7ff-cxklp" event={"ID":"94fe6cdc-7b3c-4643-bc8a-664a9c12590e","Type":"ContainerDied","Data":"662558b3e8f1aadabf84dad245794e1b311b36f52504bc3be5c31b4384838884"} Oct 11 07:56:35 crc kubenswrapper[5016]: I1011 07:56:35.337414 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59dc8c6b68-jd4p4" event={"ID":"4f6cfa91-27d5-4f55-ba9f-9d61367584ca","Type":"ContainerStarted","Data":"9adfcc3ec37847bb4e7f2edfd0a4c73e1f0b21541973682b8a6a48af908d9613"} Oct 11 07:56:36 crc kubenswrapper[5016]: I1011 07:56:36.049648 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5c8b64649f-69xkr"] Oct 11 07:56:36 crc kubenswrapper[5016]: I1011 07:56:36.055946 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5c8b64649f-69xkr" Oct 11 07:56:36 crc kubenswrapper[5016]: I1011 07:56:36.057687 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Oct 11 07:56:36 crc kubenswrapper[5016]: I1011 07:56:36.059453 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Oct 11 07:56:36 crc kubenswrapper[5016]: I1011 07:56:36.076323 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c8b64649f-69xkr"] Oct 11 07:56:36 crc kubenswrapper[5016]: I1011 07:56:36.274495 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f964a68-6c63-4aba-bf48-6b8cdb1766f2-config\") pod \"neutron-5c8b64649f-69xkr\" (UID: \"1f964a68-6c63-4aba-bf48-6b8cdb1766f2\") " pod="openstack/neutron-5c8b64649f-69xkr" Oct 11 07:56:36 crc kubenswrapper[5016]: I1011 07:56:36.274554 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1f964a68-6c63-4aba-bf48-6b8cdb1766f2-httpd-config\") pod \"neutron-5c8b64649f-69xkr\" (UID: \"1f964a68-6c63-4aba-bf48-6b8cdb1766f2\") " pod="openstack/neutron-5c8b64649f-69xkr" Oct 11 07:56:36 crc kubenswrapper[5016]: I1011 07:56:36.274601 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zq8b\" (UniqueName: \"kubernetes.io/projected/1f964a68-6c63-4aba-bf48-6b8cdb1766f2-kube-api-access-7zq8b\") pod \"neutron-5c8b64649f-69xkr\" (UID: \"1f964a68-6c63-4aba-bf48-6b8cdb1766f2\") " pod="openstack/neutron-5c8b64649f-69xkr" Oct 11 07:56:36 crc kubenswrapper[5016]: I1011 07:56:36.275402 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f964a68-6c63-4aba-bf48-6b8cdb1766f2-internal-tls-certs\") pod \"neutron-5c8b64649f-69xkr\" (UID: \"1f964a68-6c63-4aba-bf48-6b8cdb1766f2\") " pod="openstack/neutron-5c8b64649f-69xkr" Oct 11 07:56:36 crc kubenswrapper[5016]: I1011 07:56:36.275443 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f964a68-6c63-4aba-bf48-6b8cdb1766f2-combined-ca-bundle\") pod \"neutron-5c8b64649f-69xkr\" (UID: \"1f964a68-6c63-4aba-bf48-6b8cdb1766f2\") " pod="openstack/neutron-5c8b64649f-69xkr" Oct 11 07:56:36 crc kubenswrapper[5016]: I1011 07:56:36.275474 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f964a68-6c63-4aba-bf48-6b8cdb1766f2-ovndb-tls-certs\") pod \"neutron-5c8b64649f-69xkr\" (UID: \"1f964a68-6c63-4aba-bf48-6b8cdb1766f2\") " pod="openstack/neutron-5c8b64649f-69xkr" Oct 11 07:56:36 crc kubenswrapper[5016]: I1011 07:56:36.275498 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f964a68-6c63-4aba-bf48-6b8cdb1766f2-public-tls-certs\") pod \"neutron-5c8b64649f-69xkr\" (UID: \"1f964a68-6c63-4aba-bf48-6b8cdb1766f2\") " pod="openstack/neutron-5c8b64649f-69xkr" Oct 11 07:56:36 crc kubenswrapper[5016]: I1011 07:56:36.373077 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59dc8c6b68-jd4p4" 
event={"ID":"4f6cfa91-27d5-4f55-ba9f-9d61367584ca","Type":"ContainerStarted","Data":"19278b670332c5311619044a69cb05747fcd4e340cdfbd1e589100cf0a1e7323"} Oct 11 07:56:36 crc kubenswrapper[5016]: I1011 07:56:36.376579 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f964a68-6c63-4aba-bf48-6b8cdb1766f2-config\") pod \"neutron-5c8b64649f-69xkr\" (UID: \"1f964a68-6c63-4aba-bf48-6b8cdb1766f2\") " pod="openstack/neutron-5c8b64649f-69xkr" Oct 11 07:56:36 crc kubenswrapper[5016]: I1011 07:56:36.376639 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1f964a68-6c63-4aba-bf48-6b8cdb1766f2-httpd-config\") pod \"neutron-5c8b64649f-69xkr\" (UID: \"1f964a68-6c63-4aba-bf48-6b8cdb1766f2\") " pod="openstack/neutron-5c8b64649f-69xkr" Oct 11 07:56:36 crc kubenswrapper[5016]: I1011 07:56:36.376691 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zq8b\" (UniqueName: \"kubernetes.io/projected/1f964a68-6c63-4aba-bf48-6b8cdb1766f2-kube-api-access-7zq8b\") pod \"neutron-5c8b64649f-69xkr\" (UID: \"1f964a68-6c63-4aba-bf48-6b8cdb1766f2\") " pod="openstack/neutron-5c8b64649f-69xkr" Oct 11 07:56:36 crc kubenswrapper[5016]: I1011 07:56:36.376761 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f964a68-6c63-4aba-bf48-6b8cdb1766f2-internal-tls-certs\") pod \"neutron-5c8b64649f-69xkr\" (UID: \"1f964a68-6c63-4aba-bf48-6b8cdb1766f2\") " pod="openstack/neutron-5c8b64649f-69xkr" Oct 11 07:56:36 crc kubenswrapper[5016]: I1011 07:56:36.376797 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f964a68-6c63-4aba-bf48-6b8cdb1766f2-combined-ca-bundle\") pod \"neutron-5c8b64649f-69xkr\" (UID: \"1f964a68-6c63-4aba-bf48-6b8cdb1766f2\") " pod="openstack/neutron-5c8b64649f-69xkr" Oct 11 07:56:36 crc kubenswrapper[5016]: I1011 07:56:36.376829 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f964a68-6c63-4aba-bf48-6b8cdb1766f2-ovndb-tls-certs\") pod \"neutron-5c8b64649f-69xkr\" (UID: \"1f964a68-6c63-4aba-bf48-6b8cdb1766f2\") " pod="openstack/neutron-5c8b64649f-69xkr" Oct 11 07:56:36 crc kubenswrapper[5016]: I1011 07:56:36.376855 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f964a68-6c63-4aba-bf48-6b8cdb1766f2-public-tls-certs\") pod \"neutron-5c8b64649f-69xkr\" (UID: \"1f964a68-6c63-4aba-bf48-6b8cdb1766f2\") " pod="openstack/neutron-5c8b64649f-69xkr" Oct 11 07:56:36 crc kubenswrapper[5016]: I1011 07:56:36.391444 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f964a68-6c63-4aba-bf48-6b8cdb1766f2-config\") pod \"neutron-5c8b64649f-69xkr\" (UID: \"1f964a68-6c63-4aba-bf48-6b8cdb1766f2\") " pod="openstack/neutron-5c8b64649f-69xkr" Oct 11 07:56:36 crc kubenswrapper[5016]: I1011 07:56:36.392596 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f964a68-6c63-4aba-bf48-6b8cdb1766f2-internal-tls-certs\") pod \"neutron-5c8b64649f-69xkr\" (UID: \"1f964a68-6c63-4aba-bf48-6b8cdb1766f2\") " pod="openstack/neutron-5c8b64649f-69xkr" Oct 
11 07:56:36 crc kubenswrapper[5016]: I1011 07:56:36.393116 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1f964a68-6c63-4aba-bf48-6b8cdb1766f2-httpd-config\") pod \"neutron-5c8b64649f-69xkr\" (UID: \"1f964a68-6c63-4aba-bf48-6b8cdb1766f2\") " pod="openstack/neutron-5c8b64649f-69xkr" Oct 11 07:56:36 crc kubenswrapper[5016]: I1011 07:56:36.393308 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f964a68-6c63-4aba-bf48-6b8cdb1766f2-ovndb-tls-certs\") pod \"neutron-5c8b64649f-69xkr\" (UID: \"1f964a68-6c63-4aba-bf48-6b8cdb1766f2\") " pod="openstack/neutron-5c8b64649f-69xkr" Oct 11 07:56:36 crc kubenswrapper[5016]: I1011 07:56:36.400686 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f964a68-6c63-4aba-bf48-6b8cdb1766f2-public-tls-certs\") pod \"neutron-5c8b64649f-69xkr\" (UID: \"1f964a68-6c63-4aba-bf48-6b8cdb1766f2\") " pod="openstack/neutron-5c8b64649f-69xkr" Oct 11 07:56:36 crc kubenswrapper[5016]: I1011 07:56:36.417377 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f964a68-6c63-4aba-bf48-6b8cdb1766f2-combined-ca-bundle\") pod \"neutron-5c8b64649f-69xkr\" (UID: \"1f964a68-6c63-4aba-bf48-6b8cdb1766f2\") " pod="openstack/neutron-5c8b64649f-69xkr" Oct 11 07:56:36 crc kubenswrapper[5016]: I1011 07:56:36.435427 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zq8b\" (UniqueName: \"kubernetes.io/projected/1f964a68-6c63-4aba-bf48-6b8cdb1766f2-kube-api-access-7zq8b\") pod \"neutron-5c8b64649f-69xkr\" (UID: \"1f964a68-6c63-4aba-bf48-6b8cdb1766f2\") " pod="openstack/neutron-5c8b64649f-69xkr" Oct 11 07:56:36 crc kubenswrapper[5016]: I1011 07:56:36.675624 5016 util.go:30] "No sandbox for pod can be found. 
Oct 11 07:56:37 crc kubenswrapper[5016]: I1011 07:56:37.383296 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c8b64649f-69xkr"]
Oct 11 07:56:37 crc kubenswrapper[5016]: W1011 07:56:37.394325 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f964a68_6c63_4aba_bf48_6b8cdb1766f2.slice/crio-4c99d2f5e2665133c479e0aeb937c5fe00876a6be4fed99b492db017484fb465 WatchSource:0}: Error finding container 4c99d2f5e2665133c479e0aeb937c5fe00876a6be4fed99b492db017484fb465: Status 404 returned error can't find the container with id 4c99d2f5e2665133c479e0aeb937c5fe00876a6be4fed99b492db017484fb465
Oct 11 07:56:37 crc kubenswrapper[5016]: I1011 07:56:37.395025 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59dc8c6b68-jd4p4" event={"ID":"4f6cfa91-27d5-4f55-ba9f-9d61367584ca","Type":"ContainerStarted","Data":"0ff4496cc41b82abf0c84e553e0ad482d2a2d59dcbf267abe776975eb46c7085"}
Oct 11 07:56:38 crc kubenswrapper[5016]: I1011 07:56:38.409113 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c8b64649f-69xkr" event={"ID":"1f964a68-6c63-4aba-bf48-6b8cdb1766f2","Type":"ContainerStarted","Data":"89e7db7d0884848727cdee631cf4caef9f9a161d60045e840cd0cd17dea40c4d"}
Oct 11 07:56:38 crc kubenswrapper[5016]: I1011 07:56:38.409798 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c8b64649f-69xkr" event={"ID":"1f964a68-6c63-4aba-bf48-6b8cdb1766f2","Type":"ContainerStarted","Data":"4c99d2f5e2665133c479e0aeb937c5fe00876a6be4fed99b492db017484fb465"}
Oct 11 07:56:38 crc kubenswrapper[5016]: I1011 07:56:38.410972 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f7d8dc7ff-cxklp" event={"ID":"94fe6cdc-7b3c-4643-bc8a-664a9c12590e","Type":"ContainerStarted","Data":"452437cc725c10d85b30318a843747598930ea2a045b388c91218e10420c0e81"}
Oct 11 07:56:38 crc kubenswrapper[5016]: I1011 07:56:38.411142 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f7d8dc7ff-cxklp"
Oct 11 07:56:38 crc kubenswrapper[5016]: I1011 07:56:38.412637 5016 generic.go:334] "Generic (PLEG): container finished" podID="823dcaff-824b-4313-93d8-91967861aeca" containerID="1e204051cd4f6c35d0f123c72f0c8b312ba7a9350b78d47e0dd1cee367fc8615" exitCode=0
Oct 11 07:56:38 crc kubenswrapper[5016]: I1011 07:56:38.412689 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nldfs" event={"ID":"823dcaff-824b-4313-93d8-91967861aeca","Type":"ContainerDied","Data":"1e204051cd4f6c35d0f123c72f0c8b312ba7a9350b78d47e0dd1cee367fc8615"}
Oct 11 07:56:38 crc kubenswrapper[5016]: I1011 07:56:38.412801 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-59dc8c6b68-jd4p4"
Oct 11 07:56:38 crc kubenswrapper[5016]: I1011 07:56:38.434012 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f7d8dc7ff-cxklp" podStartSLOduration=5.433989263 podStartE2EDuration="5.433989263s" podCreationTimestamp="2025-10-11 07:56:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:56:38.430294504 +0000 UTC m=+986.330750450" watchObservedRunningTime="2025-10-11 07:56:38.433989263 +0000 UTC m=+986.334445209"
Oct 11 07:56:38 crc kubenswrapper[5016]: I1011 07:56:38.470940 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-59dc8c6b68-jd4p4" podStartSLOduration=5.470922977 podStartE2EDuration="5.470922977s" podCreationTimestamp="2025-10-11 07:56:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:56:38.470732011 +0000 UTC m=+986.371187967" watchObservedRunningTime="2025-10-11 07:56:38.470922977 +0000 UTC m=+986.371378923"
Oct 11 07:56:41 crc kubenswrapper[5016]: I1011 07:56:41.665839 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-df49866-g5nkl"
Oct 11 07:56:41 crc kubenswrapper[5016]: I1011 07:56:41.666354 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-df49866-g5nkl"
Oct 11 07:56:41 crc kubenswrapper[5016]: I1011 07:56:41.676278 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-df49866-g5nkl" podUID="771aebc7-25b0-45ef-bbd4-ed6c367b998b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.141:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.141:8443: connect: connection refused"
Oct 11 07:56:41 crc kubenswrapper[5016]: I1011 07:56:41.700393 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-65987df486-lvrh6"
Oct 11 07:56:41 crc kubenswrapper[5016]: I1011 07:56:41.700462 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-65987df486-lvrh6"
Oct 11 07:56:41 crc kubenswrapper[5016]: I1011 07:56:41.702378 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-65987df486-lvrh6" podUID="e3e9db46-849a-4957-a6ff-5a05cb5c9744" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused"
Oct 11 07:56:42 crc kubenswrapper[5016]: I1011 07:56:42.255909 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nldfs"
Oct 11 07:56:42 crc kubenswrapper[5016]: I1011 07:56:42.393340 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-fernet-keys\") pod \"823dcaff-824b-4313-93d8-91967861aeca\" (UID: \"823dcaff-824b-4313-93d8-91967861aeca\") "
Oct 11 07:56:42 crc kubenswrapper[5016]: I1011 07:56:42.393746 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmh6v\" (UniqueName: \"kubernetes.io/projected/823dcaff-824b-4313-93d8-91967861aeca-kube-api-access-pmh6v\") pod \"823dcaff-824b-4313-93d8-91967861aeca\" (UID: \"823dcaff-824b-4313-93d8-91967861aeca\") "
Oct 11 07:56:42 crc kubenswrapper[5016]: I1011 07:56:42.393773 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-scripts\") pod \"823dcaff-824b-4313-93d8-91967861aeca\" (UID: \"823dcaff-824b-4313-93d8-91967861aeca\") "
Oct 11 07:56:42 crc kubenswrapper[5016]: I1011 07:56:42.393823 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-config-data\") pod \"823dcaff-824b-4313-93d8-91967861aeca\" (UID: \"823dcaff-824b-4313-93d8-91967861aeca\") "
Oct 11 07:56:42 crc kubenswrapper[5016]: I1011 07:56:42.393937 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-combined-ca-bundle\") pod \"823dcaff-824b-4313-93d8-91967861aeca\" (UID: \"823dcaff-824b-4313-93d8-91967861aeca\") "
Oct 11 07:56:42 crc kubenswrapper[5016]: I1011 07:56:42.393987 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-credential-keys\") pod \"823dcaff-824b-4313-93d8-91967861aeca\" (UID: \"823dcaff-824b-4313-93d8-91967861aeca\") "
Oct 11 07:56:42 crc kubenswrapper[5016]: I1011 07:56:42.400865 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-scripts" (OuterVolumeSpecName: "scripts") pod "823dcaff-824b-4313-93d8-91967861aeca" (UID: "823dcaff-824b-4313-93d8-91967861aeca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 07:56:42 crc kubenswrapper[5016]: I1011 07:56:42.404920 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/823dcaff-824b-4313-93d8-91967861aeca-kube-api-access-pmh6v" (OuterVolumeSpecName: "kube-api-access-pmh6v") pod "823dcaff-824b-4313-93d8-91967861aeca" (UID: "823dcaff-824b-4313-93d8-91967861aeca"). InnerVolumeSpecName "kube-api-access-pmh6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 07:56:42 crc kubenswrapper[5016]: I1011 07:56:42.405026 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "823dcaff-824b-4313-93d8-91967861aeca" (UID: "823dcaff-824b-4313-93d8-91967861aeca"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 07:56:42 crc kubenswrapper[5016]: I1011 07:56:42.407760 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "823dcaff-824b-4313-93d8-91967861aeca" (UID: "823dcaff-824b-4313-93d8-91967861aeca"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 07:56:42 crc kubenswrapper[5016]: I1011 07:56:42.425411 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-config-data" (OuterVolumeSpecName: "config-data") pod "823dcaff-824b-4313-93d8-91967861aeca" (UID: "823dcaff-824b-4313-93d8-91967861aeca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 07:56:42 crc kubenswrapper[5016]: I1011 07:56:42.433759 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "823dcaff-824b-4313-93d8-91967861aeca" (UID: "823dcaff-824b-4313-93d8-91967861aeca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 07:56:42 crc kubenswrapper[5016]: I1011 07:56:42.462446 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"353d22c0-bfdb-4599-a97c-9000eda08e3d","Type":"ContainerStarted","Data":"c68d485ed0a380fbd553c95237d0af36aaccebbef647ed078b59696595f05b16"}
Oct 11 07:56:42 crc kubenswrapper[5016]: I1011 07:56:42.464322 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nldfs" event={"ID":"823dcaff-824b-4313-93d8-91967861aeca","Type":"ContainerDied","Data":"8a03d852fe52acf4da283ddcec9437367f9bedc9ba7f1e0f33d29cebb24f1b6e"}
Oct 11 07:56:42 crc kubenswrapper[5016]: I1011 07:56:42.464351 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a03d852fe52acf4da283ddcec9437367f9bedc9ba7f1e0f33d29cebb24f1b6e"
Oct 11 07:56:42 crc kubenswrapper[5016]: I1011 07:56:42.464435 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nldfs"
Oct 11 07:56:42 crc kubenswrapper[5016]: I1011 07:56:42.466251 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w78ms" event={"ID":"b78f643d-d3c2-4cf1-8bb3-ee749e569273","Type":"ContainerStarted","Data":"1dd91bd59bc994e1bc4fb307fe0ff000420760431994b4742de58b81b151ecd6"}
Oct 11 07:56:42 crc kubenswrapper[5016]: I1011 07:56:42.474825 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c8b64649f-69xkr" event={"ID":"1f964a68-6c63-4aba-bf48-6b8cdb1766f2","Type":"ContainerStarted","Data":"7fc77cfc787f297645c27edcacac3201471880dcf537e9d6836692189cf8d500"}
Oct 11 07:56:42 crc kubenswrapper[5016]: I1011 07:56:42.475079 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5c8b64649f-69xkr"
Oct 11 07:56:42 crc kubenswrapper[5016]: I1011 07:56:42.485279 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-w78ms" podStartSLOduration=1.7599191969999999 podStartE2EDuration="50.485261782s" podCreationTimestamp="2025-10-11 07:55:52 +0000 UTC" firstStartedPulling="2025-10-11 07:55:53.381829918 +0000 UTC m=+941.282285874" lastFinishedPulling="2025-10-11 07:56:42.107172513 +0000 UTC m=+990.007628459" observedRunningTime="2025-10-11 07:56:42.47890084 +0000 UTC m=+990.379356786" watchObservedRunningTime="2025-10-11 07:56:42.485261782 +0000 UTC m=+990.385717738"
Oct 11 07:56:42 crc kubenswrapper[5016]: I1011 07:56:42.496419 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 11 07:56:42 crc kubenswrapper[5016]: I1011 07:56:42.496458 5016 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-credential-keys\") on node \"crc\" DevicePath \"\""
Oct 11 07:56:42 crc kubenswrapper[5016]: I1011 07:56:42.496469 5016 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-fernet-keys\") on node \"crc\" DevicePath \"\""
Oct 11 07:56:42 crc kubenswrapper[5016]: I1011 07:56:42.496480 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmh6v\" (UniqueName: \"kubernetes.io/projected/823dcaff-824b-4313-93d8-91967861aeca-kube-api-access-pmh6v\") on node \"crc\" DevicePath \"\""
Oct 11 07:56:42 crc kubenswrapper[5016]: I1011 07:56:42.496492 5016 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-scripts\") on node \"crc\" DevicePath \"\""
Oct 11 07:56:42 crc kubenswrapper[5016]: I1011 07:56:42.496503 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/823dcaff-824b-4313-93d8-91967861aeca-config-data\") on node \"crc\" DevicePath \"\""
Oct 11 07:56:42 crc kubenswrapper[5016]: I1011 07:56:42.504706 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5c8b64649f-69xkr" podStartSLOduration=6.504688354 podStartE2EDuration="6.504688354s" podCreationTimestamp="2025-10-11 07:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:56:42.500404669 +0000 UTC m=+990.400860615" watchObservedRunningTime="2025-10-11 07:56:42.504688354 +0000 UTC m=+990.405144300"
Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.531001 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-67cbf6496d-vrr6z"]
Oct 11 07:56:43 crc kubenswrapper[5016]: E1011 07:56:43.532164 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="823dcaff-824b-4313-93d8-91967861aeca" containerName="keystone-bootstrap"
Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.532349 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="823dcaff-824b-4313-93d8-91967861aeca" containerName="keystone-bootstrap"
Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.532616 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="823dcaff-824b-4313-93d8-91967861aeca" containerName="keystone-bootstrap"
Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.533339 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-67cbf6496d-vrr6z"
Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.541103 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-67cbf6496d-vrr6z"]
Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.544289 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.544489 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.544595 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.544825 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.544947 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.549101 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zk98n"
Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.717220 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5847f047-0407-44fd-9c84-3599fbaac974-config-data\") pod \"keystone-67cbf6496d-vrr6z\" (UID: \"5847f047-0407-44fd-9c84-3599fbaac974\") " pod="openstack/keystone-67cbf6496d-vrr6z"
Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.717949 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5847f047-0407-44fd-9c84-3599fbaac974-scripts\") pod \"keystone-67cbf6496d-vrr6z\" (UID: \"5847f047-0407-44fd-9c84-3599fbaac974\") " pod="openstack/keystone-67cbf6496d-vrr6z"
Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.718023 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5847f047-0407-44fd-9c84-3599fbaac974-combined-ca-bundle\") pod \"keystone-67cbf6496d-vrr6z\" (UID: \"5847f047-0407-44fd-9c84-3599fbaac974\") " pod="openstack/keystone-67cbf6496d-vrr6z"
Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.718174 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5847f047-0407-44fd-9c84-3599fbaac974-internal-tls-certs\") pod \"keystone-67cbf6496d-vrr6z\" (UID: \"5847f047-0407-44fd-9c84-3599fbaac974\") " pod="openstack/keystone-67cbf6496d-vrr6z"
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5847f047-0407-44fd-9c84-3599fbaac974-internal-tls-certs\") pod \"keystone-67cbf6496d-vrr6z\" (UID: \"5847f047-0407-44fd-9c84-3599fbaac974\") " pod="openstack/keystone-67cbf6496d-vrr6z" Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.718291 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2gmm\" (UniqueName: \"kubernetes.io/projected/5847f047-0407-44fd-9c84-3599fbaac974-kube-api-access-q2gmm\") pod \"keystone-67cbf6496d-vrr6z\" (UID: \"5847f047-0407-44fd-9c84-3599fbaac974\") " pod="openstack/keystone-67cbf6496d-vrr6z" Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.718344 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5847f047-0407-44fd-9c84-3599fbaac974-public-tls-certs\") pod \"keystone-67cbf6496d-vrr6z\" (UID: \"5847f047-0407-44fd-9c84-3599fbaac974\") " pod="openstack/keystone-67cbf6496d-vrr6z" Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.718442 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5847f047-0407-44fd-9c84-3599fbaac974-credential-keys\") pod \"keystone-67cbf6496d-vrr6z\" (UID: \"5847f047-0407-44fd-9c84-3599fbaac974\") " pod="openstack/keystone-67cbf6496d-vrr6z" Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.718503 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5847f047-0407-44fd-9c84-3599fbaac974-fernet-keys\") pod \"keystone-67cbf6496d-vrr6z\" (UID: \"5847f047-0407-44fd-9c84-3599fbaac974\") " pod="openstack/keystone-67cbf6496d-vrr6z" Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.820243 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5847f047-0407-44fd-9c84-3599fbaac974-combined-ca-bundle\") pod \"keystone-67cbf6496d-vrr6z\" (UID: \"5847f047-0407-44fd-9c84-3599fbaac974\") " pod="openstack/keystone-67cbf6496d-vrr6z" Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.820319 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5847f047-0407-44fd-9c84-3599fbaac974-internal-tls-certs\") pod \"keystone-67cbf6496d-vrr6z\" (UID: \"5847f047-0407-44fd-9c84-3599fbaac974\") " pod="openstack/keystone-67cbf6496d-vrr6z" Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.820375 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2gmm\" (UniqueName: \"kubernetes.io/projected/5847f047-0407-44fd-9c84-3599fbaac974-kube-api-access-q2gmm\") pod \"keystone-67cbf6496d-vrr6z\" (UID: \"5847f047-0407-44fd-9c84-3599fbaac974\") " pod="openstack/keystone-67cbf6496d-vrr6z" Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.820396 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5847f047-0407-44fd-9c84-3599fbaac974-public-tls-certs\") pod \"keystone-67cbf6496d-vrr6z\" (UID: \"5847f047-0407-44fd-9c84-3599fbaac974\") " pod="openstack/keystone-67cbf6496d-vrr6z" Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.820453 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5847f047-0407-44fd-9c84-3599fbaac974-credential-keys\") pod \"keystone-67cbf6496d-vrr6z\" (UID: \"5847f047-0407-44fd-9c84-3599fbaac974\") " pod="openstack/keystone-67cbf6496d-vrr6z" Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.820481 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5847f047-0407-44fd-9c84-3599fbaac974-fernet-keys\") pod \"keystone-67cbf6496d-vrr6z\" (UID: \"5847f047-0407-44fd-9c84-3599fbaac974\") " pod="openstack/keystone-67cbf6496d-vrr6z" Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.820538 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5847f047-0407-44fd-9c84-3599fbaac974-config-data\") pod \"keystone-67cbf6496d-vrr6z\" (UID: \"5847f047-0407-44fd-9c84-3599fbaac974\") " pod="openstack/keystone-67cbf6496d-vrr6z" Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.820557 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5847f047-0407-44fd-9c84-3599fbaac974-scripts\") pod \"keystone-67cbf6496d-vrr6z\" (UID: \"5847f047-0407-44fd-9c84-3599fbaac974\") " pod="openstack/keystone-67cbf6496d-vrr6z" Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.828749 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5847f047-0407-44fd-9c84-3599fbaac974-scripts\") pod \"keystone-67cbf6496d-vrr6z\" (UID: \"5847f047-0407-44fd-9c84-3599fbaac974\") " pod="openstack/keystone-67cbf6496d-vrr6z" Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.829189 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5847f047-0407-44fd-9c84-3599fbaac974-credential-keys\") pod \"keystone-67cbf6496d-vrr6z\" (UID: \"5847f047-0407-44fd-9c84-3599fbaac974\") " pod="openstack/keystone-67cbf6496d-vrr6z" Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.829249 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5847f047-0407-44fd-9c84-3599fbaac974-fernet-keys\") pod \"keystone-67cbf6496d-vrr6z\" (UID: \"5847f047-0407-44fd-9c84-3599fbaac974\") " pod="openstack/keystone-67cbf6496d-vrr6z" Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.829404 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5847f047-0407-44fd-9c84-3599fbaac974-combined-ca-bundle\") pod \"keystone-67cbf6496d-vrr6z\" (UID: \"5847f047-0407-44fd-9c84-3599fbaac974\") " pod="openstack/keystone-67cbf6496d-vrr6z" Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.830355 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5847f047-0407-44fd-9c84-3599fbaac974-config-data\") pod \"keystone-67cbf6496d-vrr6z\" (UID: \"5847f047-0407-44fd-9c84-3599fbaac974\") " pod="openstack/keystone-67cbf6496d-vrr6z" Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.834064 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5847f047-0407-44fd-9c84-3599fbaac974-internal-tls-certs\") pod \"keystone-67cbf6496d-vrr6z\" (UID: \"5847f047-0407-44fd-9c84-3599fbaac974\") " 
pod="openstack/keystone-67cbf6496d-vrr6z" Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.843878 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2gmm\" (UniqueName: \"kubernetes.io/projected/5847f047-0407-44fd-9c84-3599fbaac974-kube-api-access-q2gmm\") pod \"keystone-67cbf6496d-vrr6z\" (UID: \"5847f047-0407-44fd-9c84-3599fbaac974\") " pod="openstack/keystone-67cbf6496d-vrr6z" Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.852175 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5847f047-0407-44fd-9c84-3599fbaac974-public-tls-certs\") pod \"keystone-67cbf6496d-vrr6z\" (UID: \"5847f047-0407-44fd-9c84-3599fbaac974\") " pod="openstack/keystone-67cbf6496d-vrr6z" Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.862773 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-67cbf6496d-vrr6z" Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.863328 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f7d8dc7ff-cxklp" Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.977200 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b99bccc6c-868d2"] Oct 11 07:56:43 crc kubenswrapper[5016]: I1011 07:56:43.977466 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b99bccc6c-868d2" podUID="088fb899-923b-4215-9d9a-22bef9a6891b" containerName="dnsmasq-dns" containerID="cri-o://a0256d917e8a0866c9f26abee6b9dbb368b46db21791e06f3e766df0b183095b" gracePeriod=10 Oct 11 07:56:44 crc kubenswrapper[5016]: I1011 07:56:44.507105 5016 generic.go:334] "Generic (PLEG): container finished" podID="088fb899-923b-4215-9d9a-22bef9a6891b" containerID="a0256d917e8a0866c9f26abee6b9dbb368b46db21791e06f3e766df0b183095b" exitCode=0 Oct 11 07:56:44 crc kubenswrapper[5016]: I1011 07:56:44.507173 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b99bccc6c-868d2" event={"ID":"088fb899-923b-4215-9d9a-22bef9a6891b","Type":"ContainerDied","Data":"a0256d917e8a0866c9f26abee6b9dbb368b46db21791e06f3e766df0b183095b"} Oct 11 07:56:44 crc kubenswrapper[5016]: I1011 07:56:44.649014 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-67cbf6496d-vrr6z"] Oct 11 07:56:44 crc kubenswrapper[5016]: W1011 07:56:44.652868 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5847f047_0407_44fd_9c84_3599fbaac974.slice/crio-72ac8a369963b62d503ace3e3fb7a5b1da83dee12e555c228970a8b262e3d5a9 WatchSource:0}: Error finding container 72ac8a369963b62d503ace3e3fb7a5b1da83dee12e555c228970a8b262e3d5a9: Status 404 returned error can't find the container with id 72ac8a369963b62d503ace3e3fb7a5b1da83dee12e555c228970a8b262e3d5a9 Oct 11 07:56:44 crc kubenswrapper[5016]: I1011 07:56:44.773557 5016 util.go:48] "No ready sandbox for pod can be found. 
Oct 11 07:56:44 crc kubenswrapper[5016]: I1011 07:56:44.945481 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/088fb899-923b-4215-9d9a-22bef9a6891b-config\") pod \"088fb899-923b-4215-9d9a-22bef9a6891b\" (UID: \"088fb899-923b-4215-9d9a-22bef9a6891b\") "
Oct 11 07:56:44 crc kubenswrapper[5016]: I1011 07:56:44.945805 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/088fb899-923b-4215-9d9a-22bef9a6891b-ovsdbserver-sb\") pod \"088fb899-923b-4215-9d9a-22bef9a6891b\" (UID: \"088fb899-923b-4215-9d9a-22bef9a6891b\") "
Oct 11 07:56:44 crc kubenswrapper[5016]: I1011 07:56:44.945913 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hq75\" (UniqueName: \"kubernetes.io/projected/088fb899-923b-4215-9d9a-22bef9a6891b-kube-api-access-5hq75\") pod \"088fb899-923b-4215-9d9a-22bef9a6891b\" (UID: \"088fb899-923b-4215-9d9a-22bef9a6891b\") "
Oct 11 07:56:44 crc kubenswrapper[5016]: I1011 07:56:44.945946 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/088fb899-923b-4215-9d9a-22bef9a6891b-dns-svc\") pod \"088fb899-923b-4215-9d9a-22bef9a6891b\" (UID: \"088fb899-923b-4215-9d9a-22bef9a6891b\") "
Oct 11 07:56:44 crc kubenswrapper[5016]: I1011 07:56:44.945981 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/088fb899-923b-4215-9d9a-22bef9a6891b-ovsdbserver-nb\") pod \"088fb899-923b-4215-9d9a-22bef9a6891b\" (UID: \"088fb899-923b-4215-9d9a-22bef9a6891b\") "
Oct 11 07:56:44 crc kubenswrapper[5016]: I1011 07:56:44.954553 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/088fb899-923b-4215-9d9a-22bef9a6891b-kube-api-access-5hq75" (OuterVolumeSpecName: "kube-api-access-5hq75") pod "088fb899-923b-4215-9d9a-22bef9a6891b" (UID: "088fb899-923b-4215-9d9a-22bef9a6891b"). InnerVolumeSpecName "kube-api-access-5hq75". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 07:56:45 crc kubenswrapper[5016]: I1011 07:56:45.023050 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/088fb899-923b-4215-9d9a-22bef9a6891b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "088fb899-923b-4215-9d9a-22bef9a6891b" (UID: "088fb899-923b-4215-9d9a-22bef9a6891b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 07:56:45 crc kubenswrapper[5016]: I1011 07:56:45.031241 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/088fb899-923b-4215-9d9a-22bef9a6891b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "088fb899-923b-4215-9d9a-22bef9a6891b" (UID: "088fb899-923b-4215-9d9a-22bef9a6891b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 07:56:45 crc kubenswrapper[5016]: I1011 07:56:45.036356 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/088fb899-923b-4215-9d9a-22bef9a6891b-config" (OuterVolumeSpecName: "config") pod "088fb899-923b-4215-9d9a-22bef9a6891b" (UID: "088fb899-923b-4215-9d9a-22bef9a6891b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 07:56:45 crc kubenswrapper[5016]: I1011 07:56:45.048402 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/088fb899-923b-4215-9d9a-22bef9a6891b-config\") on node \"crc\" DevicePath \"\""
Oct 11 07:56:45 crc kubenswrapper[5016]: I1011 07:56:45.048445 5016 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/088fb899-923b-4215-9d9a-22bef9a6891b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 11 07:56:45 crc kubenswrapper[5016]: I1011 07:56:45.048461 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hq75\" (UniqueName: \"kubernetes.io/projected/088fb899-923b-4215-9d9a-22bef9a6891b-kube-api-access-5hq75\") on node \"crc\" DevicePath \"\""
Oct 11 07:56:45 crc kubenswrapper[5016]: I1011 07:56:45.048473 5016 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/088fb899-923b-4215-9d9a-22bef9a6891b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 11 07:56:45 crc kubenswrapper[5016]: I1011 07:56:45.077444 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/088fb899-923b-4215-9d9a-22bef9a6891b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "088fb899-923b-4215-9d9a-22bef9a6891b" (UID: "088fb899-923b-4215-9d9a-22bef9a6891b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 07:56:45 crc kubenswrapper[5016]: I1011 07:56:45.150685 5016 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/088fb899-923b-4215-9d9a-22bef9a6891b-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 11 07:56:45 crc kubenswrapper[5016]: I1011 07:56:45.516949 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-67cbf6496d-vrr6z" event={"ID":"5847f047-0407-44fd-9c84-3599fbaac974","Type":"ContainerStarted","Data":"8d76df5a089cceecc598e6e746f167ed80a543e1673d1049e2f5880d78df15c6"}
Oct 11 07:56:45 crc kubenswrapper[5016]: I1011 07:56:45.516993 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-67cbf6496d-vrr6z" event={"ID":"5847f047-0407-44fd-9c84-3599fbaac974","Type":"ContainerStarted","Data":"72ac8a369963b62d503ace3e3fb7a5b1da83dee12e555c228970a8b262e3d5a9"}
Oct 11 07:56:45 crc kubenswrapper[5016]: I1011 07:56:45.518103 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-67cbf6496d-vrr6z"
Oct 11 07:56:45 crc kubenswrapper[5016]: I1011 07:56:45.520188 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xqmfx" event={"ID":"8ebaa0ef-dce1-4ff4-a51c-69435ca86699","Type":"ContainerStarted","Data":"5b9677c74d7b7c186a7b06becf5a127e0868546ab5404944e5f5ead580de00be"}
Oct 11 07:56:45 crc kubenswrapper[5016]: I1011 07:56:45.524923 5016 generic.go:334] "Generic (PLEG): container finished" podID="b78f643d-d3c2-4cf1-8bb3-ee749e569273" containerID="1dd91bd59bc994e1bc4fb307fe0ff000420760431994b4742de58b81b151ecd6" exitCode=0
Oct 11 07:56:45 crc kubenswrapper[5016]: I1011 07:56:45.525039 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w78ms" event={"ID":"b78f643d-d3c2-4cf1-8bb3-ee749e569273","Type":"ContainerDied","Data":"1dd91bd59bc994e1bc4fb307fe0ff000420760431994b4742de58b81b151ecd6"}
Oct 11 07:56:45 crc kubenswrapper[5016]: I1011 07:56:45.541870 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b99bccc6c-868d2" event={"ID":"088fb899-923b-4215-9d9a-22bef9a6891b","Type":"ContainerDied","Data":"df4305c8158e526982c43dc85018d37c1359d884b26c79b18a3c975f9c0f953a"}
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b99bccc6c-868d2" event={"ID":"088fb899-923b-4215-9d9a-22bef9a6891b","Type":"ContainerDied","Data":"df4305c8158e526982c43dc85018d37c1359d884b26c79b18a3c975f9c0f953a"} Oct 11 07:56:45 crc kubenswrapper[5016]: I1011 07:56:45.541919 5016 scope.go:117] "RemoveContainer" containerID="a0256d917e8a0866c9f26abee6b9dbb368b46db21791e06f3e766df0b183095b" Oct 11 07:56:45 crc kubenswrapper[5016]: I1011 07:56:45.542043 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b99bccc6c-868d2" Oct 11 07:56:45 crc kubenswrapper[5016]: I1011 07:56:45.543612 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-67cbf6496d-vrr6z" podStartSLOduration=2.543591085 podStartE2EDuration="2.543591085s" podCreationTimestamp="2025-10-11 07:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:56:45.533276258 +0000 UTC m=+993.433732204" watchObservedRunningTime="2025-10-11 07:56:45.543591085 +0000 UTC m=+993.444047031" Oct 11 07:56:45 crc kubenswrapper[5016]: I1011 07:56:45.577712 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b99bccc6c-868d2"] Oct 11 07:56:45 crc kubenswrapper[5016]: I1011 07:56:45.584667 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b99bccc6c-868d2"] Oct 11 07:56:45 crc kubenswrapper[5016]: I1011 07:56:45.586458 5016 scope.go:117] "RemoveContainer" containerID="1b3c52a99d399be20d25ab80f71bcc70e4fcc5171cf6249741193c27bb362312" Oct 11 07:56:46 crc kubenswrapper[5016]: I1011 07:56:46.592612 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-xqmfx" podStartSLOduration=3.903873555 podStartE2EDuration="54.592593298s" podCreationTimestamp="2025-10-11 07:55:52 +0000 UTC" firstStartedPulling="2025-10-11 07:55:53.946705767 +0000 UTC m=+941.847161713" lastFinishedPulling="2025-10-11 07:56:44.63542551 +0000 UTC m=+992.535881456" observedRunningTime="2025-10-11 07:56:46.586683849 +0000 UTC m=+994.487139815" watchObservedRunningTime="2025-10-11 07:56:46.592593298 +0000 UTC m=+994.493049244" Oct 11 07:56:46 crc kubenswrapper[5016]: I1011 07:56:46.933136 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-w78ms" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.098005 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b78f643d-d3c2-4cf1-8bb3-ee749e569273-combined-ca-bundle\") pod \"b78f643d-d3c2-4cf1-8bb3-ee749e569273\" (UID: \"b78f643d-d3c2-4cf1-8bb3-ee749e569273\") " Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.098084 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b78f643d-d3c2-4cf1-8bb3-ee749e569273-config-data\") pod \"b78f643d-d3c2-4cf1-8bb3-ee749e569273\" (UID: \"b78f643d-d3c2-4cf1-8bb3-ee749e569273\") " Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.098223 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9n4m\" (UniqueName: \"kubernetes.io/projected/b78f643d-d3c2-4cf1-8bb3-ee749e569273-kube-api-access-f9n4m\") pod \"b78f643d-d3c2-4cf1-8bb3-ee749e569273\" (UID: \"b78f643d-d3c2-4cf1-8bb3-ee749e569273\") " Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.098278 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b78f643d-d3c2-4cf1-8bb3-ee749e569273-logs\") pod \"b78f643d-d3c2-4cf1-8bb3-ee749e569273\" (UID: \"b78f643d-d3c2-4cf1-8bb3-ee749e569273\") " Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.098319 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b78f643d-d3c2-4cf1-8bb3-ee749e569273-scripts\") pod \"b78f643d-d3c2-4cf1-8bb3-ee749e569273\" (UID: \"b78f643d-d3c2-4cf1-8bb3-ee749e569273\") " Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.099748 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b78f643d-d3c2-4cf1-8bb3-ee749e569273-logs" (OuterVolumeSpecName: "logs") pod "b78f643d-d3c2-4cf1-8bb3-ee749e569273" (UID: "b78f643d-d3c2-4cf1-8bb3-ee749e569273"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.103985 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b78f643d-d3c2-4cf1-8bb3-ee749e569273-scripts" (OuterVolumeSpecName: "scripts") pod "b78f643d-d3c2-4cf1-8bb3-ee749e569273" (UID: "b78f643d-d3c2-4cf1-8bb3-ee749e569273"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.106058 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b78f643d-d3c2-4cf1-8bb3-ee749e569273-kube-api-access-f9n4m" (OuterVolumeSpecName: "kube-api-access-f9n4m") pod "b78f643d-d3c2-4cf1-8bb3-ee749e569273" (UID: "b78f643d-d3c2-4cf1-8bb3-ee749e569273"). InnerVolumeSpecName "kube-api-access-f9n4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.128386 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b78f643d-d3c2-4cf1-8bb3-ee749e569273-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b78f643d-d3c2-4cf1-8bb3-ee749e569273" (UID: "b78f643d-d3c2-4cf1-8bb3-ee749e569273"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.134636 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b78f643d-d3c2-4cf1-8bb3-ee749e569273-config-data" (OuterVolumeSpecName: "config-data") pod "b78f643d-d3c2-4cf1-8bb3-ee749e569273" (UID: "b78f643d-d3c2-4cf1-8bb3-ee749e569273"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.145838 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="088fb899-923b-4215-9d9a-22bef9a6891b" path="/var/lib/kubelet/pods/088fb899-923b-4215-9d9a-22bef9a6891b/volumes" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.200767 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9n4m\" (UniqueName: \"kubernetes.io/projected/b78f643d-d3c2-4cf1-8bb3-ee749e569273-kube-api-access-f9n4m\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.200806 5016 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b78f643d-d3c2-4cf1-8bb3-ee749e569273-logs\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.200820 5016 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b78f643d-d3c2-4cf1-8bb3-ee749e569273-scripts\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.200832 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b78f643d-d3c2-4cf1-8bb3-ee749e569273-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.200844 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b78f643d-d3c2-4cf1-8bb3-ee749e569273-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.587880 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-92qrj" event={"ID":"d426ddd3-5eae-4816-a141-32b614642d39","Type":"ContainerStarted","Data":"d33bcf416d8a2b76afec069d36f43af003a1db2d506acc376e0faa41d66dc44a"} Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.589405 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-w78ms" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.589451 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w78ms" event={"ID":"b78f643d-d3c2-4cf1-8bb3-ee749e569273","Type":"ContainerDied","Data":"401a198e5c06d9f5f32c01c71240cb433806691e164ab67553204b262d03223d"} Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.589474 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="401a198e5c06d9f5f32c01c71240cb433806691e164ab67553204b262d03223d" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.606421 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-92qrj" podStartSLOduration=2.252115659 podStartE2EDuration="54.606401444s" podCreationTimestamp="2025-10-11 07:55:53 +0000 UTC" firstStartedPulling="2025-10-11 07:55:54.329446314 +0000 UTC m=+942.229902270" lastFinishedPulling="2025-10-11 07:56:46.683732109 +0000 UTC m=+994.584188055" observedRunningTime="2025-10-11 07:56:47.600052244 +0000 UTC m=+995.500508190" watchObservedRunningTime="2025-10-11 07:56:47.606401444 +0000 UTC m=+995.506857390" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.661848 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6695bc58f4-lkxqb"] Oct 11 07:56:47 crc kubenswrapper[5016]: E1011 07:56:47.662249 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="088fb899-923b-4215-9d9a-22bef9a6891b" containerName="init" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.662270 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="088fb899-923b-4215-9d9a-22bef9a6891b" containerName="init" Oct 11 07:56:47 crc kubenswrapper[5016]: E1011 07:56:47.662302 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="088fb899-923b-4215-9d9a-22bef9a6891b" containerName="dnsmasq-dns" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.662311 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="088fb899-923b-4215-9d9a-22bef9a6891b" containerName="dnsmasq-dns" Oct 11 07:56:47 crc kubenswrapper[5016]: E1011 07:56:47.662328 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b78f643d-d3c2-4cf1-8bb3-ee749e569273" containerName="placement-db-sync" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.662337 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="b78f643d-d3c2-4cf1-8bb3-ee749e569273" containerName="placement-db-sync" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.662543 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="b78f643d-d3c2-4cf1-8bb3-ee749e569273" containerName="placement-db-sync" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.662565 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="088fb899-923b-4215-9d9a-22bef9a6891b" containerName="dnsmasq-dns" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.666635 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6695bc58f4-lkxqb" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.670382 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.670759 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.672297 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6695bc58f4-lkxqb"] Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.672692 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-btndg" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.672759 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.672866 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.810411 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnxht\" (UniqueName: \"kubernetes.io/projected/c9b80c42-4ba9-4f2d-96d4-b17c97c1b272-kube-api-access-lnxht\") pod \"placement-6695bc58f4-lkxqb\" (UID: \"c9b80c42-4ba9-4f2d-96d4-b17c97c1b272\") " pod="openstack/placement-6695bc58f4-lkxqb" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.810485 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9b80c42-4ba9-4f2d-96d4-b17c97c1b272-logs\") pod \"placement-6695bc58f4-lkxqb\" (UID: \"c9b80c42-4ba9-4f2d-96d4-b17c97c1b272\") " pod="openstack/placement-6695bc58f4-lkxqb" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.810520 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9b80c42-4ba9-4f2d-96d4-b17c97c1b272-public-tls-certs\") pod \"placement-6695bc58f4-lkxqb\" (UID: \"c9b80c42-4ba9-4f2d-96d4-b17c97c1b272\") " pod="openstack/placement-6695bc58f4-lkxqb" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.810576 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9b80c42-4ba9-4f2d-96d4-b17c97c1b272-scripts\") pod \"placement-6695bc58f4-lkxqb\" (UID: \"c9b80c42-4ba9-4f2d-96d4-b17c97c1b272\") " pod="openstack/placement-6695bc58f4-lkxqb" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.810802 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9b80c42-4ba9-4f2d-96d4-b17c97c1b272-combined-ca-bundle\") pod \"placement-6695bc58f4-lkxqb\" (UID: \"c9b80c42-4ba9-4f2d-96d4-b17c97c1b272\") " pod="openstack/placement-6695bc58f4-lkxqb" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.810912 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9b80c42-4ba9-4f2d-96d4-b17c97c1b272-internal-tls-certs\") pod \"placement-6695bc58f4-lkxqb\" (UID: \"c9b80c42-4ba9-4f2d-96d4-b17c97c1b272\") " pod="openstack/placement-6695bc58f4-lkxqb" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.810975 
Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.810975 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9b80c42-4ba9-4f2d-96d4-b17c97c1b272-config-data\") pod \"placement-6695bc58f4-lkxqb\" (UID: \"c9b80c42-4ba9-4f2d-96d4-b17c97c1b272\") " pod="openstack/placement-6695bc58f4-lkxqb"
Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.912166 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9b80c42-4ba9-4f2d-96d4-b17c97c1b272-config-data\") pod \"placement-6695bc58f4-lkxqb\" (UID: \"c9b80c42-4ba9-4f2d-96d4-b17c97c1b272\") " pod="openstack/placement-6695bc58f4-lkxqb"
Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.912263 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnxht\" (UniqueName: \"kubernetes.io/projected/c9b80c42-4ba9-4f2d-96d4-b17c97c1b272-kube-api-access-lnxht\") pod \"placement-6695bc58f4-lkxqb\" (UID: \"c9b80c42-4ba9-4f2d-96d4-b17c97c1b272\") " pod="openstack/placement-6695bc58f4-lkxqb"
Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.912319 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9b80c42-4ba9-4f2d-96d4-b17c97c1b272-logs\") pod \"placement-6695bc58f4-lkxqb\" (UID: \"c9b80c42-4ba9-4f2d-96d4-b17c97c1b272\") " pod="openstack/placement-6695bc58f4-lkxqb"
Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.912353 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9b80c42-4ba9-4f2d-96d4-b17c97c1b272-public-tls-certs\") pod \"placement-6695bc58f4-lkxqb\" (UID: \"c9b80c42-4ba9-4f2d-96d4-b17c97c1b272\") " pod="openstack/placement-6695bc58f4-lkxqb"
Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.912459 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9b80c42-4ba9-4f2d-96d4-b17c97c1b272-scripts\") pod \"placement-6695bc58f4-lkxqb\" (UID: \"c9b80c42-4ba9-4f2d-96d4-b17c97c1b272\") " pod="openstack/placement-6695bc58f4-lkxqb"
Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.912503 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9b80c42-4ba9-4f2d-96d4-b17c97c1b272-combined-ca-bundle\") pod \"placement-6695bc58f4-lkxqb\" (UID: \"c9b80c42-4ba9-4f2d-96d4-b17c97c1b272\") " pod="openstack/placement-6695bc58f4-lkxqb"
Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.912551 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9b80c42-4ba9-4f2d-96d4-b17c97c1b272-internal-tls-certs\") pod \"placement-6695bc58f4-lkxqb\" (UID: \"c9b80c42-4ba9-4f2d-96d4-b17c97c1b272\") " pod="openstack/placement-6695bc58f4-lkxqb"
Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.912800 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9b80c42-4ba9-4f2d-96d4-b17c97c1b272-logs\") pod \"placement-6695bc58f4-lkxqb\" (UID: \"c9b80c42-4ba9-4f2d-96d4-b17c97c1b272\") " pod="openstack/placement-6695bc58f4-lkxqb"
\"kubernetes.io/secret/c9b80c42-4ba9-4f2d-96d4-b17c97c1b272-internal-tls-certs\") pod \"placement-6695bc58f4-lkxqb\" (UID: \"c9b80c42-4ba9-4f2d-96d4-b17c97c1b272\") " pod="openstack/placement-6695bc58f4-lkxqb" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.921618 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9b80c42-4ba9-4f2d-96d4-b17c97c1b272-config-data\") pod \"placement-6695bc58f4-lkxqb\" (UID: \"c9b80c42-4ba9-4f2d-96d4-b17c97c1b272\") " pod="openstack/placement-6695bc58f4-lkxqb" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.932173 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9b80c42-4ba9-4f2d-96d4-b17c97c1b272-public-tls-certs\") pod \"placement-6695bc58f4-lkxqb\" (UID: \"c9b80c42-4ba9-4f2d-96d4-b17c97c1b272\") " pod="openstack/placement-6695bc58f4-lkxqb" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.932498 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9b80c42-4ba9-4f2d-96d4-b17c97c1b272-combined-ca-bundle\") pod \"placement-6695bc58f4-lkxqb\" (UID: \"c9b80c42-4ba9-4f2d-96d4-b17c97c1b272\") " pod="openstack/placement-6695bc58f4-lkxqb" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.938666 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnxht\" (UniqueName: \"kubernetes.io/projected/c9b80c42-4ba9-4f2d-96d4-b17c97c1b272-kube-api-access-lnxht\") pod \"placement-6695bc58f4-lkxqb\" (UID: \"c9b80c42-4ba9-4f2d-96d4-b17c97c1b272\") " pod="openstack/placement-6695bc58f4-lkxqb" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.938962 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9b80c42-4ba9-4f2d-96d4-b17c97c1b272-scripts\") pod \"placement-6695bc58f4-lkxqb\" (UID: \"c9b80c42-4ba9-4f2d-96d4-b17c97c1b272\") " pod="openstack/placement-6695bc58f4-lkxqb" Oct 11 07:56:47 crc kubenswrapper[5016]: I1011 07:56:47.984105 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6695bc58f4-lkxqb" Oct 11 07:56:51 crc kubenswrapper[5016]: I1011 07:56:51.628815 5016 generic.go:334] "Generic (PLEG): container finished" podID="d426ddd3-5eae-4816-a141-32b614642d39" containerID="d33bcf416d8a2b76afec069d36f43af003a1db2d506acc376e0faa41d66dc44a" exitCode=0 Oct 11 07:56:51 crc kubenswrapper[5016]: I1011 07:56:51.629058 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-92qrj" event={"ID":"d426ddd3-5eae-4816-a141-32b614642d39","Type":"ContainerDied","Data":"d33bcf416d8a2b76afec069d36f43af003a1db2d506acc376e0faa41d66dc44a"} Oct 11 07:56:51 crc kubenswrapper[5016]: I1011 07:56:51.672302 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-df49866-g5nkl" podUID="771aebc7-25b0-45ef-bbd4-ed6c367b998b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.141:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.141:8443: connect: connection refused" Oct 11 07:56:51 crc kubenswrapper[5016]: I1011 07:56:51.701238 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-65987df486-lvrh6" podUID="e3e9db46-849a-4957-a6ff-5a05cb5c9744" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Oct 11 07:56:52 crc kubenswrapper[5016]: I1011 07:56:52.645088 5016 generic.go:334] "Generic (PLEG): container finished" podID="8ebaa0ef-dce1-4ff4-a51c-69435ca86699" containerID="5b9677c74d7b7c186a7b06becf5a127e0868546ab5404944e5f5ead580de00be" exitCode=0 Oct 11 07:56:52 crc kubenswrapper[5016]: I1011 07:56:52.645299 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xqmfx" event={"ID":"8ebaa0ef-dce1-4ff4-a51c-69435ca86699","Type":"ContainerDied","Data":"5b9677c74d7b7c186a7b06becf5a127e0868546ab5404944e5f5ead580de00be"} Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.045809 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-92qrj" Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.210013 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d426ddd3-5eae-4816-a141-32b614642d39-combined-ca-bundle\") pod \"d426ddd3-5eae-4816-a141-32b614642d39\" (UID: \"d426ddd3-5eae-4816-a141-32b614642d39\") " Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.210813 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d426ddd3-5eae-4816-a141-32b614642d39-db-sync-config-data\") pod \"d426ddd3-5eae-4816-a141-32b614642d39\" (UID: \"d426ddd3-5eae-4816-a141-32b614642d39\") " Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.211014 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfpf4\" (UniqueName: \"kubernetes.io/projected/d426ddd3-5eae-4816-a141-32b614642d39-kube-api-access-qfpf4\") pod \"d426ddd3-5eae-4816-a141-32b614642d39\" (UID: \"d426ddd3-5eae-4816-a141-32b614642d39\") " Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.217865 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d426ddd3-5eae-4816-a141-32b614642d39-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d426ddd3-5eae-4816-a141-32b614642d39" (UID: "d426ddd3-5eae-4816-a141-32b614642d39"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.218060 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d426ddd3-5eae-4816-a141-32b614642d39-kube-api-access-qfpf4" (OuterVolumeSpecName: "kube-api-access-qfpf4") pod "d426ddd3-5eae-4816-a141-32b614642d39" (UID: "d426ddd3-5eae-4816-a141-32b614642d39"). InnerVolumeSpecName "kube-api-access-qfpf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.243322 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d426ddd3-5eae-4816-a141-32b614642d39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d426ddd3-5eae-4816-a141-32b614642d39" (UID: "d426ddd3-5eae-4816-a141-32b614642d39"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.275085 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6695bc58f4-lkxqb"] Oct 11 07:56:53 crc kubenswrapper[5016]: W1011 07:56:53.279318 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9b80c42_4ba9_4f2d_96d4_b17c97c1b272.slice/crio-2190836cd68b97ee027db98dfbd1ce89224c2f8731678cc01a5afe8a8203db4c WatchSource:0}: Error finding container 2190836cd68b97ee027db98dfbd1ce89224c2f8731678cc01a5afe8a8203db4c: Status 404 returned error can't find the container with id 2190836cd68b97ee027db98dfbd1ce89224c2f8731678cc01a5afe8a8203db4c Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.313045 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d426ddd3-5eae-4816-a141-32b614642d39-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.313924 5016 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d426ddd3-5eae-4816-a141-32b614642d39-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.313947 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfpf4\" (UniqueName: \"kubernetes.io/projected/d426ddd3-5eae-4816-a141-32b614642d39-kube-api-access-qfpf4\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.655179 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-92qrj" event={"ID":"d426ddd3-5eae-4816-a141-32b614642d39","Type":"ContainerDied","Data":"e9522503a62238dafbf67087e083926c5291f566a421db4807f33c0d26e7f91c"} Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.655222 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9522503a62238dafbf67087e083926c5291f566a421db4807f33c0d26e7f91c" Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.655365 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-92qrj" Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.666110 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"353d22c0-bfdb-4599-a97c-9000eda08e3d","Type":"ContainerStarted","Data":"8b450fb5e54f4d2f541b3afc9309012e0910cc37c7c052b0f658aafdc07be10a"} Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.666312 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="353d22c0-bfdb-4599-a97c-9000eda08e3d" containerName="ceilometer-central-agent" containerID="cri-o://60d3ffb0056dcca585f6f4dbcc8321cc96da604e2a4c1ea74fbf53d9ad7cc790" gracePeriod=30 Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.666610 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.666960 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="353d22c0-bfdb-4599-a97c-9000eda08e3d" containerName="proxy-httpd" containerID="cri-o://8b450fb5e54f4d2f541b3afc9309012e0910cc37c7c052b0f658aafdc07be10a" gracePeriod=30 Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.667049 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="353d22c0-bfdb-4599-a97c-9000eda08e3d" containerName="sg-core" containerID="cri-o://c68d485ed0a380fbd553c95237d0af36aaccebbef647ed078b59696595f05b16" gracePeriod=30 Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.667097 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="353d22c0-bfdb-4599-a97c-9000eda08e3d" containerName="ceilometer-notification-agent" containerID="cri-o://0a94574dbcd6be5b2e4251e34cfd6305e007b08fe4785f06b109089774d9900a" gracePeriod=30 Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.672515 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6695bc58f4-lkxqb" event={"ID":"c9b80c42-4ba9-4f2d-96d4-b17c97c1b272","Type":"ContainerStarted","Data":"d1fb3e15f11b3b63efab3be50ef60d67f99b882f25ce53774e2d623d389cd4e0"} Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.672594 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6695bc58f4-lkxqb" event={"ID":"c9b80c42-4ba9-4f2d-96d4-b17c97c1b272","Type":"ContainerStarted","Data":"2190836cd68b97ee027db98dfbd1ce89224c2f8731678cc01a5afe8a8203db4c"} Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.714031 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.88270766 podStartE2EDuration="1m1.714011497s" podCreationTimestamp="2025-10-11 07:55:52 +0000 UTC" firstStartedPulling="2025-10-11 07:55:53.048069603 +0000 UTC m=+940.948525549" lastFinishedPulling="2025-10-11 07:56:52.87937344 +0000 UTC m=+1000.779829386" observedRunningTime="2025-10-11 07:56:53.703491385 +0000 UTC m=+1001.603947331" watchObservedRunningTime="2025-10-11 07:56:53.714011497 +0000 UTC m=+1001.614467463" Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.921894 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7d69bb669b-tzvqn"] Oct 11 07:56:53 crc kubenswrapper[5016]: E1011 07:56:53.922495 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d426ddd3-5eae-4816-a141-32b614642d39" containerName="barbican-db-sync" Oct 11 
Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.922511 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="d426ddd3-5eae-4816-a141-32b614642d39" containerName="barbican-db-sync"
Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.922712 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="d426ddd3-5eae-4816-a141-32b614642d39" containerName="barbican-db-sync"
Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.923556 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7d69bb669b-tzvqn"
Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.929211 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.929374 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.929531 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-gz92h"
Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.949783 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7d69bb669b-tzvqn"]
Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.979798 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-766567796c-wxh7x"]
Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.981245 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-766567796c-wxh7x"
Oct 11 07:56:53 crc kubenswrapper[5016]: I1011 07:56:53.983340 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.002397 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-766567796c-wxh7x"]
Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.031769 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91b7e91b-be39-4920-9227-a93b91338f97-logs\") pod \"barbican-keystone-listener-7d69bb669b-tzvqn\" (UID: \"91b7e91b-be39-4920-9227-a93b91338f97\") " pod="openstack/barbican-keystone-listener-7d69bb669b-tzvqn"
Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.031813 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlkjx\" (UniqueName: \"kubernetes.io/projected/91b7e91b-be39-4920-9227-a93b91338f97-kube-api-access-dlkjx\") pod \"barbican-keystone-listener-7d69bb669b-tzvqn\" (UID: \"91b7e91b-be39-4920-9227-a93b91338f97\") " pod="openstack/barbican-keystone-listener-7d69bb669b-tzvqn"
Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.031917 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91b7e91b-be39-4920-9227-a93b91338f97-config-data-custom\") pod \"barbican-keystone-listener-7d69bb669b-tzvqn\" (UID: \"91b7e91b-be39-4920-9227-a93b91338f97\") " pod="openstack/barbican-keystone-listener-7d69bb669b-tzvqn"
\"kubernetes.io/secret/91b7e91b-be39-4920-9227-a93b91338f97-config-data\") pod \"barbican-keystone-listener-7d69bb669b-tzvqn\" (UID: \"91b7e91b-be39-4920-9227-a93b91338f97\") " pod="openstack/barbican-keystone-listener-7d69bb669b-tzvqn" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.031999 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b7e91b-be39-4920-9227-a93b91338f97-combined-ca-bundle\") pod \"barbican-keystone-listener-7d69bb669b-tzvqn\" (UID: \"91b7e91b-be39-4920-9227-a93b91338f97\") " pod="openstack/barbican-keystone-listener-7d69bb669b-tzvqn" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.063417 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6dc8d75dbf-kl8c4"] Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.065110 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dc8d75dbf-kl8c4" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.092752 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6dc8d75dbf-kl8c4"] Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.136533 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6adb0ef-cb20-4a74-b79b-feb46936d4cd-config-data\") pod \"barbican-worker-766567796c-wxh7x\" (UID: \"f6adb0ef-cb20-4a74-b79b-feb46936d4cd\") " pod="openstack/barbican-worker-766567796c-wxh7x" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.136595 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6adb0ef-cb20-4a74-b79b-feb46936d4cd-logs\") pod \"barbican-worker-766567796c-wxh7x\" (UID: \"f6adb0ef-cb20-4a74-b79b-feb46936d4cd\") " pod="openstack/barbican-worker-766567796c-wxh7x" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.136665 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6adb0ef-cb20-4a74-b79b-feb46936d4cd-combined-ca-bundle\") pod \"barbican-worker-766567796c-wxh7x\" (UID: \"f6adb0ef-cb20-4a74-b79b-feb46936d4cd\") " pod="openstack/barbican-worker-766567796c-wxh7x" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.136698 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6adb0ef-cb20-4a74-b79b-feb46936d4cd-config-data-custom\") pod \"barbican-worker-766567796c-wxh7x\" (UID: \"f6adb0ef-cb20-4a74-b79b-feb46936d4cd\") " pod="openstack/barbican-worker-766567796c-wxh7x" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.136776 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91b7e91b-be39-4920-9227-a93b91338f97-config-data-custom\") pod \"barbican-keystone-listener-7d69bb669b-tzvqn\" (UID: \"91b7e91b-be39-4920-9227-a93b91338f97\") " pod="openstack/barbican-keystone-listener-7d69bb669b-tzvqn" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.136822 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91b7e91b-be39-4920-9227-a93b91338f97-config-data\") pod 
\"barbican-keystone-listener-7d69bb669b-tzvqn\" (UID: \"91b7e91b-be39-4920-9227-a93b91338f97\") " pod="openstack/barbican-keystone-listener-7d69bb669b-tzvqn" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.136887 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prkkc\" (UniqueName: \"kubernetes.io/projected/f6adb0ef-cb20-4a74-b79b-feb46936d4cd-kube-api-access-prkkc\") pod \"barbican-worker-766567796c-wxh7x\" (UID: \"f6adb0ef-cb20-4a74-b79b-feb46936d4cd\") " pod="openstack/barbican-worker-766567796c-wxh7x" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.136925 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b7e91b-be39-4920-9227-a93b91338f97-combined-ca-bundle\") pod \"barbican-keystone-listener-7d69bb669b-tzvqn\" (UID: \"91b7e91b-be39-4920-9227-a93b91338f97\") " pod="openstack/barbican-keystone-listener-7d69bb669b-tzvqn" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.136958 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91b7e91b-be39-4920-9227-a93b91338f97-logs\") pod \"barbican-keystone-listener-7d69bb669b-tzvqn\" (UID: \"91b7e91b-be39-4920-9227-a93b91338f97\") " pod="openstack/barbican-keystone-listener-7d69bb669b-tzvqn" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.136981 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlkjx\" (UniqueName: \"kubernetes.io/projected/91b7e91b-be39-4920-9227-a93b91338f97-kube-api-access-dlkjx\") pod \"barbican-keystone-listener-7d69bb669b-tzvqn\" (UID: \"91b7e91b-be39-4920-9227-a93b91338f97\") " pod="openstack/barbican-keystone-listener-7d69bb669b-tzvqn" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.138083 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91b7e91b-be39-4920-9227-a93b91338f97-logs\") pod \"barbican-keystone-listener-7d69bb669b-tzvqn\" (UID: \"91b7e91b-be39-4920-9227-a93b91338f97\") " pod="openstack/barbican-keystone-listener-7d69bb669b-tzvqn" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.140485 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-56fd798d48-f9v6h"] Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.146446 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91b7e91b-be39-4920-9227-a93b91338f97-combined-ca-bundle\") pod \"barbican-keystone-listener-7d69bb669b-tzvqn\" (UID: \"91b7e91b-be39-4920-9227-a93b91338f97\") " pod="openstack/barbican-keystone-listener-7d69bb669b-tzvqn" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.147266 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-56fd798d48-f9v6h" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.147364 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-56fd798d48-f9v6h"] Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.148348 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/91b7e91b-be39-4920-9227-a93b91338f97-config-data-custom\") pod \"barbican-keystone-listener-7d69bb669b-tzvqn\" (UID: \"91b7e91b-be39-4920-9227-a93b91338f97\") " pod="openstack/barbican-keystone-listener-7d69bb669b-tzvqn" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.148693 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.150835 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91b7e91b-be39-4920-9227-a93b91338f97-config-data\") pod \"barbican-keystone-listener-7d69bb669b-tzvqn\" (UID: \"91b7e91b-be39-4920-9227-a93b91338f97\") " pod="openstack/barbican-keystone-listener-7d69bb669b-tzvqn" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.163843 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlkjx\" (UniqueName: \"kubernetes.io/projected/91b7e91b-be39-4920-9227-a93b91338f97-kube-api-access-dlkjx\") pod \"barbican-keystone-listener-7d69bb669b-tzvqn\" (UID: \"91b7e91b-be39-4920-9227-a93b91338f97\") " pod="openstack/barbican-keystone-listener-7d69bb669b-tzvqn" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.241306 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6adb0ef-cb20-4a74-b79b-feb46936d4cd-config-data-custom\") pod \"barbican-worker-766567796c-wxh7x\" (UID: \"f6adb0ef-cb20-4a74-b79b-feb46936d4cd\") " pod="openstack/barbican-worker-766567796c-wxh7x" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.241936 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-config-data-custom\") pod \"barbican-api-56fd798d48-f9v6h\" (UID: \"a3f50dd7-7f92-4bc4-a99b-e96dd2929067\") " pod="openstack/barbican-api-56fd798d48-f9v6h" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.242013 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prkkc\" (UniqueName: \"kubernetes.io/projected/f6adb0ef-cb20-4a74-b79b-feb46936d4cd-kube-api-access-prkkc\") pod \"barbican-worker-766567796c-wxh7x\" (UID: \"f6adb0ef-cb20-4a74-b79b-feb46936d4cd\") " pod="openstack/barbican-worker-766567796c-wxh7x" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.242032 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tmwf\" (UniqueName: \"kubernetes.io/projected/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-kube-api-access-5tmwf\") pod \"barbican-api-56fd798d48-f9v6h\" (UID: \"a3f50dd7-7f92-4bc4-a99b-e96dd2929067\") " pod="openstack/barbican-api-56fd798d48-f9v6h" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.242074 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/043658b6-901d-4f17-9242-d7ab1c0cdfaf-ovsdbserver-nb\") pod \"dnsmasq-dns-6dc8d75dbf-kl8c4\" (UID: \"043658b6-901d-4f17-9242-d7ab1c0cdfaf\") " pod="openstack/dnsmasq-dns-6dc8d75dbf-kl8c4" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.242094 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-logs\") pod \"barbican-api-56fd798d48-f9v6h\" (UID: \"a3f50dd7-7f92-4bc4-a99b-e96dd2929067\") " pod="openstack/barbican-api-56fd798d48-f9v6h" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.242124 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/043658b6-901d-4f17-9242-d7ab1c0cdfaf-ovsdbserver-sb\") pod \"dnsmasq-dns-6dc8d75dbf-kl8c4\" (UID: \"043658b6-901d-4f17-9242-d7ab1c0cdfaf\") " pod="openstack/dnsmasq-dns-6dc8d75dbf-kl8c4" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.242145 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-config-data\") pod \"barbican-api-56fd798d48-f9v6h\" (UID: \"a3f50dd7-7f92-4bc4-a99b-e96dd2929067\") " pod="openstack/barbican-api-56fd798d48-f9v6h" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.242165 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-combined-ca-bundle\") pod \"barbican-api-56fd798d48-f9v6h\" (UID: \"a3f50dd7-7f92-4bc4-a99b-e96dd2929067\") " pod="openstack/barbican-api-56fd798d48-f9v6h" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.242183 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dkw2\" (UniqueName: \"kubernetes.io/projected/043658b6-901d-4f17-9242-d7ab1c0cdfaf-kube-api-access-6dkw2\") pod \"dnsmasq-dns-6dc8d75dbf-kl8c4\" (UID: \"043658b6-901d-4f17-9242-d7ab1c0cdfaf\") " pod="openstack/dnsmasq-dns-6dc8d75dbf-kl8c4" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.242206 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6adb0ef-cb20-4a74-b79b-feb46936d4cd-config-data\") pod \"barbican-worker-766567796c-wxh7x\" (UID: \"f6adb0ef-cb20-4a74-b79b-feb46936d4cd\") " pod="openstack/barbican-worker-766567796c-wxh7x" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.242223 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6adb0ef-cb20-4a74-b79b-feb46936d4cd-logs\") pod \"barbican-worker-766567796c-wxh7x\" (UID: \"f6adb0ef-cb20-4a74-b79b-feb46936d4cd\") " pod="openstack/barbican-worker-766567796c-wxh7x" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.242241 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/043658b6-901d-4f17-9242-d7ab1c0cdfaf-dns-svc\") pod \"dnsmasq-dns-6dc8d75dbf-kl8c4\" (UID: \"043658b6-901d-4f17-9242-d7ab1c0cdfaf\") " pod="openstack/dnsmasq-dns-6dc8d75dbf-kl8c4" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.242281 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6adb0ef-cb20-4a74-b79b-feb46936d4cd-combined-ca-bundle\") pod \"barbican-worker-766567796c-wxh7x\" (UID: \"f6adb0ef-cb20-4a74-b79b-feb46936d4cd\") " pod="openstack/barbican-worker-766567796c-wxh7x" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.242299 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/043658b6-901d-4f17-9242-d7ab1c0cdfaf-config\") pod \"dnsmasq-dns-6dc8d75dbf-kl8c4\" (UID: \"043658b6-901d-4f17-9242-d7ab1c0cdfaf\") " pod="openstack/dnsmasq-dns-6dc8d75dbf-kl8c4" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.242644 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6adb0ef-cb20-4a74-b79b-feb46936d4cd-logs\") pod \"barbican-worker-766567796c-wxh7x\" (UID: \"f6adb0ef-cb20-4a74-b79b-feb46936d4cd\") " pod="openstack/barbican-worker-766567796c-wxh7x" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.248605 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6adb0ef-cb20-4a74-b79b-feb46936d4cd-config-data\") pod \"barbican-worker-766567796c-wxh7x\" (UID: \"f6adb0ef-cb20-4a74-b79b-feb46936d4cd\") " pod="openstack/barbican-worker-766567796c-wxh7x" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.249306 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f6adb0ef-cb20-4a74-b79b-feb46936d4cd-config-data-custom\") pod \"barbican-worker-766567796c-wxh7x\" (UID: \"f6adb0ef-cb20-4a74-b79b-feb46936d4cd\") " pod="openstack/barbican-worker-766567796c-wxh7x" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.249687 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6adb0ef-cb20-4a74-b79b-feb46936d4cd-combined-ca-bundle\") pod \"barbican-worker-766567796c-wxh7x\" (UID: \"f6adb0ef-cb20-4a74-b79b-feb46936d4cd\") " pod="openstack/barbican-worker-766567796c-wxh7x" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.258077 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-xqmfx" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.264071 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7d69bb669b-tzvqn" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.269315 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prkkc\" (UniqueName: \"kubernetes.io/projected/f6adb0ef-cb20-4a74-b79b-feb46936d4cd-kube-api-access-prkkc\") pod \"barbican-worker-766567796c-wxh7x\" (UID: \"f6adb0ef-cb20-4a74-b79b-feb46936d4cd\") " pod="openstack/barbican-worker-766567796c-wxh7x" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.333629 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-766567796c-wxh7x" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.344076 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7p95\" (UniqueName: \"kubernetes.io/projected/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-kube-api-access-k7p95\") pod \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\" (UID: \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\") " Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.344162 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-etc-machine-id\") pod \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\" (UID: \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\") " Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.344254 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-scripts\") pod \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\" (UID: \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\") " Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.344312 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8ebaa0ef-dce1-4ff4-a51c-69435ca86699" (UID: "8ebaa0ef-dce1-4ff4-a51c-69435ca86699"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.344425 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-combined-ca-bundle\") pod \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\" (UID: \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\") " Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.344571 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-db-sync-config-data\") pod \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\" (UID: \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\") " Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.345165 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-config-data\") pod \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\" (UID: \"8ebaa0ef-dce1-4ff4-a51c-69435ca86699\") " Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.345811 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tmwf\" (UniqueName: \"kubernetes.io/projected/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-kube-api-access-5tmwf\") pod \"barbican-api-56fd798d48-f9v6h\" (UID: \"a3f50dd7-7f92-4bc4-a99b-e96dd2929067\") " pod="openstack/barbican-api-56fd798d48-f9v6h" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.345856 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/043658b6-901d-4f17-9242-d7ab1c0cdfaf-ovsdbserver-nb\") pod \"dnsmasq-dns-6dc8d75dbf-kl8c4\" (UID: \"043658b6-901d-4f17-9242-d7ab1c0cdfaf\") " pod="openstack/dnsmasq-dns-6dc8d75dbf-kl8c4" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.345897 5016 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-logs\") pod \"barbican-api-56fd798d48-f9v6h\" (UID: \"a3f50dd7-7f92-4bc4-a99b-e96dd2929067\") " pod="openstack/barbican-api-56fd798d48-f9v6h" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.345925 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/043658b6-901d-4f17-9242-d7ab1c0cdfaf-ovsdbserver-sb\") pod \"dnsmasq-dns-6dc8d75dbf-kl8c4\" (UID: \"043658b6-901d-4f17-9242-d7ab1c0cdfaf\") " pod="openstack/dnsmasq-dns-6dc8d75dbf-kl8c4" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.345964 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-config-data\") pod \"barbican-api-56fd798d48-f9v6h\" (UID: \"a3f50dd7-7f92-4bc4-a99b-e96dd2929067\") " pod="openstack/barbican-api-56fd798d48-f9v6h" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.345985 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-combined-ca-bundle\") pod \"barbican-api-56fd798d48-f9v6h\" (UID: \"a3f50dd7-7f92-4bc4-a99b-e96dd2929067\") " pod="openstack/barbican-api-56fd798d48-f9v6h" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.346004 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dkw2\" (UniqueName: \"kubernetes.io/projected/043658b6-901d-4f17-9242-d7ab1c0cdfaf-kube-api-access-6dkw2\") pod \"dnsmasq-dns-6dc8d75dbf-kl8c4\" (UID: \"043658b6-901d-4f17-9242-d7ab1c0cdfaf\") " pod="openstack/dnsmasq-dns-6dc8d75dbf-kl8c4" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.346050 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/043658b6-901d-4f17-9242-d7ab1c0cdfaf-dns-svc\") pod \"dnsmasq-dns-6dc8d75dbf-kl8c4\" (UID: \"043658b6-901d-4f17-9242-d7ab1c0cdfaf\") " pod="openstack/dnsmasq-dns-6dc8d75dbf-kl8c4" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.346090 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/043658b6-901d-4f17-9242-d7ab1c0cdfaf-config\") pod \"dnsmasq-dns-6dc8d75dbf-kl8c4\" (UID: \"043658b6-901d-4f17-9242-d7ab1c0cdfaf\") " pod="openstack/dnsmasq-dns-6dc8d75dbf-kl8c4" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.346170 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-config-data-custom\") pod \"barbican-api-56fd798d48-f9v6h\" (UID: \"a3f50dd7-7f92-4bc4-a99b-e96dd2929067\") " pod="openstack/barbican-api-56fd798d48-f9v6h" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.346254 5016 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.347669 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/043658b6-901d-4f17-9242-d7ab1c0cdfaf-ovsdbserver-nb\") pod \"dnsmasq-dns-6dc8d75dbf-kl8c4\" (UID: 
\"043658b6-901d-4f17-9242-d7ab1c0cdfaf\") " pod="openstack/dnsmasq-dns-6dc8d75dbf-kl8c4" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.348012 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/043658b6-901d-4f17-9242-d7ab1c0cdfaf-dns-svc\") pod \"dnsmasq-dns-6dc8d75dbf-kl8c4\" (UID: \"043658b6-901d-4f17-9242-d7ab1c0cdfaf\") " pod="openstack/dnsmasq-dns-6dc8d75dbf-kl8c4" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.348223 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-logs\") pod \"barbican-api-56fd798d48-f9v6h\" (UID: \"a3f50dd7-7f92-4bc4-a99b-e96dd2929067\") " pod="openstack/barbican-api-56fd798d48-f9v6h" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.349085 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/043658b6-901d-4f17-9242-d7ab1c0cdfaf-config\") pod \"dnsmasq-dns-6dc8d75dbf-kl8c4\" (UID: \"043658b6-901d-4f17-9242-d7ab1c0cdfaf\") " pod="openstack/dnsmasq-dns-6dc8d75dbf-kl8c4" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.349110 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/043658b6-901d-4f17-9242-d7ab1c0cdfaf-ovsdbserver-sb\") pod \"dnsmasq-dns-6dc8d75dbf-kl8c4\" (UID: \"043658b6-901d-4f17-9242-d7ab1c0cdfaf\") " pod="openstack/dnsmasq-dns-6dc8d75dbf-kl8c4" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.351775 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-kube-api-access-k7p95" (OuterVolumeSpecName: "kube-api-access-k7p95") pod "8ebaa0ef-dce1-4ff4-a51c-69435ca86699" (UID: "8ebaa0ef-dce1-4ff4-a51c-69435ca86699"). InnerVolumeSpecName "kube-api-access-k7p95". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.352563 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-combined-ca-bundle\") pod \"barbican-api-56fd798d48-f9v6h\" (UID: \"a3f50dd7-7f92-4bc4-a99b-e96dd2929067\") " pod="openstack/barbican-api-56fd798d48-f9v6h" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.353474 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-config-data-custom\") pod \"barbican-api-56fd798d48-f9v6h\" (UID: \"a3f50dd7-7f92-4bc4-a99b-e96dd2929067\") " pod="openstack/barbican-api-56fd798d48-f9v6h" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.353943 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8ebaa0ef-dce1-4ff4-a51c-69435ca86699" (UID: "8ebaa0ef-dce1-4ff4-a51c-69435ca86699"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.354613 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-scripts" (OuterVolumeSpecName: "scripts") pod "8ebaa0ef-dce1-4ff4-a51c-69435ca86699" (UID: "8ebaa0ef-dce1-4ff4-a51c-69435ca86699"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.363490 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dkw2\" (UniqueName: \"kubernetes.io/projected/043658b6-901d-4f17-9242-d7ab1c0cdfaf-kube-api-access-6dkw2\") pod \"dnsmasq-dns-6dc8d75dbf-kl8c4\" (UID: \"043658b6-901d-4f17-9242-d7ab1c0cdfaf\") " pod="openstack/dnsmasq-dns-6dc8d75dbf-kl8c4" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.366428 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-config-data\") pod \"barbican-api-56fd798d48-f9v6h\" (UID: \"a3f50dd7-7f92-4bc4-a99b-e96dd2929067\") " pod="openstack/barbican-api-56fd798d48-f9v6h" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.367217 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tmwf\" (UniqueName: \"kubernetes.io/projected/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-kube-api-access-5tmwf\") pod \"barbican-api-56fd798d48-f9v6h\" (UID: \"a3f50dd7-7f92-4bc4-a99b-e96dd2929067\") " pod="openstack/barbican-api-56fd798d48-f9v6h" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.383035 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ebaa0ef-dce1-4ff4-a51c-69435ca86699" (UID: "8ebaa0ef-dce1-4ff4-a51c-69435ca86699"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.406058 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6dc8d75dbf-kl8c4" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.410851 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-config-data" (OuterVolumeSpecName: "config-data") pod "8ebaa0ef-dce1-4ff4-a51c-69435ca86699" (UID: "8ebaa0ef-dce1-4ff4-a51c-69435ca86699"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.451526 5016 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-scripts\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.451561 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.451572 5016 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.451581 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.451592 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7p95\" (UniqueName: \"kubernetes.io/projected/8ebaa0ef-dce1-4ff4-a51c-69435ca86699-kube-api-access-k7p95\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.567647 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-56fd798d48-f9v6h" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.689454 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6695bc58f4-lkxqb" event={"ID":"c9b80c42-4ba9-4f2d-96d4-b17c97c1b272","Type":"ContainerStarted","Data":"129a658b61bd199657f728f18e362debdf86f927bbce9d66359f8e2c726c8178"} Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.689874 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6695bc58f4-lkxqb" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.690027 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6695bc58f4-lkxqb" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.696854 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-xqmfx" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.696894 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xqmfx" event={"ID":"8ebaa0ef-dce1-4ff4-a51c-69435ca86699","Type":"ContainerDied","Data":"0c6780c75657dfff6550f3d876c78c64357afd709f394e482ca5615edb8cf69c"} Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.696934 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c6780c75657dfff6550f3d876c78c64357afd709f394e482ca5615edb8cf69c" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.699799 5016 generic.go:334] "Generic (PLEG): container finished" podID="353d22c0-bfdb-4599-a97c-9000eda08e3d" containerID="8b450fb5e54f4d2f541b3afc9309012e0910cc37c7c052b0f658aafdc07be10a" exitCode=0 Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.699828 5016 generic.go:334] "Generic (PLEG): container finished" podID="353d22c0-bfdb-4599-a97c-9000eda08e3d" containerID="c68d485ed0a380fbd553c95237d0af36aaccebbef647ed078b59696595f05b16" exitCode=2 Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.699836 5016 generic.go:334] "Generic (PLEG): container finished" podID="353d22c0-bfdb-4599-a97c-9000eda08e3d" containerID="60d3ffb0056dcca585f6f4dbcc8321cc96da604e2a4c1ea74fbf53d9ad7cc790" exitCode=0 Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.699859 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"353d22c0-bfdb-4599-a97c-9000eda08e3d","Type":"ContainerDied","Data":"8b450fb5e54f4d2f541b3afc9309012e0910cc37c7c052b0f658aafdc07be10a"} Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.699883 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"353d22c0-bfdb-4599-a97c-9000eda08e3d","Type":"ContainerDied","Data":"c68d485ed0a380fbd553c95237d0af36aaccebbef647ed078b59696595f05b16"} Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.699893 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"353d22c0-bfdb-4599-a97c-9000eda08e3d","Type":"ContainerDied","Data":"60d3ffb0056dcca585f6f4dbcc8321cc96da604e2a4c1ea74fbf53d9ad7cc790"} Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.722464 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6695bc58f4-lkxqb" podStartSLOduration=7.72244074 podStartE2EDuration="7.72244074s" podCreationTimestamp="2025-10-11 07:56:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:56:54.711169007 +0000 UTC m=+1002.611624953" watchObservedRunningTime="2025-10-11 07:56:54.72244074 +0000 UTC m=+1002.622896686" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.754537 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7d69bb669b-tzvqn"] Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.892832 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-766567796c-wxh7x"] Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.929352 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 11 07:56:54 crc kubenswrapper[5016]: E1011 07:56:54.929968 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ebaa0ef-dce1-4ff4-a51c-69435ca86699" containerName="cinder-db-sync" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 
07:56:54.929988 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ebaa0ef-dce1-4ff4-a51c-69435ca86699" containerName="cinder-db-sync" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.930319 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ebaa0ef-dce1-4ff4-a51c-69435ca86699" containerName="cinder-db-sync" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.931688 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.935330 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.935888 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-b2hmf" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.936251 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.938806 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 11 07:56:54 crc kubenswrapper[5016]: I1011 07:56:54.957108 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:54.998770 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6dc8d75dbf-kl8c4"] Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.041752 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6dc8d75dbf-kl8c4"] Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.059313 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85494b87f-4xhlv"] Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.062632 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85494b87f-4xhlv" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.064846 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-scripts\") pod \"cinder-scheduler-0\" (UID: \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\") " pod="openstack/cinder-scheduler-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.064898 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\") " pod="openstack/cinder-scheduler-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.064961 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\") " pod="openstack/cinder-scheduler-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.065000 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\") " pod="openstack/cinder-scheduler-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.065024 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72xpm\" (UniqueName: \"kubernetes.io/projected/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-kube-api-access-72xpm\") pod \"cinder-scheduler-0\" (UID: \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\") " pod="openstack/cinder-scheduler-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.065088 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-config-data\") pod \"cinder-scheduler-0\" (UID: \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\") " pod="openstack/cinder-scheduler-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.089598 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85494b87f-4xhlv"] Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.121719 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-56fd798d48-f9v6h"] Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.158255 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.160115 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.161988 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.168889 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3384fa61-3001-4106-ac87-67d3e3ca0513-dns-svc\") pod \"dnsmasq-dns-85494b87f-4xhlv\" (UID: \"3384fa61-3001-4106-ac87-67d3e3ca0513\") " pod="openstack/dnsmasq-dns-85494b87f-4xhlv" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.168982 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\") " pod="openstack/cinder-scheduler-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.169026 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3384fa61-3001-4106-ac87-67d3e3ca0513-config\") pod \"dnsmasq-dns-85494b87f-4xhlv\" (UID: \"3384fa61-3001-4106-ac87-67d3e3ca0513\") " pod="openstack/dnsmasq-dns-85494b87f-4xhlv" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.169055 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\") " pod="openstack/cinder-scheduler-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.169078 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72xpm\" (UniqueName: \"kubernetes.io/projected/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-kube-api-access-72xpm\") pod \"cinder-scheduler-0\" (UID: \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\") " pod="openstack/cinder-scheduler-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.169116 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3384fa61-3001-4106-ac87-67d3e3ca0513-ovsdbserver-sb\") pod \"dnsmasq-dns-85494b87f-4xhlv\" (UID: \"3384fa61-3001-4106-ac87-67d3e3ca0513\") " pod="openstack/dnsmasq-dns-85494b87f-4xhlv" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.169168 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-config-data\") pod \"cinder-scheduler-0\" (UID: \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\") " pod="openstack/cinder-scheduler-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.169199 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3384fa61-3001-4106-ac87-67d3e3ca0513-ovsdbserver-nb\") pod \"dnsmasq-dns-85494b87f-4xhlv\" (UID: \"3384fa61-3001-4106-ac87-67d3e3ca0513\") " pod="openstack/dnsmasq-dns-85494b87f-4xhlv" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.169224 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-scripts\") pod 
\"cinder-scheduler-0\" (UID: \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\") " pod="openstack/cinder-scheduler-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.169241 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x778h\" (UniqueName: \"kubernetes.io/projected/3384fa61-3001-4106-ac87-67d3e3ca0513-kube-api-access-x778h\") pod \"dnsmasq-dns-85494b87f-4xhlv\" (UID: \"3384fa61-3001-4106-ac87-67d3e3ca0513\") " pod="openstack/dnsmasq-dns-85494b87f-4xhlv" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.169436 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\") " pod="openstack/cinder-scheduler-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.169596 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\") " pod="openstack/cinder-scheduler-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.174180 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.176663 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-config-data\") pod \"cinder-scheduler-0\" (UID: \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\") " pod="openstack/cinder-scheduler-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.177936 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-scripts\") pod \"cinder-scheduler-0\" (UID: \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\") " pod="openstack/cinder-scheduler-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.178495 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\") " pod="openstack/cinder-scheduler-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.178821 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\") " pod="openstack/cinder-scheduler-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.193873 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72xpm\" (UniqueName: \"kubernetes.io/projected/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-kube-api-access-72xpm\") pod \"cinder-scheduler-0\" (UID: \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\") " pod="openstack/cinder-scheduler-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.262078 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.271520 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d7a0713-78e8-4813-9516-f848d69a5f21-scripts\") pod \"cinder-api-0\" (UID: \"5d7a0713-78e8-4813-9516-f848d69a5f21\") " pod="openstack/cinder-api-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.271568 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3384fa61-3001-4106-ac87-67d3e3ca0513-config\") pod \"dnsmasq-dns-85494b87f-4xhlv\" (UID: \"3384fa61-3001-4106-ac87-67d3e3ca0513\") " pod="openstack/dnsmasq-dns-85494b87f-4xhlv" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.271593 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d7a0713-78e8-4813-9516-f848d69a5f21-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5d7a0713-78e8-4813-9516-f848d69a5f21\") " pod="openstack/cinder-api-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.271611 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7a0713-78e8-4813-9516-f848d69a5f21-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5d7a0713-78e8-4813-9516-f848d69a5f21\") " pod="openstack/cinder-api-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.271627 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n4pp\" (UniqueName: \"kubernetes.io/projected/5d7a0713-78e8-4813-9516-f848d69a5f21-kube-api-access-7n4pp\") pod \"cinder-api-0\" (UID: \"5d7a0713-78e8-4813-9516-f848d69a5f21\") " pod="openstack/cinder-api-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.271646 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3384fa61-3001-4106-ac87-67d3e3ca0513-ovsdbserver-sb\") pod \"dnsmasq-dns-85494b87f-4xhlv\" (UID: \"3384fa61-3001-4106-ac87-67d3e3ca0513\") " pod="openstack/dnsmasq-dns-85494b87f-4xhlv" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.271708 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3384fa61-3001-4106-ac87-67d3e3ca0513-ovsdbserver-nb\") pod \"dnsmasq-dns-85494b87f-4xhlv\" (UID: \"3384fa61-3001-4106-ac87-67d3e3ca0513\") " pod="openstack/dnsmasq-dns-85494b87f-4xhlv" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.271729 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d7a0713-78e8-4813-9516-f848d69a5f21-config-data-custom\") pod \"cinder-api-0\" (UID: \"5d7a0713-78e8-4813-9516-f848d69a5f21\") " pod="openstack/cinder-api-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.271746 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x778h\" (UniqueName: \"kubernetes.io/projected/3384fa61-3001-4106-ac87-67d3e3ca0513-kube-api-access-x778h\") pod \"dnsmasq-dns-85494b87f-4xhlv\" (UID: \"3384fa61-3001-4106-ac87-67d3e3ca0513\") " pod="openstack/dnsmasq-dns-85494b87f-4xhlv" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 
07:56:55.271792 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7a0713-78e8-4813-9516-f848d69a5f21-config-data\") pod \"cinder-api-0\" (UID: \"5d7a0713-78e8-4813-9516-f848d69a5f21\") " pod="openstack/cinder-api-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.271807 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d7a0713-78e8-4813-9516-f848d69a5f21-logs\") pod \"cinder-api-0\" (UID: \"5d7a0713-78e8-4813-9516-f848d69a5f21\") " pod="openstack/cinder-api-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.271944 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3384fa61-3001-4106-ac87-67d3e3ca0513-dns-svc\") pod \"dnsmasq-dns-85494b87f-4xhlv\" (UID: \"3384fa61-3001-4106-ac87-67d3e3ca0513\") " pod="openstack/dnsmasq-dns-85494b87f-4xhlv" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.273142 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3384fa61-3001-4106-ac87-67d3e3ca0513-ovsdbserver-sb\") pod \"dnsmasq-dns-85494b87f-4xhlv\" (UID: \"3384fa61-3001-4106-ac87-67d3e3ca0513\") " pod="openstack/dnsmasq-dns-85494b87f-4xhlv" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.273229 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3384fa61-3001-4106-ac87-67d3e3ca0513-ovsdbserver-nb\") pod \"dnsmasq-dns-85494b87f-4xhlv\" (UID: \"3384fa61-3001-4106-ac87-67d3e3ca0513\") " pod="openstack/dnsmasq-dns-85494b87f-4xhlv" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.273168 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3384fa61-3001-4106-ac87-67d3e3ca0513-config\") pod \"dnsmasq-dns-85494b87f-4xhlv\" (UID: \"3384fa61-3001-4106-ac87-67d3e3ca0513\") " pod="openstack/dnsmasq-dns-85494b87f-4xhlv" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.273144 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3384fa61-3001-4106-ac87-67d3e3ca0513-dns-svc\") pod \"dnsmasq-dns-85494b87f-4xhlv\" (UID: \"3384fa61-3001-4106-ac87-67d3e3ca0513\") " pod="openstack/dnsmasq-dns-85494b87f-4xhlv" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.289928 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x778h\" (UniqueName: \"kubernetes.io/projected/3384fa61-3001-4106-ac87-67d3e3ca0513-kube-api-access-x778h\") pod \"dnsmasq-dns-85494b87f-4xhlv\" (UID: \"3384fa61-3001-4106-ac87-67d3e3ca0513\") " pod="openstack/dnsmasq-dns-85494b87f-4xhlv" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.373986 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d7a0713-78e8-4813-9516-f848d69a5f21-config-data-custom\") pod \"cinder-api-0\" (UID: \"5d7a0713-78e8-4813-9516-f848d69a5f21\") " pod="openstack/cinder-api-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.374138 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7a0713-78e8-4813-9516-f848d69a5f21-config-data\") pod 
\"cinder-api-0\" (UID: \"5d7a0713-78e8-4813-9516-f848d69a5f21\") " pod="openstack/cinder-api-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.374167 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d7a0713-78e8-4813-9516-f848d69a5f21-logs\") pod \"cinder-api-0\" (UID: \"5d7a0713-78e8-4813-9516-f848d69a5f21\") " pod="openstack/cinder-api-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.374800 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d7a0713-78e8-4813-9516-f848d69a5f21-scripts\") pod \"cinder-api-0\" (UID: \"5d7a0713-78e8-4813-9516-f848d69a5f21\") " pod="openstack/cinder-api-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.374847 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d7a0713-78e8-4813-9516-f848d69a5f21-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5d7a0713-78e8-4813-9516-f848d69a5f21\") " pod="openstack/cinder-api-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.374866 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d7a0713-78e8-4813-9516-f848d69a5f21-logs\") pod \"cinder-api-0\" (UID: \"5d7a0713-78e8-4813-9516-f848d69a5f21\") " pod="openstack/cinder-api-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.375242 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7a0713-78e8-4813-9516-f848d69a5f21-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5d7a0713-78e8-4813-9516-f848d69a5f21\") " pod="openstack/cinder-api-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.375277 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n4pp\" (UniqueName: \"kubernetes.io/projected/5d7a0713-78e8-4813-9516-f848d69a5f21-kube-api-access-7n4pp\") pod \"cinder-api-0\" (UID: \"5d7a0713-78e8-4813-9516-f848d69a5f21\") " pod="openstack/cinder-api-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.376005 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d7a0713-78e8-4813-9516-f848d69a5f21-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5d7a0713-78e8-4813-9516-f848d69a5f21\") " pod="openstack/cinder-api-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.378487 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d7a0713-78e8-4813-9516-f848d69a5f21-config-data-custom\") pod \"cinder-api-0\" (UID: \"5d7a0713-78e8-4813-9516-f848d69a5f21\") " pod="openstack/cinder-api-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.378897 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7a0713-78e8-4813-9516-f848d69a5f21-config-data\") pod \"cinder-api-0\" (UID: \"5d7a0713-78e8-4813-9516-f848d69a5f21\") " pod="openstack/cinder-api-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.379362 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7a0713-78e8-4813-9516-f848d69a5f21-combined-ca-bundle\") pod \"cinder-api-0\" (UID: 
\"5d7a0713-78e8-4813-9516-f848d69a5f21\") " pod="openstack/cinder-api-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.379412 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d7a0713-78e8-4813-9516-f848d69a5f21-scripts\") pod \"cinder-api-0\" (UID: \"5d7a0713-78e8-4813-9516-f848d69a5f21\") " pod="openstack/cinder-api-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.392293 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n4pp\" (UniqueName: \"kubernetes.io/projected/5d7a0713-78e8-4813-9516-f848d69a5f21-kube-api-access-7n4pp\") pod \"cinder-api-0\" (UID: \"5d7a0713-78e8-4813-9516-f848d69a5f21\") " pod="openstack/cinder-api-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.402353 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85494b87f-4xhlv" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.484347 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.776017 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-766567796c-wxh7x" event={"ID":"f6adb0ef-cb20-4a74-b79b-feb46936d4cd","Type":"ContainerStarted","Data":"99f85c7d4d0cd04d29d807a7dbc07629758c88528dd3ded8ab259b382ce0143a"} Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.790438 5016 generic.go:334] "Generic (PLEG): container finished" podID="043658b6-901d-4f17-9242-d7ab1c0cdfaf" containerID="3ab5a4a7d2c85c86c1b156b735399b38075d58e4dc82810adb607dfa2c22a587" exitCode=0 Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.790506 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dc8d75dbf-kl8c4" event={"ID":"043658b6-901d-4f17-9242-d7ab1c0cdfaf","Type":"ContainerDied","Data":"3ab5a4a7d2c85c86c1b156b735399b38075d58e4dc82810adb607dfa2c22a587"} Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.790530 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dc8d75dbf-kl8c4" event={"ID":"043658b6-901d-4f17-9242-d7ab1c0cdfaf","Type":"ContainerStarted","Data":"56b77c61180cc5da61d3d69acb8581145580150931b8ac162071667cc71f464a"} Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.803017 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.821994 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56fd798d48-f9v6h" event={"ID":"a3f50dd7-7f92-4bc4-a99b-e96dd2929067","Type":"ContainerStarted","Data":"abe64c199517db214a38b759d92b202c240e3982f0fc769f207b3a0846e81c07"} Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.822127 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56fd798d48-f9v6h" event={"ID":"a3f50dd7-7f92-4bc4-a99b-e96dd2929067","Type":"ContainerStarted","Data":"5a7e37384b180d0c8ccf91fc4d10ed42e31a12ccf890c09a9641dbb4efdfe3e4"} Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.822206 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-56fd798d48-f9v6h" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.822283 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56fd798d48-f9v6h" 
event={"ID":"a3f50dd7-7f92-4bc4-a99b-e96dd2929067","Type":"ContainerStarted","Data":"37937a9384cbb726e29043422ba67f0c3cb416e7a9d60ce0d062e1db4f68849c"} Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.822416 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-56fd798d48-f9v6h" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.833100 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7d69bb669b-tzvqn" event={"ID":"91b7e91b-be39-4920-9227-a93b91338f97","Type":"ContainerStarted","Data":"270e9810fa40c456836bce53a7447c54541c8645938578ce0dcc9523e565faf0"} Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.855329 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-56fd798d48-f9v6h" podStartSLOduration=1.855312378 podStartE2EDuration="1.855312378s" podCreationTimestamp="2025-10-11 07:56:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:56:55.85017557 +0000 UTC m=+1003.750631516" watchObservedRunningTime="2025-10-11 07:56:55.855312378 +0000 UTC m=+1003.755768324" Oct 11 07:56:55 crc kubenswrapper[5016]: I1011 07:56:55.992365 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85494b87f-4xhlv"] Oct 11 07:56:56 crc kubenswrapper[5016]: I1011 07:56:56.142045 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 11 07:56:56 crc kubenswrapper[5016]: W1011 07:56:56.424773 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3384fa61_3001_4106_ac87_67d3e3ca0513.slice/crio-845bc38304680e1559e83f75095c3ea81fa01aa2df8dd4cb9b9ccdfebbee27d2 WatchSource:0}: Error finding container 845bc38304680e1559e83f75095c3ea81fa01aa2df8dd4cb9b9ccdfebbee27d2: Status 404 returned error can't find the container with id 845bc38304680e1559e83f75095c3ea81fa01aa2df8dd4cb9b9ccdfebbee27d2 Oct 11 07:56:56 crc kubenswrapper[5016]: W1011 07:56:56.426181 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7381a01_fcca_4cf1_8bd2_394c4ff9eecb.slice/crio-f19dd6cb5234875ced18d9c52672f954f53d57cc5c991f13fe1856f2651efd9b WatchSource:0}: Error finding container f19dd6cb5234875ced18d9c52672f954f53d57cc5c991f13fe1856f2651efd9b: Status 404 returned error can't find the container with id f19dd6cb5234875ced18d9c52672f954f53d57cc5c991f13fe1856f2651efd9b Oct 11 07:56:56 crc kubenswrapper[5016]: I1011 07:56:56.493596 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dc8d75dbf-kl8c4" Oct 11 07:56:56 crc kubenswrapper[5016]: I1011 07:56:56.615969 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/043658b6-901d-4f17-9242-d7ab1c0cdfaf-config\") pod \"043658b6-901d-4f17-9242-d7ab1c0cdfaf\" (UID: \"043658b6-901d-4f17-9242-d7ab1c0cdfaf\") " Oct 11 07:56:56 crc kubenswrapper[5016]: I1011 07:56:56.617361 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dkw2\" (UniqueName: \"kubernetes.io/projected/043658b6-901d-4f17-9242-d7ab1c0cdfaf-kube-api-access-6dkw2\") pod \"043658b6-901d-4f17-9242-d7ab1c0cdfaf\" (UID: \"043658b6-901d-4f17-9242-d7ab1c0cdfaf\") " Oct 11 07:56:56 crc kubenswrapper[5016]: I1011 07:56:56.617461 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/043658b6-901d-4f17-9242-d7ab1c0cdfaf-ovsdbserver-nb\") pod \"043658b6-901d-4f17-9242-d7ab1c0cdfaf\" (UID: \"043658b6-901d-4f17-9242-d7ab1c0cdfaf\") " Oct 11 07:56:56 crc kubenswrapper[5016]: I1011 07:56:56.617484 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/043658b6-901d-4f17-9242-d7ab1c0cdfaf-ovsdbserver-sb\") pod \"043658b6-901d-4f17-9242-d7ab1c0cdfaf\" (UID: \"043658b6-901d-4f17-9242-d7ab1c0cdfaf\") " Oct 11 07:56:56 crc kubenswrapper[5016]: I1011 07:56:56.617699 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/043658b6-901d-4f17-9242-d7ab1c0cdfaf-dns-svc\") pod \"043658b6-901d-4f17-9242-d7ab1c0cdfaf\" (UID: \"043658b6-901d-4f17-9242-d7ab1c0cdfaf\") " Oct 11 07:56:56 crc kubenswrapper[5016]: I1011 07:56:56.623820 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/043658b6-901d-4f17-9242-d7ab1c0cdfaf-kube-api-access-6dkw2" (OuterVolumeSpecName: "kube-api-access-6dkw2") pod "043658b6-901d-4f17-9242-d7ab1c0cdfaf" (UID: "043658b6-901d-4f17-9242-d7ab1c0cdfaf"). InnerVolumeSpecName "kube-api-access-6dkw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:56:56 crc kubenswrapper[5016]: I1011 07:56:56.651258 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/043658b6-901d-4f17-9242-d7ab1c0cdfaf-config" (OuterVolumeSpecName: "config") pod "043658b6-901d-4f17-9242-d7ab1c0cdfaf" (UID: "043658b6-901d-4f17-9242-d7ab1c0cdfaf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:56:56 crc kubenswrapper[5016]: I1011 07:56:56.657505 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/043658b6-901d-4f17-9242-d7ab1c0cdfaf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "043658b6-901d-4f17-9242-d7ab1c0cdfaf" (UID: "043658b6-901d-4f17-9242-d7ab1c0cdfaf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:56:56 crc kubenswrapper[5016]: I1011 07:56:56.679787 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/043658b6-901d-4f17-9242-d7ab1c0cdfaf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "043658b6-901d-4f17-9242-d7ab1c0cdfaf" (UID: "043658b6-901d-4f17-9242-d7ab1c0cdfaf"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:56:56 crc kubenswrapper[5016]: I1011 07:56:56.681254 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/043658b6-901d-4f17-9242-d7ab1c0cdfaf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "043658b6-901d-4f17-9242-d7ab1c0cdfaf" (UID: "043658b6-901d-4f17-9242-d7ab1c0cdfaf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:56:56 crc kubenswrapper[5016]: I1011 07:56:56.719884 5016 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/043658b6-901d-4f17-9242-d7ab1c0cdfaf-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:56 crc kubenswrapper[5016]: I1011 07:56:56.719928 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/043658b6-901d-4f17-9242-d7ab1c0cdfaf-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:56 crc kubenswrapper[5016]: I1011 07:56:56.719941 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dkw2\" (UniqueName: \"kubernetes.io/projected/043658b6-901d-4f17-9242-d7ab1c0cdfaf-kube-api-access-6dkw2\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:56 crc kubenswrapper[5016]: I1011 07:56:56.719955 5016 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/043658b6-901d-4f17-9242-d7ab1c0cdfaf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:56 crc kubenswrapper[5016]: I1011 07:56:56.719967 5016 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/043658b6-901d-4f17-9242-d7ab1c0cdfaf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:56 crc kubenswrapper[5016]: I1011 07:56:56.846080 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb","Type":"ContainerStarted","Data":"f19dd6cb5234875ced18d9c52672f954f53d57cc5c991f13fe1856f2651efd9b"} Oct 11 07:56:56 crc kubenswrapper[5016]: I1011 07:56:56.846898 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85494b87f-4xhlv" event={"ID":"3384fa61-3001-4106-ac87-67d3e3ca0513","Type":"ContainerStarted","Data":"845bc38304680e1559e83f75095c3ea81fa01aa2df8dd4cb9b9ccdfebbee27d2"} Oct 11 07:56:56 crc kubenswrapper[5016]: I1011 07:56:56.847920 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dc8d75dbf-kl8c4" event={"ID":"043658b6-901d-4f17-9242-d7ab1c0cdfaf","Type":"ContainerDied","Data":"56b77c61180cc5da61d3d69acb8581145580150931b8ac162071667cc71f464a"} Oct 11 07:56:56 crc kubenswrapper[5016]: I1011 07:56:56.847949 5016 scope.go:117] "RemoveContainer" containerID="3ab5a4a7d2c85c86c1b156b735399b38075d58e4dc82810adb607dfa2c22a587" Oct 11 07:56:56 crc kubenswrapper[5016]: I1011 07:56:56.848054 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dc8d75dbf-kl8c4" Oct 11 07:56:56 crc kubenswrapper[5016]: I1011 07:56:56.861607 5016 generic.go:334] "Generic (PLEG): container finished" podID="353d22c0-bfdb-4599-a97c-9000eda08e3d" containerID="0a94574dbcd6be5b2e4251e34cfd6305e007b08fe4785f06b109089774d9900a" exitCode=0 Oct 11 07:56:56 crc kubenswrapper[5016]: I1011 07:56:56.861703 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"353d22c0-bfdb-4599-a97c-9000eda08e3d","Type":"ContainerDied","Data":"0a94574dbcd6be5b2e4251e34cfd6305e007b08fe4785f06b109089774d9900a"} Oct 11 07:56:56 crc kubenswrapper[5016]: I1011 07:56:56.864248 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5d7a0713-78e8-4813-9516-f848d69a5f21","Type":"ContainerStarted","Data":"76372f114b42188a8024dcf1f21a90673abf81627629206180c02b9db68514fa"} Oct 11 07:56:56 crc kubenswrapper[5016]: I1011 07:56:56.971108 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.050004 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6dc8d75dbf-kl8c4"] Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.069928 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6dc8d75dbf-kl8c4"] Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.128753 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/353d22c0-bfdb-4599-a97c-9000eda08e3d-log-httpd\") pod \"353d22c0-bfdb-4599-a97c-9000eda08e3d\" (UID: \"353d22c0-bfdb-4599-a97c-9000eda08e3d\") " Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.128835 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbjv7\" (UniqueName: \"kubernetes.io/projected/353d22c0-bfdb-4599-a97c-9000eda08e3d-kube-api-access-hbjv7\") pod \"353d22c0-bfdb-4599-a97c-9000eda08e3d\" (UID: \"353d22c0-bfdb-4599-a97c-9000eda08e3d\") " Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.128952 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/353d22c0-bfdb-4599-a97c-9000eda08e3d-scripts\") pod \"353d22c0-bfdb-4599-a97c-9000eda08e3d\" (UID: \"353d22c0-bfdb-4599-a97c-9000eda08e3d\") " Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.128991 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353d22c0-bfdb-4599-a97c-9000eda08e3d-combined-ca-bundle\") pod \"353d22c0-bfdb-4599-a97c-9000eda08e3d\" (UID: \"353d22c0-bfdb-4599-a97c-9000eda08e3d\") " Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.129051 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/353d22c0-bfdb-4599-a97c-9000eda08e3d-sg-core-conf-yaml\") pod \"353d22c0-bfdb-4599-a97c-9000eda08e3d\" (UID: \"353d22c0-bfdb-4599-a97c-9000eda08e3d\") " Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.129080 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/353d22c0-bfdb-4599-a97c-9000eda08e3d-config-data\") pod \"353d22c0-bfdb-4599-a97c-9000eda08e3d\" (UID: \"353d22c0-bfdb-4599-a97c-9000eda08e3d\") " Oct 11 07:56:57 crc 
kubenswrapper[5016]: I1011 07:56:57.129108 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/353d22c0-bfdb-4599-a97c-9000eda08e3d-run-httpd\") pod \"353d22c0-bfdb-4599-a97c-9000eda08e3d\" (UID: \"353d22c0-bfdb-4599-a97c-9000eda08e3d\") " Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.129756 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/353d22c0-bfdb-4599-a97c-9000eda08e3d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "353d22c0-bfdb-4599-a97c-9000eda08e3d" (UID: "353d22c0-bfdb-4599-a97c-9000eda08e3d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.130416 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/353d22c0-bfdb-4599-a97c-9000eda08e3d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "353d22c0-bfdb-4599-a97c-9000eda08e3d" (UID: "353d22c0-bfdb-4599-a97c-9000eda08e3d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.157222 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/353d22c0-bfdb-4599-a97c-9000eda08e3d-kube-api-access-hbjv7" (OuterVolumeSpecName: "kube-api-access-hbjv7") pod "353d22c0-bfdb-4599-a97c-9000eda08e3d" (UID: "353d22c0-bfdb-4599-a97c-9000eda08e3d"). InnerVolumeSpecName "kube-api-access-hbjv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.157992 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/353d22c0-bfdb-4599-a97c-9000eda08e3d-scripts" (OuterVolumeSpecName: "scripts") pod "353d22c0-bfdb-4599-a97c-9000eda08e3d" (UID: "353d22c0-bfdb-4599-a97c-9000eda08e3d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.195267 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="043658b6-901d-4f17-9242-d7ab1c0cdfaf" path="/var/lib/kubelet/pods/043658b6-901d-4f17-9242-d7ab1c0cdfaf/volumes" Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.208500 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/353d22c0-bfdb-4599-a97c-9000eda08e3d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "353d22c0-bfdb-4599-a97c-9000eda08e3d" (UID: "353d22c0-bfdb-4599-a97c-9000eda08e3d"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.231493 5016 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/353d22c0-bfdb-4599-a97c-9000eda08e3d-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.231521 5016 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/353d22c0-bfdb-4599-a97c-9000eda08e3d-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.231533 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbjv7\" (UniqueName: \"kubernetes.io/projected/353d22c0-bfdb-4599-a97c-9000eda08e3d-kube-api-access-hbjv7\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.231546 5016 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/353d22c0-bfdb-4599-a97c-9000eda08e3d-scripts\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.231556 5016 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/353d22c0-bfdb-4599-a97c-9000eda08e3d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.290586 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/353d22c0-bfdb-4599-a97c-9000eda08e3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "353d22c0-bfdb-4599-a97c-9000eda08e3d" (UID: "353d22c0-bfdb-4599-a97c-9000eda08e3d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.321184 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/353d22c0-bfdb-4599-a97c-9000eda08e3d-config-data" (OuterVolumeSpecName: "config-data") pod "353d22c0-bfdb-4599-a97c-9000eda08e3d" (UID: "353d22c0-bfdb-4599-a97c-9000eda08e3d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.332851 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/353d22c0-bfdb-4599-a97c-9000eda08e3d-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.332889 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353d22c0-bfdb-4599-a97c-9000eda08e3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.882226 5016 generic.go:334] "Generic (PLEG): container finished" podID="3384fa61-3001-4106-ac87-67d3e3ca0513" containerID="b5dbb870288f2d0063f31353d7ee2c9bbbd27f6ce6c5464dae4a36d7151fcd0f" exitCode=0 Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.882345 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85494b87f-4xhlv" event={"ID":"3384fa61-3001-4106-ac87-67d3e3ca0513","Type":"ContainerDied","Data":"b5dbb870288f2d0063f31353d7ee2c9bbbd27f6ce6c5464dae4a36d7151fcd0f"} Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.893046 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"353d22c0-bfdb-4599-a97c-9000eda08e3d","Type":"ContainerDied","Data":"2be896df43fc723fb43eea0f267bd5b5734d2712aed2bedcd96431d28354fd0a"} Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.893095 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.893474 5016 scope.go:117] "RemoveContainer" containerID="8b450fb5e54f4d2f541b3afc9309012e0910cc37c7c052b0f658aafdc07be10a" Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.896529 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5d7a0713-78e8-4813-9516-f848d69a5f21","Type":"ContainerStarted","Data":"41faf815f6ee07a18b58ff5c1feb54ad43745bcd698d6aab4a846800afbe3af9"} Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.902996 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7d69bb669b-tzvqn" event={"ID":"91b7e91b-be39-4920-9227-a93b91338f97","Type":"ContainerStarted","Data":"0d83415d191ead92a44853d306737a190ba2931a001faacb17bd7973bfa1ea70"} Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.903047 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7d69bb669b-tzvqn" event={"ID":"91b7e91b-be39-4920-9227-a93b91338f97","Type":"ContainerStarted","Data":"c8f2ba302ff8a4bc1ddf44d6c9f3b6cfa30592f3d3091bdf77dedba501835f98"} Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.917571 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-766567796c-wxh7x" event={"ID":"f6adb0ef-cb20-4a74-b79b-feb46936d4cd","Type":"ContainerStarted","Data":"7986b941a313b56b5d1beb2c02a3a80ec741610765a633c67b49d5fd3e76eb31"} Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.917640 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-766567796c-wxh7x" event={"ID":"f6adb0ef-cb20-4a74-b79b-feb46936d4cd","Type":"ContainerStarted","Data":"65c1252e22994f781abc17950adea74e9ff7d7b331ed757df4f046b4c87c0ee2"} Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.935918 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-keystone-listener-7d69bb669b-tzvqn" podStartSLOduration=2.748318281 podStartE2EDuration="4.935897665s" podCreationTimestamp="2025-10-11 07:56:53 +0000 UTC" firstStartedPulling="2025-10-11 07:56:54.765038135 +0000 UTC m=+1002.665494081" lastFinishedPulling="2025-10-11 07:56:56.952617519 +0000 UTC m=+1004.853073465" observedRunningTime="2025-10-11 07:56:57.931910548 +0000 UTC m=+1005.832366484" watchObservedRunningTime="2025-10-11 07:56:57.935897665 +0000 UTC m=+1005.836353631" Oct 11 07:56:57 crc kubenswrapper[5016]: I1011 07:56:57.980843 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-766567796c-wxh7x" podStartSLOduration=2.926588475 podStartE2EDuration="4.980813003s" podCreationTimestamp="2025-10-11 07:56:53 +0000 UTC" firstStartedPulling="2025-10-11 07:56:54.898244097 +0000 UTC m=+1002.798700043" lastFinishedPulling="2025-10-11 07:56:56.952468615 +0000 UTC m=+1004.852924571" observedRunningTime="2025-10-11 07:56:57.962015197 +0000 UTC m=+1005.862471143" watchObservedRunningTime="2025-10-11 07:56:57.980813003 +0000 UTC m=+1005.881268989" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.020597 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.029144 5016 scope.go:117] "RemoveContainer" containerID="c68d485ed0a380fbd553c95237d0af36aaccebbef647ed078b59696595f05b16" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.030197 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.060860 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 11 07:56:58 crc kubenswrapper[5016]: E1011 07:56:58.061228 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="353d22c0-bfdb-4599-a97c-9000eda08e3d" containerName="ceilometer-central-agent" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.061249 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="353d22c0-bfdb-4599-a97c-9000eda08e3d" containerName="ceilometer-central-agent" Oct 11 07:56:58 crc kubenswrapper[5016]: E1011 07:56:58.061268 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="043658b6-901d-4f17-9242-d7ab1c0cdfaf" containerName="init" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.061278 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="043658b6-901d-4f17-9242-d7ab1c0cdfaf" containerName="init" Oct 11 07:56:58 crc kubenswrapper[5016]: E1011 07:56:58.061292 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="353d22c0-bfdb-4599-a97c-9000eda08e3d" containerName="proxy-httpd" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.061301 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="353d22c0-bfdb-4599-a97c-9000eda08e3d" containerName="proxy-httpd" Oct 11 07:56:58 crc kubenswrapper[5016]: E1011 07:56:58.061320 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="353d22c0-bfdb-4599-a97c-9000eda08e3d" containerName="sg-core" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.061328 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="353d22c0-bfdb-4599-a97c-9000eda08e3d" containerName="sg-core" Oct 11 07:56:58 crc kubenswrapper[5016]: E1011 07:56:58.061340 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="353d22c0-bfdb-4599-a97c-9000eda08e3d" containerName="ceilometer-notification-agent" Oct 11 07:56:58 crc 
kubenswrapper[5016]: I1011 07:56:58.061346 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="353d22c0-bfdb-4599-a97c-9000eda08e3d" containerName="ceilometer-notification-agent" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.061544 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="353d22c0-bfdb-4599-a97c-9000eda08e3d" containerName="ceilometer-notification-agent" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.061557 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="043658b6-901d-4f17-9242-d7ab1c0cdfaf" containerName="init" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.061566 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="353d22c0-bfdb-4599-a97c-9000eda08e3d" containerName="proxy-httpd" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.061579 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="353d22c0-bfdb-4599-a97c-9000eda08e3d" containerName="ceilometer-central-agent" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.061589 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="353d22c0-bfdb-4599-a97c-9000eda08e3d" containerName="sg-core" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.063156 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.071105 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.071900 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.080382 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.117153 5016 scope.go:117] "RemoveContainer" containerID="0a94574dbcd6be5b2e4251e34cfd6305e007b08fe4785f06b109089774d9900a" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.151775 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0bc0b78e-f920-4af6-901f-ef0d92d9b046-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\") " pod="openstack/ceilometer-0" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.151869 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bc0b78e-f920-4af6-901f-ef0d92d9b046-config-data\") pod \"ceilometer-0\" (UID: \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\") " pod="openstack/ceilometer-0" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.151920 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bc0b78e-f920-4af6-901f-ef0d92d9b046-scripts\") pod \"ceilometer-0\" (UID: \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\") " pod="openstack/ceilometer-0" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.151943 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bc0b78e-f920-4af6-901f-ef0d92d9b046-run-httpd\") pod \"ceilometer-0\" (UID: \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\") " pod="openstack/ceilometer-0" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 
07:56:58.151969 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc0b78e-f920-4af6-901f-ef0d92d9b046-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\") " pod="openstack/ceilometer-0" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.151986 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwdxl\" (UniqueName: \"kubernetes.io/projected/0bc0b78e-f920-4af6-901f-ef0d92d9b046-kube-api-access-dwdxl\") pod \"ceilometer-0\" (UID: \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\") " pod="openstack/ceilometer-0" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.152050 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bc0b78e-f920-4af6-901f-ef0d92d9b046-log-httpd\") pod \"ceilometer-0\" (UID: \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\") " pod="openstack/ceilometer-0" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.158874 5016 scope.go:117] "RemoveContainer" containerID="60d3ffb0056dcca585f6f4dbcc8321cc96da604e2a4c1ea74fbf53d9ad7cc790" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.253400 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bc0b78e-f920-4af6-901f-ef0d92d9b046-config-data\") pod \"ceilometer-0\" (UID: \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\") " pod="openstack/ceilometer-0" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.253501 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bc0b78e-f920-4af6-901f-ef0d92d9b046-scripts\") pod \"ceilometer-0\" (UID: \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\") " pod="openstack/ceilometer-0" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.253522 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bc0b78e-f920-4af6-901f-ef0d92d9b046-run-httpd\") pod \"ceilometer-0\" (UID: \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\") " pod="openstack/ceilometer-0" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.253548 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc0b78e-f920-4af6-901f-ef0d92d9b046-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\") " pod="openstack/ceilometer-0" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.253565 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwdxl\" (UniqueName: \"kubernetes.io/projected/0bc0b78e-f920-4af6-901f-ef0d92d9b046-kube-api-access-dwdxl\") pod \"ceilometer-0\" (UID: \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\") " pod="openstack/ceilometer-0" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.253640 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bc0b78e-f920-4af6-901f-ef0d92d9b046-log-httpd\") pod \"ceilometer-0\" (UID: \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\") " pod="openstack/ceilometer-0" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.253692 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/0bc0b78e-f920-4af6-901f-ef0d92d9b046-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\") " pod="openstack/ceilometer-0" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.254364 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bc0b78e-f920-4af6-901f-ef0d92d9b046-run-httpd\") pod \"ceilometer-0\" (UID: \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\") " pod="openstack/ceilometer-0" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.254875 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bc0b78e-f920-4af6-901f-ef0d92d9b046-log-httpd\") pod \"ceilometer-0\" (UID: \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\") " pod="openstack/ceilometer-0" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.258590 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bc0b78e-f920-4af6-901f-ef0d92d9b046-config-data\") pod \"ceilometer-0\" (UID: \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\") " pod="openstack/ceilometer-0" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.259299 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0bc0b78e-f920-4af6-901f-ef0d92d9b046-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\") " pod="openstack/ceilometer-0" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.259474 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bc0b78e-f920-4af6-901f-ef0d92d9b046-scripts\") pod \"ceilometer-0\" (UID: \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\") " pod="openstack/ceilometer-0" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.261698 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc0b78e-f920-4af6-901f-ef0d92d9b046-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\") " pod="openstack/ceilometer-0" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.274018 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwdxl\" (UniqueName: \"kubernetes.io/projected/0bc0b78e-f920-4af6-901f-ef0d92d9b046-kube-api-access-dwdxl\") pod \"ceilometer-0\" (UID: \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\") " pod="openstack/ceilometer-0" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.427696 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.789975 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.897241 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 11 07:56:58 crc kubenswrapper[5016]: W1011 07:56:58.909145 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bc0b78e_f920_4af6_901f_ef0d92d9b046.slice/crio-59feade632ebbe148127c3a76ab34526111b9d639307c40180d4aaa3dcd126d4 WatchSource:0}: Error finding container 59feade632ebbe148127c3a76ab34526111b9d639307c40180d4aaa3dcd126d4: Status 404 returned error can't find the container with id 59feade632ebbe148127c3a76ab34526111b9d639307c40180d4aaa3dcd126d4 Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.935232 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5d7a0713-78e8-4813-9516-f848d69a5f21","Type":"ContainerStarted","Data":"7612e47fb6091a321d5968ce1de2b3eb0a6c2077efd8c428a051c27dad3c17d0"} Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.935338 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.942709 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb","Type":"ContainerStarted","Data":"5e7f7e044990ba881a962011a9b4fcd6a39e2a6649835cdd61ccfd8a687350b4"} Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.942763 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb","Type":"ContainerStarted","Data":"0c1fed1a1bf47e100c6c9e7097541828c712513bf8770a15d84236d5744bb610"} Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.951350 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85494b87f-4xhlv" event={"ID":"3384fa61-3001-4106-ac87-67d3e3ca0513","Type":"ContainerStarted","Data":"86fe37e0390a12f3480d1458d2577e102d98a2835f42f5673bd57934cd009a13"} Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.952305 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85494b87f-4xhlv" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.954675 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.954643684 podStartE2EDuration="3.954643684s" podCreationTimestamp="2025-10-11 07:56:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:56:58.952701332 +0000 UTC m=+1006.853157278" watchObservedRunningTime="2025-10-11 07:56:58.954643684 +0000 UTC m=+1006.855099630" Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.957521 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bc0b78e-f920-4af6-901f-ef0d92d9b046","Type":"ContainerStarted","Data":"59feade632ebbe148127c3a76ab34526111b9d639307c40180d4aaa3dcd126d4"} Oct 11 07:56:58 crc kubenswrapper[5016]: I1011 07:56:58.982563 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.138125704 podStartE2EDuration="4.982547164s" 
podCreationTimestamp="2025-10-11 07:56:54 +0000 UTC" firstStartedPulling="2025-10-11 07:56:56.435834231 +0000 UTC m=+1004.336290177" lastFinishedPulling="2025-10-11 07:56:57.280255701 +0000 UTC m=+1005.180711637" observedRunningTime="2025-10-11 07:56:58.975756461 +0000 UTC m=+1006.876212417" watchObservedRunningTime="2025-10-11 07:56:58.982547164 +0000 UTC m=+1006.883003110" Oct 11 07:56:59 crc kubenswrapper[5016]: I1011 07:56:59.005181 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85494b87f-4xhlv" podStartSLOduration=4.005157732 podStartE2EDuration="4.005157732s" podCreationTimestamp="2025-10-11 07:56:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:56:58.99356005 +0000 UTC m=+1006.894015996" watchObservedRunningTime="2025-10-11 07:56:59.005157732 +0000 UTC m=+1006.905613688" Oct 11 07:56:59 crc kubenswrapper[5016]: I1011 07:56:59.144404 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="353d22c0-bfdb-4599-a97c-9000eda08e3d" path="/var/lib/kubelet/pods/353d22c0-bfdb-4599-a97c-9000eda08e3d/volumes" Oct 11 07:56:59 crc kubenswrapper[5016]: I1011 07:56:59.968410 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bc0b78e-f920-4af6-901f-ef0d92d9b046","Type":"ContainerStarted","Data":"34c46232123836b5dbe77b0f02f0c79445c6fc10cca762fd120cd8718bd18cd0"} Oct 11 07:56:59 crc kubenswrapper[5016]: I1011 07:56:59.968576 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="5d7a0713-78e8-4813-9516-f848d69a5f21" containerName="cinder-api-log" containerID="cri-o://41faf815f6ee07a18b58ff5c1feb54ad43745bcd698d6aab4a846800afbe3af9" gracePeriod=30 Oct 11 07:56:59 crc kubenswrapper[5016]: I1011 07:56:59.969408 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="5d7a0713-78e8-4813-9516-f848d69a5f21" containerName="cinder-api" containerID="cri-o://7612e47fb6091a321d5968ce1de2b3eb0a6c2077efd8c428a051c27dad3c17d0" gracePeriod=30 Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.262580 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.333641 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-d89d796fd-cgg68"] Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.335403 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-d89d796fd-cgg68" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.339565 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.340219 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.354213 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d89d796fd-cgg68"] Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.396160 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06a479bd-8198-4dec-a682-6864aaaca48b-internal-tls-certs\") pod \"barbican-api-d89d796fd-cgg68\" (UID: \"06a479bd-8198-4dec-a682-6864aaaca48b\") " pod="openstack/barbican-api-d89d796fd-cgg68" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.396852 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06a479bd-8198-4dec-a682-6864aaaca48b-config-data\") pod \"barbican-api-d89d796fd-cgg68\" (UID: \"06a479bd-8198-4dec-a682-6864aaaca48b\") " pod="openstack/barbican-api-d89d796fd-cgg68" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.397034 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06a479bd-8198-4dec-a682-6864aaaca48b-public-tls-certs\") pod \"barbican-api-d89d796fd-cgg68\" (UID: \"06a479bd-8198-4dec-a682-6864aaaca48b\") " pod="openstack/barbican-api-d89d796fd-cgg68" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.397161 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a479bd-8198-4dec-a682-6864aaaca48b-combined-ca-bundle\") pod \"barbican-api-d89d796fd-cgg68\" (UID: \"06a479bd-8198-4dec-a682-6864aaaca48b\") " pod="openstack/barbican-api-d89d796fd-cgg68" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.397275 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06a479bd-8198-4dec-a682-6864aaaca48b-config-data-custom\") pod \"barbican-api-d89d796fd-cgg68\" (UID: \"06a479bd-8198-4dec-a682-6864aaaca48b\") " pod="openstack/barbican-api-d89d796fd-cgg68" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.397390 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06a479bd-8198-4dec-a682-6864aaaca48b-logs\") pod \"barbican-api-d89d796fd-cgg68\" (UID: \"06a479bd-8198-4dec-a682-6864aaaca48b\") " pod="openstack/barbican-api-d89d796fd-cgg68" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.397492 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb8nz\" (UniqueName: \"kubernetes.io/projected/06a479bd-8198-4dec-a682-6864aaaca48b-kube-api-access-hb8nz\") pod \"barbican-api-d89d796fd-cgg68\" (UID: \"06a479bd-8198-4dec-a682-6864aaaca48b\") " pod="openstack/barbican-api-d89d796fd-cgg68" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.499026 5016 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06a479bd-8198-4dec-a682-6864aaaca48b-public-tls-certs\") pod \"barbican-api-d89d796fd-cgg68\" (UID: \"06a479bd-8198-4dec-a682-6864aaaca48b\") " pod="openstack/barbican-api-d89d796fd-cgg68" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.499073 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a479bd-8198-4dec-a682-6864aaaca48b-combined-ca-bundle\") pod \"barbican-api-d89d796fd-cgg68\" (UID: \"06a479bd-8198-4dec-a682-6864aaaca48b\") " pod="openstack/barbican-api-d89d796fd-cgg68" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.499136 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06a479bd-8198-4dec-a682-6864aaaca48b-config-data-custom\") pod \"barbican-api-d89d796fd-cgg68\" (UID: \"06a479bd-8198-4dec-a682-6864aaaca48b\") " pod="openstack/barbican-api-d89d796fd-cgg68" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.499159 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06a479bd-8198-4dec-a682-6864aaaca48b-logs\") pod \"barbican-api-d89d796fd-cgg68\" (UID: \"06a479bd-8198-4dec-a682-6864aaaca48b\") " pod="openstack/barbican-api-d89d796fd-cgg68" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.499206 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb8nz\" (UniqueName: \"kubernetes.io/projected/06a479bd-8198-4dec-a682-6864aaaca48b-kube-api-access-hb8nz\") pod \"barbican-api-d89d796fd-cgg68\" (UID: \"06a479bd-8198-4dec-a682-6864aaaca48b\") " pod="openstack/barbican-api-d89d796fd-cgg68" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.499242 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06a479bd-8198-4dec-a682-6864aaaca48b-internal-tls-certs\") pod \"barbican-api-d89d796fd-cgg68\" (UID: \"06a479bd-8198-4dec-a682-6864aaaca48b\") " pod="openstack/barbican-api-d89d796fd-cgg68" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.499280 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06a479bd-8198-4dec-a682-6864aaaca48b-config-data\") pod \"barbican-api-d89d796fd-cgg68\" (UID: \"06a479bd-8198-4dec-a682-6864aaaca48b\") " pod="openstack/barbican-api-d89d796fd-cgg68" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.500595 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06a479bd-8198-4dec-a682-6864aaaca48b-logs\") pod \"barbican-api-d89d796fd-cgg68\" (UID: \"06a479bd-8198-4dec-a682-6864aaaca48b\") " pod="openstack/barbican-api-d89d796fd-cgg68" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.506278 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/06a479bd-8198-4dec-a682-6864aaaca48b-config-data-custom\") pod \"barbican-api-d89d796fd-cgg68\" (UID: \"06a479bd-8198-4dec-a682-6864aaaca48b\") " pod="openstack/barbican-api-d89d796fd-cgg68" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.506926 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/06a479bd-8198-4dec-a682-6864aaaca48b-internal-tls-certs\") pod \"barbican-api-d89d796fd-cgg68\" (UID: \"06a479bd-8198-4dec-a682-6864aaaca48b\") " pod="openstack/barbican-api-d89d796fd-cgg68" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.507715 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06a479bd-8198-4dec-a682-6864aaaca48b-config-data\") pod \"barbican-api-d89d796fd-cgg68\" (UID: \"06a479bd-8198-4dec-a682-6864aaaca48b\") " pod="openstack/barbican-api-d89d796fd-cgg68" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.508598 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06a479bd-8198-4dec-a682-6864aaaca48b-combined-ca-bundle\") pod \"barbican-api-d89d796fd-cgg68\" (UID: \"06a479bd-8198-4dec-a682-6864aaaca48b\") " pod="openstack/barbican-api-d89d796fd-cgg68" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.508795 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06a479bd-8198-4dec-a682-6864aaaca48b-public-tls-certs\") pod \"barbican-api-d89d796fd-cgg68\" (UID: \"06a479bd-8198-4dec-a682-6864aaaca48b\") " pod="openstack/barbican-api-d89d796fd-cgg68" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.520199 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb8nz\" (UniqueName: \"kubernetes.io/projected/06a479bd-8198-4dec-a682-6864aaaca48b-kube-api-access-hb8nz\") pod \"barbican-api-d89d796fd-cgg68\" (UID: \"06a479bd-8198-4dec-a682-6864aaaca48b\") " pod="openstack/barbican-api-d89d796fd-cgg68" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.603217 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.688071 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-d89d796fd-cgg68" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.702252 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7a0713-78e8-4813-9516-f848d69a5f21-config-data\") pod \"5d7a0713-78e8-4813-9516-f848d69a5f21\" (UID: \"5d7a0713-78e8-4813-9516-f848d69a5f21\") " Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.702385 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d7a0713-78e8-4813-9516-f848d69a5f21-scripts\") pod \"5d7a0713-78e8-4813-9516-f848d69a5f21\" (UID: \"5d7a0713-78e8-4813-9516-f848d69a5f21\") " Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.702426 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n4pp\" (UniqueName: \"kubernetes.io/projected/5d7a0713-78e8-4813-9516-f848d69a5f21-kube-api-access-7n4pp\") pod \"5d7a0713-78e8-4813-9516-f848d69a5f21\" (UID: \"5d7a0713-78e8-4813-9516-f848d69a5f21\") " Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.702467 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7a0713-78e8-4813-9516-f848d69a5f21-combined-ca-bundle\") pod \"5d7a0713-78e8-4813-9516-f848d69a5f21\" (UID: \"5d7a0713-78e8-4813-9516-f848d69a5f21\") " Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.702489 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d7a0713-78e8-4813-9516-f848d69a5f21-config-data-custom\") pod \"5d7a0713-78e8-4813-9516-f848d69a5f21\" (UID: \"5d7a0713-78e8-4813-9516-f848d69a5f21\") " Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.702542 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d7a0713-78e8-4813-9516-f848d69a5f21-logs\") pod \"5d7a0713-78e8-4813-9516-f848d69a5f21\" (UID: \"5d7a0713-78e8-4813-9516-f848d69a5f21\") " Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.702627 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d7a0713-78e8-4813-9516-f848d69a5f21-etc-machine-id\") pod \"5d7a0713-78e8-4813-9516-f848d69a5f21\" (UID: \"5d7a0713-78e8-4813-9516-f848d69a5f21\") " Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.703134 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d7a0713-78e8-4813-9516-f848d69a5f21-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5d7a0713-78e8-4813-9516-f848d69a5f21" (UID: "5d7a0713-78e8-4813-9516-f848d69a5f21"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.710606 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d7a0713-78e8-4813-9516-f848d69a5f21-logs" (OuterVolumeSpecName: "logs") pod "5d7a0713-78e8-4813-9516-f848d69a5f21" (UID: "5d7a0713-78e8-4813-9516-f848d69a5f21"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.713929 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d7a0713-78e8-4813-9516-f848d69a5f21-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5d7a0713-78e8-4813-9516-f848d69a5f21" (UID: "5d7a0713-78e8-4813-9516-f848d69a5f21"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.714959 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d7a0713-78e8-4813-9516-f848d69a5f21-scripts" (OuterVolumeSpecName: "scripts") pod "5d7a0713-78e8-4813-9516-f848d69a5f21" (UID: "5d7a0713-78e8-4813-9516-f848d69a5f21"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.729963 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d7a0713-78e8-4813-9516-f848d69a5f21-kube-api-access-7n4pp" (OuterVolumeSpecName: "kube-api-access-7n4pp") pod "5d7a0713-78e8-4813-9516-f848d69a5f21" (UID: "5d7a0713-78e8-4813-9516-f848d69a5f21"). InnerVolumeSpecName "kube-api-access-7n4pp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.764701 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d7a0713-78e8-4813-9516-f848d69a5f21-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d7a0713-78e8-4813-9516-f848d69a5f21" (UID: "5d7a0713-78e8-4813-9516-f848d69a5f21"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.782916 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d7a0713-78e8-4813-9516-f848d69a5f21-config-data" (OuterVolumeSpecName: "config-data") pod "5d7a0713-78e8-4813-9516-f848d69a5f21" (UID: "5d7a0713-78e8-4813-9516-f848d69a5f21"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.806308 5016 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5d7a0713-78e8-4813-9516-f848d69a5f21-logs\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.806367 5016 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d7a0713-78e8-4813-9516-f848d69a5f21-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.806392 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d7a0713-78e8-4813-9516-f848d69a5f21-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.806411 5016 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d7a0713-78e8-4813-9516-f848d69a5f21-scripts\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.806432 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n4pp\" (UniqueName: \"kubernetes.io/projected/5d7a0713-78e8-4813-9516-f848d69a5f21-kube-api-access-7n4pp\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.806453 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d7a0713-78e8-4813-9516-f848d69a5f21-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.806470 5016 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d7a0713-78e8-4813-9516-f848d69a5f21-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.993212 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bc0b78e-f920-4af6-901f-ef0d92d9b046","Type":"ContainerStarted","Data":"20f54aed6e9195da8cb6b9968a4eeb0add55f89bbbf7626986a8b8a31b17f2c7"} Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.997448 5016 generic.go:334] "Generic (PLEG): container finished" podID="5d7a0713-78e8-4813-9516-f848d69a5f21" containerID="7612e47fb6091a321d5968ce1de2b3eb0a6c2077efd8c428a051c27dad3c17d0" exitCode=0 Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.997497 5016 generic.go:334] "Generic (PLEG): container finished" podID="5d7a0713-78e8-4813-9516-f848d69a5f21" containerID="41faf815f6ee07a18b58ff5c1feb54ad43745bcd698d6aab4a846800afbe3af9" exitCode=143 Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.998690 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5d7a0713-78e8-4813-9516-f848d69a5f21","Type":"ContainerDied","Data":"7612e47fb6091a321d5968ce1de2b3eb0a6c2077efd8c428a051c27dad3c17d0"} Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.998730 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5d7a0713-78e8-4813-9516-f848d69a5f21","Type":"ContainerDied","Data":"41faf815f6ee07a18b58ff5c1feb54ad43745bcd698d6aab4a846800afbe3af9"} Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.998748 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"5d7a0713-78e8-4813-9516-f848d69a5f21","Type":"ContainerDied","Data":"76372f114b42188a8024dcf1f21a90673abf81627629206180c02b9db68514fa"} Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.998765 5016 scope.go:117] "RemoveContainer" containerID="7612e47fb6091a321d5968ce1de2b3eb0a6c2077efd8c428a051c27dad3c17d0" Oct 11 07:57:00 crc kubenswrapper[5016]: I1011 07:57:00.998809 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.028451 5016 scope.go:117] "RemoveContainer" containerID="41faf815f6ee07a18b58ff5c1feb54ad43745bcd698d6aab4a846800afbe3af9" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.129993 5016 scope.go:117] "RemoveContainer" containerID="7612e47fb6091a321d5968ce1de2b3eb0a6c2077efd8c428a051c27dad3c17d0" Oct 11 07:57:01 crc kubenswrapper[5016]: E1011 07:57:01.146258 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7612e47fb6091a321d5968ce1de2b3eb0a6c2077efd8c428a051c27dad3c17d0\": container with ID starting with 7612e47fb6091a321d5968ce1de2b3eb0a6c2077efd8c428a051c27dad3c17d0 not found: ID does not exist" containerID="7612e47fb6091a321d5968ce1de2b3eb0a6c2077efd8c428a051c27dad3c17d0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.146330 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7612e47fb6091a321d5968ce1de2b3eb0a6c2077efd8c428a051c27dad3c17d0"} err="failed to get container status \"7612e47fb6091a321d5968ce1de2b3eb0a6c2077efd8c428a051c27dad3c17d0\": rpc error: code = NotFound desc = could not find container \"7612e47fb6091a321d5968ce1de2b3eb0a6c2077efd8c428a051c27dad3c17d0\": container with ID starting with 7612e47fb6091a321d5968ce1de2b3eb0a6c2077efd8c428a051c27dad3c17d0 not found: ID does not exist" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.146369 5016 scope.go:117] "RemoveContainer" containerID="41faf815f6ee07a18b58ff5c1feb54ad43745bcd698d6aab4a846800afbe3af9" Oct 11 07:57:01 crc kubenswrapper[5016]: E1011 07:57:01.146700 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41faf815f6ee07a18b58ff5c1feb54ad43745bcd698d6aab4a846800afbe3af9\": container with ID starting with 41faf815f6ee07a18b58ff5c1feb54ad43745bcd698d6aab4a846800afbe3af9 not found: ID does not exist" containerID="41faf815f6ee07a18b58ff5c1feb54ad43745bcd698d6aab4a846800afbe3af9" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.146717 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41faf815f6ee07a18b58ff5c1feb54ad43745bcd698d6aab4a846800afbe3af9"} err="failed to get container status \"41faf815f6ee07a18b58ff5c1feb54ad43745bcd698d6aab4a846800afbe3af9\": rpc error: code = NotFound desc = could not find container \"41faf815f6ee07a18b58ff5c1feb54ad43745bcd698d6aab4a846800afbe3af9\": container with ID starting with 41faf815f6ee07a18b58ff5c1feb54ad43745bcd698d6aab4a846800afbe3af9 not found: ID does not exist" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.146730 5016 scope.go:117] "RemoveContainer" containerID="7612e47fb6091a321d5968ce1de2b3eb0a6c2077efd8c428a051c27dad3c17d0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.151825 5016 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7612e47fb6091a321d5968ce1de2b3eb0a6c2077efd8c428a051c27dad3c17d0"} err="failed to get container status \"7612e47fb6091a321d5968ce1de2b3eb0a6c2077efd8c428a051c27dad3c17d0\": rpc error: code = NotFound desc = could not find container \"7612e47fb6091a321d5968ce1de2b3eb0a6c2077efd8c428a051c27dad3c17d0\": container with ID starting with 7612e47fb6091a321d5968ce1de2b3eb0a6c2077efd8c428a051c27dad3c17d0 not found: ID does not exist" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.151854 5016 scope.go:117] "RemoveContainer" containerID="41faf815f6ee07a18b58ff5c1feb54ad43745bcd698d6aab4a846800afbe3af9" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.161902 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41faf815f6ee07a18b58ff5c1feb54ad43745bcd698d6aab4a846800afbe3af9"} err="failed to get container status \"41faf815f6ee07a18b58ff5c1feb54ad43745bcd698d6aab4a846800afbe3af9\": rpc error: code = NotFound desc = could not find container \"41faf815f6ee07a18b58ff5c1feb54ad43745bcd698d6aab4a846800afbe3af9\": container with ID starting with 41faf815f6ee07a18b58ff5c1feb54ad43745bcd698d6aab4a846800afbe3af9 not found: ID does not exist" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.190433 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.190498 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.198412 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 11 07:57:01 crc kubenswrapper[5016]: E1011 07:57:01.199080 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d7a0713-78e8-4813-9516-f848d69a5f21" containerName="cinder-api" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.199168 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d7a0713-78e8-4813-9516-f848d69a5f21" containerName="cinder-api" Oct 11 07:57:01 crc kubenswrapper[5016]: E1011 07:57:01.199226 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d7a0713-78e8-4813-9516-f848d69a5f21" containerName="cinder-api-log" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.199274 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d7a0713-78e8-4813-9516-f848d69a5f21" containerName="cinder-api-log" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.199515 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d7a0713-78e8-4813-9516-f848d69a5f21" containerName="cinder-api-log" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.199582 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d7a0713-78e8-4813-9516-f848d69a5f21" containerName="cinder-api" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.200789 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.211024 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.211237 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.216401 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.220612 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.232107 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82c29c3e-ac31-4662-9577-ebed98af9dbb-config-data\") pod \"cinder-api-0\" (UID: \"82c29c3e-ac31-4662-9577-ebed98af9dbb\") " pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.232187 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82c29c3e-ac31-4662-9577-ebed98af9dbb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"82c29c3e-ac31-4662-9577-ebed98af9dbb\") " pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.232222 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82c29c3e-ac31-4662-9577-ebed98af9dbb-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"82c29c3e-ac31-4662-9577-ebed98af9dbb\") " pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.232275 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82c29c3e-ac31-4662-9577-ebed98af9dbb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"82c29c3e-ac31-4662-9577-ebed98af9dbb\") " pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.232300 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82c29c3e-ac31-4662-9577-ebed98af9dbb-scripts\") pod \"cinder-api-0\" (UID: \"82c29c3e-ac31-4662-9577-ebed98af9dbb\") " pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.232332 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82c29c3e-ac31-4662-9577-ebed98af9dbb-public-tls-certs\") pod \"cinder-api-0\" (UID: \"82c29c3e-ac31-4662-9577-ebed98af9dbb\") " pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.232357 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r2nf\" (UniqueName: \"kubernetes.io/projected/82c29c3e-ac31-4662-9577-ebed98af9dbb-kube-api-access-5r2nf\") pod \"cinder-api-0\" (UID: \"82c29c3e-ac31-4662-9577-ebed98af9dbb\") " pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.232425 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/82c29c3e-ac31-4662-9577-ebed98af9dbb-config-data-custom\") pod \"cinder-api-0\" (UID: \"82c29c3e-ac31-4662-9577-ebed98af9dbb\") " pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.232447 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82c29c3e-ac31-4662-9577-ebed98af9dbb-logs\") pod \"cinder-api-0\" (UID: \"82c29c3e-ac31-4662-9577-ebed98af9dbb\") " pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.269543 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d89d796fd-cgg68"] Oct 11 07:57:01 crc kubenswrapper[5016]: W1011 07:57:01.276861 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06a479bd_8198_4dec_a682_6864aaaca48b.slice/crio-726d43a6603970bb3b733ac6b3f671944cb85a815853159f0441fcf465d6c63c WatchSource:0}: Error finding container 726d43a6603970bb3b733ac6b3f671944cb85a815853159f0441fcf465d6c63c: Status 404 returned error can't find the container with id 726d43a6603970bb3b733ac6b3f671944cb85a815853159f0441fcf465d6c63c Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.333792 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82c29c3e-ac31-4662-9577-ebed98af9dbb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"82c29c3e-ac31-4662-9577-ebed98af9dbb\") " pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.333842 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82c29c3e-ac31-4662-9577-ebed98af9dbb-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"82c29c3e-ac31-4662-9577-ebed98af9dbb\") " pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.333882 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82c29c3e-ac31-4662-9577-ebed98af9dbb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"82c29c3e-ac31-4662-9577-ebed98af9dbb\") " pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.333899 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82c29c3e-ac31-4662-9577-ebed98af9dbb-scripts\") pod \"cinder-api-0\" (UID: \"82c29c3e-ac31-4662-9577-ebed98af9dbb\") " pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.333924 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82c29c3e-ac31-4662-9577-ebed98af9dbb-public-tls-certs\") pod \"cinder-api-0\" (UID: \"82c29c3e-ac31-4662-9577-ebed98af9dbb\") " pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.333945 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r2nf\" (UniqueName: \"kubernetes.io/projected/82c29c3e-ac31-4662-9577-ebed98af9dbb-kube-api-access-5r2nf\") pod \"cinder-api-0\" (UID: \"82c29c3e-ac31-4662-9577-ebed98af9dbb\") " pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.334005 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82c29c3e-ac31-4662-9577-ebed98af9dbb-config-data-custom\") pod \"cinder-api-0\" (UID: \"82c29c3e-ac31-4662-9577-ebed98af9dbb\") " pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.334026 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82c29c3e-ac31-4662-9577-ebed98af9dbb-logs\") pod \"cinder-api-0\" (UID: \"82c29c3e-ac31-4662-9577-ebed98af9dbb\") " pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.334062 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82c29c3e-ac31-4662-9577-ebed98af9dbb-config-data\") pod \"cinder-api-0\" (UID: \"82c29c3e-ac31-4662-9577-ebed98af9dbb\") " pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.335135 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82c29c3e-ac31-4662-9577-ebed98af9dbb-logs\") pod \"cinder-api-0\" (UID: \"82c29c3e-ac31-4662-9577-ebed98af9dbb\") " pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.335975 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/82c29c3e-ac31-4662-9577-ebed98af9dbb-etc-machine-id\") pod \"cinder-api-0\" (UID: \"82c29c3e-ac31-4662-9577-ebed98af9dbb\") " pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.340852 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82c29c3e-ac31-4662-9577-ebed98af9dbb-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"82c29c3e-ac31-4662-9577-ebed98af9dbb\") " pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.343577 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82c29c3e-ac31-4662-9577-ebed98af9dbb-scripts\") pod \"cinder-api-0\" (UID: \"82c29c3e-ac31-4662-9577-ebed98af9dbb\") " pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.343595 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82c29c3e-ac31-4662-9577-ebed98af9dbb-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"82c29c3e-ac31-4662-9577-ebed98af9dbb\") " pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.346156 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/82c29c3e-ac31-4662-9577-ebed98af9dbb-public-tls-certs\") pod \"cinder-api-0\" (UID: \"82c29c3e-ac31-4662-9577-ebed98af9dbb\") " pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.346578 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82c29c3e-ac31-4662-9577-ebed98af9dbb-config-data-custom\") pod \"cinder-api-0\" (UID: \"82c29c3e-ac31-4662-9577-ebed98af9dbb\") " pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.347386 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/82c29c3e-ac31-4662-9577-ebed98af9dbb-config-data\") pod \"cinder-api-0\" (UID: \"82c29c3e-ac31-4662-9577-ebed98af9dbb\") " pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.351006 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r2nf\" (UniqueName: \"kubernetes.io/projected/82c29c3e-ac31-4662-9577-ebed98af9dbb-kube-api-access-5r2nf\") pod \"cinder-api-0\" (UID: \"82c29c3e-ac31-4662-9577-ebed98af9dbb\") " pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.368182 5016 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podcf835037-4e3d-4b3f-80bc-7629cfd8da5c"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podcf835037-4e3d-4b3f-80bc-7629cfd8da5c] : Timed out while waiting for systemd to remove kubepods-besteffort-podcf835037_4e3d_4b3f_80bc_7629cfd8da5c.slice" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.438598 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 11 07:57:01 crc kubenswrapper[5016]: I1011 07:57:01.895384 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 11 07:57:01 crc kubenswrapper[5016]: W1011 07:57:01.898634 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82c29c3e_ac31_4662_9577_ebed98af9dbb.slice/crio-e0d397676e21337b4d982f3ab59f7327654efa446962a8a60d4dd96d6875be38 WatchSource:0}: Error finding container e0d397676e21337b4d982f3ab59f7327654efa446962a8a60d4dd96d6875be38: Status 404 returned error can't find the container with id e0d397676e21337b4d982f3ab59f7327654efa446962a8a60d4dd96d6875be38 Oct 11 07:57:02 crc kubenswrapper[5016]: I1011 07:57:02.017194 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"82c29c3e-ac31-4662-9577-ebed98af9dbb","Type":"ContainerStarted","Data":"e0d397676e21337b4d982f3ab59f7327654efa446962a8a60d4dd96d6875be38"} Oct 11 07:57:02 crc kubenswrapper[5016]: I1011 07:57:02.019581 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bc0b78e-f920-4af6-901f-ef0d92d9b046","Type":"ContainerStarted","Data":"3df9b73419a05c014c37a7fbdf074076fd8a87ff3695a3c2382769cf7b713e05"} Oct 11 07:57:02 crc kubenswrapper[5016]: I1011 07:57:02.022329 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d89d796fd-cgg68" event={"ID":"06a479bd-8198-4dec-a682-6864aaaca48b","Type":"ContainerStarted","Data":"6122648ad474133027617b91461b5892a50b73432612ca14653be5fa81fe30d7"} Oct 11 07:57:02 crc kubenswrapper[5016]: I1011 07:57:02.022378 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d89d796fd-cgg68" event={"ID":"06a479bd-8198-4dec-a682-6864aaaca48b","Type":"ContainerStarted","Data":"c6e903874395367e98769e2067a02e7736bbcc912a6725a855fff6020860f7b3"} Oct 11 07:57:02 crc kubenswrapper[5016]: I1011 07:57:02.022392 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d89d796fd-cgg68" event={"ID":"06a479bd-8198-4dec-a682-6864aaaca48b","Type":"ContainerStarted","Data":"726d43a6603970bb3b733ac6b3f671944cb85a815853159f0441fcf465d6c63c"} Oct 11 07:57:02 crc kubenswrapper[5016]: I1011 07:57:02.022487 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d89d796fd-cgg68" 
Oct 11 07:57:02 crc kubenswrapper[5016]: I1011 07:57:02.022523 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d89d796fd-cgg68"
Oct 11 07:57:02 crc kubenswrapper[5016]: I1011 07:57:02.057932 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-d89d796fd-cgg68" podStartSLOduration=2.057913467 podStartE2EDuration="2.057913467s" podCreationTimestamp="2025-10-11 07:57:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:57:02.047873937 +0000 UTC m=+1009.948329883" watchObservedRunningTime="2025-10-11 07:57:02.057913467 +0000 UTC m=+1009.958369413"
Oct 11 07:57:02 crc kubenswrapper[5016]: I1011 07:57:02.698874 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6679b846c9-4jxzp"
Oct 11 07:57:02 crc kubenswrapper[5016]: I1011 07:57:02.762724 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4acf875d-ca40-47ff-a2e9-cdf09c447232-logs\") pod \"4acf875d-ca40-47ff-a2e9-cdf09c447232\" (UID: \"4acf875d-ca40-47ff-a2e9-cdf09c447232\") "
Oct 11 07:57:02 crc kubenswrapper[5016]: I1011 07:57:02.762863 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4acf875d-ca40-47ff-a2e9-cdf09c447232-horizon-secret-key\") pod \"4acf875d-ca40-47ff-a2e9-cdf09c447232\" (UID: \"4acf875d-ca40-47ff-a2e9-cdf09c447232\") "
Oct 11 07:57:02 crc kubenswrapper[5016]: I1011 07:57:02.762910 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7zhw\" (UniqueName: \"kubernetes.io/projected/4acf875d-ca40-47ff-a2e9-cdf09c447232-kube-api-access-g7zhw\") pod \"4acf875d-ca40-47ff-a2e9-cdf09c447232\" (UID: \"4acf875d-ca40-47ff-a2e9-cdf09c447232\") "
Oct 11 07:57:02 crc kubenswrapper[5016]: I1011 07:57:02.762987 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4acf875d-ca40-47ff-a2e9-cdf09c447232-scripts\") pod \"4acf875d-ca40-47ff-a2e9-cdf09c447232\" (UID: \"4acf875d-ca40-47ff-a2e9-cdf09c447232\") "
Oct 11 07:57:02 crc kubenswrapper[5016]: I1011 07:57:02.763037 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4acf875d-ca40-47ff-a2e9-cdf09c447232-config-data\") pod \"4acf875d-ca40-47ff-a2e9-cdf09c447232\" (UID: \"4acf875d-ca40-47ff-a2e9-cdf09c447232\") "
Oct 11 07:57:02 crc kubenswrapper[5016]: I1011 07:57:02.763457 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4acf875d-ca40-47ff-a2e9-cdf09c447232-logs" (OuterVolumeSpecName: "logs") pod "4acf875d-ca40-47ff-a2e9-cdf09c447232" (UID: "4acf875d-ca40-47ff-a2e9-cdf09c447232"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
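Editor's note: the "Observed pod startup duration" entry above carries both podStartSLOduration and podStartE2EDuration. For barbican-api-d89d796fd-cgg68 the pulling timestamps are the zero time, so no image-pull interval is subtracted and both values reduce to watchObservedRunningTime minus podCreationTimestamp. A quick check of that arithmetic in Go, using the timestamps copied from the entry (an illustration of the relationship, not the tracker's code):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-10-11 07:57:00 +0000 UTC")
	observed, _ := time.Parse(layout, "2025-10-11 07:57:02.057913467 +0000 UTC")
	// With zero firstStartedPulling/lastFinishedPulling, no pull time is
	// subtracted, so the SLO duration equals the end-to-end duration.
	fmt.Println(observed.Sub(created)) // 2.057913467s
}
```

The printed value matches podStartSLOduration=2.057913467 in the entry.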
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:57:02 crc kubenswrapper[5016]: I1011 07:57:02.764204 5016 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4acf875d-ca40-47ff-a2e9-cdf09c447232-logs\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:02 crc kubenswrapper[5016]: I1011 07:57:02.777358 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4acf875d-ca40-47ff-a2e9-cdf09c447232-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "4acf875d-ca40-47ff-a2e9-cdf09c447232" (UID: "4acf875d-ca40-47ff-a2e9-cdf09c447232"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:02 crc kubenswrapper[5016]: I1011 07:57:02.778297 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4acf875d-ca40-47ff-a2e9-cdf09c447232-kube-api-access-g7zhw" (OuterVolumeSpecName: "kube-api-access-g7zhw") pod "4acf875d-ca40-47ff-a2e9-cdf09c447232" (UID: "4acf875d-ca40-47ff-a2e9-cdf09c447232"). InnerVolumeSpecName "kube-api-access-g7zhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:57:02 crc kubenswrapper[5016]: I1011 07:57:02.793366 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4acf875d-ca40-47ff-a2e9-cdf09c447232-config-data" (OuterVolumeSpecName: "config-data") pod "4acf875d-ca40-47ff-a2e9-cdf09c447232" (UID: "4acf875d-ca40-47ff-a2e9-cdf09c447232"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:57:02 crc kubenswrapper[5016]: I1011 07:57:02.805590 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4acf875d-ca40-47ff-a2e9-cdf09c447232-scripts" (OuterVolumeSpecName: "scripts") pod "4acf875d-ca40-47ff-a2e9-cdf09c447232" (UID: "4acf875d-ca40-47ff-a2e9-cdf09c447232"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:57:02 crc kubenswrapper[5016]: I1011 07:57:02.867130 5016 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4acf875d-ca40-47ff-a2e9-cdf09c447232-scripts\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:02 crc kubenswrapper[5016]: I1011 07:57:02.867431 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4acf875d-ca40-47ff-a2e9-cdf09c447232-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:02 crc kubenswrapper[5016]: I1011 07:57:02.867446 5016 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4acf875d-ca40-47ff-a2e9-cdf09c447232-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:02 crc kubenswrapper[5016]: I1011 07:57:02.867459 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7zhw\" (UniqueName: \"kubernetes.io/projected/4acf875d-ca40-47ff-a2e9-cdf09c447232-kube-api-access-g7zhw\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:03 crc kubenswrapper[5016]: I1011 07:57:03.032004 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"82c29c3e-ac31-4662-9577-ebed98af9dbb","Type":"ContainerStarted","Data":"53a97fe81fb9bcb66cea740b9fc006a2ba3ee7e2420eeba9b838729880e362b3"} Oct 11 07:57:03 crc kubenswrapper[5016]: I1011 07:57:03.034879 5016 generic.go:334] "Generic (PLEG): container finished" podID="4acf875d-ca40-47ff-a2e9-cdf09c447232" containerID="a3d2d19a0256e30a37d14fdf3f91c36918a32758082be313bc1a5b4b3eb95faf" exitCode=137 Oct 11 07:57:03 crc kubenswrapper[5016]: I1011 07:57:03.034913 5016 generic.go:334] "Generic (PLEG): container finished" podID="4acf875d-ca40-47ff-a2e9-cdf09c447232" containerID="5d72718116db8e8695ba990f7751d940018b593fdc6a0ceb16b21dee3f0ae9d7" exitCode=137 Oct 11 07:57:03 crc kubenswrapper[5016]: I1011 07:57:03.035027 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6679b846c9-4jxzp" event={"ID":"4acf875d-ca40-47ff-a2e9-cdf09c447232","Type":"ContainerDied","Data":"a3d2d19a0256e30a37d14fdf3f91c36918a32758082be313bc1a5b4b3eb95faf"} Oct 11 07:57:03 crc kubenswrapper[5016]: I1011 07:57:03.035060 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6679b846c9-4jxzp" event={"ID":"4acf875d-ca40-47ff-a2e9-cdf09c447232","Type":"ContainerDied","Data":"5d72718116db8e8695ba990f7751d940018b593fdc6a0ceb16b21dee3f0ae9d7"} Oct 11 07:57:03 crc kubenswrapper[5016]: I1011 07:57:03.035092 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6679b846c9-4jxzp" event={"ID":"4acf875d-ca40-47ff-a2e9-cdf09c447232","Type":"ContainerDied","Data":"a7682a86a7d767e73eb1d0f63ca927f0fc82b8a33f9fca28fb269d8c33dac32e"} Oct 11 07:57:03 crc kubenswrapper[5016]: I1011 07:57:03.035094 5016 scope.go:117] "RemoveContainer" containerID="a3d2d19a0256e30a37d14fdf3f91c36918a32758082be313bc1a5b4b3eb95faf" Oct 11 07:57:03 crc kubenswrapper[5016]: I1011 07:57:03.035160 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6679b846c9-4jxzp" Oct 11 07:57:03 crc kubenswrapper[5016]: I1011 07:57:03.038742 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bc0b78e-f920-4af6-901f-ef0d92d9b046","Type":"ContainerStarted","Data":"5851a90d93e6ab43b92d30a327cf39693067d04f573744cd4c4df7df7e24b86e"} Oct 11 07:57:03 crc kubenswrapper[5016]: I1011 07:57:03.039480 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 11 07:57:03 crc kubenswrapper[5016]: I1011 07:57:03.090867 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.294248297 podStartE2EDuration="5.079138582s" podCreationTimestamp="2025-10-11 07:56:58 +0000 UTC" firstStartedPulling="2025-10-11 07:56:58.912011347 +0000 UTC m=+1006.812467293" lastFinishedPulling="2025-10-11 07:57:02.696901642 +0000 UTC m=+1010.597357578" observedRunningTime="2025-10-11 07:57:03.064850688 +0000 UTC m=+1010.965306634" watchObservedRunningTime="2025-10-11 07:57:03.079138582 +0000 UTC m=+1010.979594528" Oct 11 07:57:03 crc kubenswrapper[5016]: I1011 07:57:03.103178 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6679b846c9-4jxzp"] Oct 11 07:57:03 crc kubenswrapper[5016]: I1011 07:57:03.112568 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6679b846c9-4jxzp"] Oct 11 07:57:03 crc kubenswrapper[5016]: I1011 07:57:03.153084 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4acf875d-ca40-47ff-a2e9-cdf09c447232" path="/var/lib/kubelet/pods/4acf875d-ca40-47ff-a2e9-cdf09c447232/volumes" Oct 11 07:57:03 crc kubenswrapper[5016]: I1011 07:57:03.153719 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d7a0713-78e8-4813-9516-f848d69a5f21" path="/var/lib/kubelet/pods/5d7a0713-78e8-4813-9516-f848d69a5f21/volumes" Oct 11 07:57:03 crc kubenswrapper[5016]: I1011 07:57:03.242474 5016 scope.go:117] "RemoveContainer" containerID="5d72718116db8e8695ba990f7751d940018b593fdc6a0ceb16b21dee3f0ae9d7" Oct 11 07:57:03 crc kubenswrapper[5016]: I1011 07:57:03.267542 5016 scope.go:117] "RemoveContainer" containerID="a3d2d19a0256e30a37d14fdf3f91c36918a32758082be313bc1a5b4b3eb95faf" Oct 11 07:57:03 crc kubenswrapper[5016]: E1011 07:57:03.268282 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3d2d19a0256e30a37d14fdf3f91c36918a32758082be313bc1a5b4b3eb95faf\": container with ID starting with a3d2d19a0256e30a37d14fdf3f91c36918a32758082be313bc1a5b4b3eb95faf not found: ID does not exist" containerID="a3d2d19a0256e30a37d14fdf3f91c36918a32758082be313bc1a5b4b3eb95faf" Oct 11 07:57:03 crc kubenswrapper[5016]: I1011 07:57:03.268323 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3d2d19a0256e30a37d14fdf3f91c36918a32758082be313bc1a5b4b3eb95faf"} err="failed to get container status \"a3d2d19a0256e30a37d14fdf3f91c36918a32758082be313bc1a5b4b3eb95faf\": rpc error: code = NotFound desc = could not find container \"a3d2d19a0256e30a37d14fdf3f91c36918a32758082be313bc1a5b4b3eb95faf\": container with ID starting with a3d2d19a0256e30a37d14fdf3f91c36918a32758082be313bc1a5b4b3eb95faf not found: ID does not exist" Oct 11 07:57:03 crc kubenswrapper[5016]: I1011 07:57:03.268344 5016 scope.go:117] "RemoveContainer" containerID="5d72718116db8e8695ba990f7751d940018b593fdc6a0ceb16b21dee3f0ae9d7" Oct 
11 07:57:03 crc kubenswrapper[5016]: E1011 07:57:03.268885 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d72718116db8e8695ba990f7751d940018b593fdc6a0ceb16b21dee3f0ae9d7\": container with ID starting with 5d72718116db8e8695ba990f7751d940018b593fdc6a0ceb16b21dee3f0ae9d7 not found: ID does not exist" containerID="5d72718116db8e8695ba990f7751d940018b593fdc6a0ceb16b21dee3f0ae9d7"
Oct 11 07:57:03 crc kubenswrapper[5016]: I1011 07:57:03.268909 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d72718116db8e8695ba990f7751d940018b593fdc6a0ceb16b21dee3f0ae9d7"} err="failed to get container status \"5d72718116db8e8695ba990f7751d940018b593fdc6a0ceb16b21dee3f0ae9d7\": rpc error: code = NotFound desc = could not find container \"5d72718116db8e8695ba990f7751d940018b593fdc6a0ceb16b21dee3f0ae9d7\": container with ID starting with 5d72718116db8e8695ba990f7751d940018b593fdc6a0ceb16b21dee3f0ae9d7 not found: ID does not exist"
Oct 11 07:57:03 crc kubenswrapper[5016]: I1011 07:57:03.268930 5016 scope.go:117] "RemoveContainer" containerID="a3d2d19a0256e30a37d14fdf3f91c36918a32758082be313bc1a5b4b3eb95faf"
Oct 11 07:57:03 crc kubenswrapper[5016]: I1011 07:57:03.269401 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3d2d19a0256e30a37d14fdf3f91c36918a32758082be313bc1a5b4b3eb95faf"} err="failed to get container status \"a3d2d19a0256e30a37d14fdf3f91c36918a32758082be313bc1a5b4b3eb95faf\": rpc error: code = NotFound desc = could not find container \"a3d2d19a0256e30a37d14fdf3f91c36918a32758082be313bc1a5b4b3eb95faf\": container with ID starting with a3d2d19a0256e30a37d14fdf3f91c36918a32758082be313bc1a5b4b3eb95faf not found: ID does not exist"
Oct 11 07:57:03 crc kubenswrapper[5016]: I1011 07:57:03.269554 5016 scope.go:117] "RemoveContainer" containerID="5d72718116db8e8695ba990f7751d940018b593fdc6a0ceb16b21dee3f0ae9d7"
Oct 11 07:57:03 crc kubenswrapper[5016]: I1011 07:57:03.270041 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d72718116db8e8695ba990f7751d940018b593fdc6a0ceb16b21dee3f0ae9d7"} err="failed to get container status \"5d72718116db8e8695ba990f7751d940018b593fdc6a0ceb16b21dee3f0ae9d7\": rpc error: code = NotFound desc = could not find container \"5d72718116db8e8695ba990f7751d940018b593fdc6a0ceb16b21dee3f0ae9d7\": container with ID starting with 5d72718116db8e8695ba990f7751d940018b593fdc6a0ceb16b21dee3f0ae9d7 not found: ID does not exist"
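Editor's note: the NotFound errors above are the benign tail of container removal. The kubelet re-requests the status of containers the runtime has already deleted, and the retried RemoveContainer calls simply confirm they are gone. A cleanup helper with that idempotent shape might look like the following sketch, where the one-method runtimeService interface is a hypothetical stand-in for the CRI client:

```go
package cleanup

import (
	"context"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// runtimeService is a stand-in for the CRI runtime client; only the
// single call used here is assumed.
type runtimeService interface {
	RemoveContainer(ctx context.Context, containerID string) error
}

// removeIfPresent deletes a container but treats "already gone" as
// success, so repeated removal attempts like the ones logged above
// converge instead of surfacing an error.
func removeIfPresent(ctx context.Context, rt runtimeService, id string) error {
	err := rt.RemoveContainer(ctx, id)
	if status.Code(err) == codes.NotFound {
		return nil // the runtime already removed it; nothing to do
	}
	return err
}
```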
pod="openstack/cinder-api-0" podStartSLOduration=3.081177212 podStartE2EDuration="3.081177212s" podCreationTimestamp="2025-10-11 07:57:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:57:04.073488335 +0000 UTC m=+1011.973944291" watchObservedRunningTime="2025-10-11 07:57:04.081177212 +0000 UTC m=+1011.981633158" Oct 11 07:57:04 crc kubenswrapper[5016]: I1011 07:57:04.249694 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-59dc8c6b68-jd4p4" Oct 11 07:57:05 crc kubenswrapper[5016]: I1011 07:57:05.405055 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85494b87f-4xhlv" Oct 11 07:57:05 crc kubenswrapper[5016]: I1011 07:57:05.479471 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f7d8dc7ff-cxklp"] Oct 11 07:57:05 crc kubenswrapper[5016]: I1011 07:57:05.479875 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f7d8dc7ff-cxklp" podUID="94fe6cdc-7b3c-4643-bc8a-664a9c12590e" containerName="dnsmasq-dns" containerID="cri-o://452437cc725c10d85b30318a843747598930ea2a045b388c91218e10420c0e81" gracePeriod=10 Oct 11 07:57:05 crc kubenswrapper[5016]: I1011 07:57:05.519525 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 11 07:57:05 crc kubenswrapper[5016]: I1011 07:57:05.574226 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 11 07:57:05 crc kubenswrapper[5016]: I1011 07:57:05.875039 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-df49866-g5nkl" Oct 11 07:57:05 crc kubenswrapper[5016]: I1011 07:57:05.959261 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-56fd798d48-f9v6h" Oct 11 07:57:05 crc kubenswrapper[5016]: I1011 07:57:05.980722 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f7d8dc7ff-cxklp" Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.030497 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-config\") pod \"94fe6cdc-7b3c-4643-bc8a-664a9c12590e\" (UID: \"94fe6cdc-7b3c-4643-bc8a-664a9c12590e\") " Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.030696 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kvrn\" (UniqueName: \"kubernetes.io/projected/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-kube-api-access-4kvrn\") pod \"94fe6cdc-7b3c-4643-bc8a-664a9c12590e\" (UID: \"94fe6cdc-7b3c-4643-bc8a-664a9c12590e\") " Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.030795 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-dns-svc\") pod \"94fe6cdc-7b3c-4643-bc8a-664a9c12590e\" (UID: \"94fe6cdc-7b3c-4643-bc8a-664a9c12590e\") " Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.030907 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-ovsdbserver-sb\") pod \"94fe6cdc-7b3c-4643-bc8a-664a9c12590e\" (UID: \"94fe6cdc-7b3c-4643-bc8a-664a9c12590e\") " Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.031009 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-ovsdbserver-nb\") pod \"94fe6cdc-7b3c-4643-bc8a-664a9c12590e\" (UID: \"94fe6cdc-7b3c-4643-bc8a-664a9c12590e\") " Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.063405 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-kube-api-access-4kvrn" (OuterVolumeSpecName: "kube-api-access-4kvrn") pod "94fe6cdc-7b3c-4643-bc8a-664a9c12590e" (UID: "94fe6cdc-7b3c-4643-bc8a-664a9c12590e"). InnerVolumeSpecName "kube-api-access-4kvrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.074782 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-65987df486-lvrh6" Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.091871 5016 generic.go:334] "Generic (PLEG): container finished" podID="94fe6cdc-7b3c-4643-bc8a-664a9c12590e" containerID="452437cc725c10d85b30318a843747598930ea2a045b388c91218e10420c0e81" exitCode=0 Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.091979 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f7d8dc7ff-cxklp" Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.092076 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f7d8dc7ff-cxklp" event={"ID":"94fe6cdc-7b3c-4643-bc8a-664a9c12590e","Type":"ContainerDied","Data":"452437cc725c10d85b30318a843747598930ea2a045b388c91218e10420c0e81"} Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.092109 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f7d8dc7ff-cxklp" event={"ID":"94fe6cdc-7b3c-4643-bc8a-664a9c12590e","Type":"ContainerDied","Data":"f45197384b64137989d8e2a25c7f1a5b885afe007a1abd74382ba9498fe7e7f8"} Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.092127 5016 scope.go:117] "RemoveContainer" containerID="452437cc725c10d85b30318a843747598930ea2a045b388c91218e10420c0e81" Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.092400 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d7381a01-fcca-4cf1-8bd2-394c4ff9eecb" containerName="cinder-scheduler" containerID="cri-o://0c1fed1a1bf47e100c6c9e7097541828c712513bf8770a15d84236d5744bb610" gracePeriod=30 Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.092513 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d7381a01-fcca-4cf1-8bd2-394c4ff9eecb" containerName="probe" containerID="cri-o://5e7f7e044990ba881a962011a9b4fcd6a39e2a6649835cdd61ccfd8a687350b4" gracePeriod=30 Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.113741 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-config" (OuterVolumeSpecName: "config") pod "94fe6cdc-7b3c-4643-bc8a-664a9c12590e" (UID: "94fe6cdc-7b3c-4643-bc8a-664a9c12590e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.131437 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "94fe6cdc-7b3c-4643-bc8a-664a9c12590e" (UID: "94fe6cdc-7b3c-4643-bc8a-664a9c12590e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.135744 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-df49866-g5nkl"] Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.135930 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-df49866-g5nkl" podUID="771aebc7-25b0-45ef-bbd4-ed6c367b998b" containerName="horizon-log" containerID="cri-o://88a3fa4b2d02d92df9c385aedaee1f94d3f98a5a70e070da060f674d8839c7cf" gracePeriod=30 Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.135949 5016 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.135989 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.136030 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kvrn\" (UniqueName: \"kubernetes.io/projected/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-kube-api-access-4kvrn\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.136313 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-df49866-g5nkl" podUID="771aebc7-25b0-45ef-bbd4-ed6c367b998b" containerName="horizon" containerID="cri-o://0fcda72eca430ed7251bbba4b88112df6de9debb48a9a93bd9030d94bd84e9f6" gracePeriod=30 Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.145451 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "94fe6cdc-7b3c-4643-bc8a-664a9c12590e" (UID: "94fe6cdc-7b3c-4643-bc8a-664a9c12590e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.165877 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "94fe6cdc-7b3c-4643-bc8a-664a9c12590e" (UID: "94fe6cdc-7b3c-4643-bc8a-664a9c12590e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.201356 5016 scope.go:117] "RemoveContainer" containerID="662558b3e8f1aadabf84dad245794e1b311b36f52504bc3be5c31b4384838884" Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.225232 5016 scope.go:117] "RemoveContainer" containerID="452437cc725c10d85b30318a843747598930ea2a045b388c91218e10420c0e81" Oct 11 07:57:06 crc kubenswrapper[5016]: E1011 07:57:06.225736 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"452437cc725c10d85b30318a843747598930ea2a045b388c91218e10420c0e81\": container with ID starting with 452437cc725c10d85b30318a843747598930ea2a045b388c91218e10420c0e81 not found: ID does not exist" containerID="452437cc725c10d85b30318a843747598930ea2a045b388c91218e10420c0e81" Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.225797 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"452437cc725c10d85b30318a843747598930ea2a045b388c91218e10420c0e81"} err="failed to get container status \"452437cc725c10d85b30318a843747598930ea2a045b388c91218e10420c0e81\": rpc error: code = NotFound desc = could not find container \"452437cc725c10d85b30318a843747598930ea2a045b388c91218e10420c0e81\": container with ID starting with 452437cc725c10d85b30318a843747598930ea2a045b388c91218e10420c0e81 not found: ID does not exist" Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.225824 5016 scope.go:117] "RemoveContainer" containerID="662558b3e8f1aadabf84dad245794e1b311b36f52504bc3be5c31b4384838884" Oct 11 07:57:06 crc kubenswrapper[5016]: E1011 07:57:06.226154 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"662558b3e8f1aadabf84dad245794e1b311b36f52504bc3be5c31b4384838884\": container with ID starting with 662558b3e8f1aadabf84dad245794e1b311b36f52504bc3be5c31b4384838884 not found: ID does not exist" containerID="662558b3e8f1aadabf84dad245794e1b311b36f52504bc3be5c31b4384838884" Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.226187 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662558b3e8f1aadabf84dad245794e1b311b36f52504bc3be5c31b4384838884"} err="failed to get container status \"662558b3e8f1aadabf84dad245794e1b311b36f52504bc3be5c31b4384838884\": rpc error: code = NotFound desc = could not find container \"662558b3e8f1aadabf84dad245794e1b311b36f52504bc3be5c31b4384838884\": container with ID starting with 662558b3e8f1aadabf84dad245794e1b311b36f52504bc3be5c31b4384838884 not found: ID does not exist" Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.238463 5016 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.238510 5016 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94fe6cdc-7b3c-4643-bc8a-664a9c12590e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.359245 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-56fd798d48-f9v6h" Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.451608 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-7f7d8dc7ff-cxklp"] Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.460558 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f7d8dc7ff-cxklp"] Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.687834 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5c8b64649f-69xkr" Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.767179 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59dc8c6b68-jd4p4"] Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.767428 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-59dc8c6b68-jd4p4" podUID="4f6cfa91-27d5-4f55-ba9f-9d61367584ca" containerName="neutron-api" containerID="cri-o://19278b670332c5311619044a69cb05747fcd4e340cdfbd1e589100cf0a1e7323" gracePeriod=30 Oct 11 07:57:06 crc kubenswrapper[5016]: I1011 07:57:06.767866 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-59dc8c6b68-jd4p4" podUID="4f6cfa91-27d5-4f55-ba9f-9d61367584ca" containerName="neutron-httpd" containerID="cri-o://0ff4496cc41b82abf0c84e553e0ad482d2a2d59dcbf267abe776975eb46c7085" gracePeriod=30 Oct 11 07:57:07 crc kubenswrapper[5016]: I1011 07:57:07.103438 5016 generic.go:334] "Generic (PLEG): container finished" podID="d7381a01-fcca-4cf1-8bd2-394c4ff9eecb" containerID="5e7f7e044990ba881a962011a9b4fcd6a39e2a6649835cdd61ccfd8a687350b4" exitCode=0 Oct 11 07:57:07 crc kubenswrapper[5016]: I1011 07:57:07.103473 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb","Type":"ContainerDied","Data":"5e7f7e044990ba881a962011a9b4fcd6a39e2a6649835cdd61ccfd8a687350b4"} Oct 11 07:57:07 crc kubenswrapper[5016]: I1011 07:57:07.107009 5016 generic.go:334] "Generic (PLEG): container finished" podID="4f6cfa91-27d5-4f55-ba9f-9d61367584ca" containerID="0ff4496cc41b82abf0c84e553e0ad482d2a2d59dcbf267abe776975eb46c7085" exitCode=0 Oct 11 07:57:07 crc kubenswrapper[5016]: I1011 07:57:07.107056 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59dc8c6b68-jd4p4" event={"ID":"4f6cfa91-27d5-4f55-ba9f-9d61367584ca","Type":"ContainerDied","Data":"0ff4496cc41b82abf0c84e553e0ad482d2a2d59dcbf267abe776975eb46c7085"} Oct 11 07:57:07 crc kubenswrapper[5016]: I1011 07:57:07.144344 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94fe6cdc-7b3c-4643-bc8a-664a9c12590e" path="/var/lib/kubelet/pods/94fe6cdc-7b3c-4643-bc8a-664a9c12590e/volumes" Oct 11 07:57:09 crc kubenswrapper[5016]: I1011 07:57:09.635838 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 11 07:57:09 crc kubenswrapper[5016]: I1011 07:57:09.714570 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-config-data-custom\") pod \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\" (UID: \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\") " Oct 11 07:57:09 crc kubenswrapper[5016]: I1011 07:57:09.714684 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-scripts\") pod \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\" (UID: \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\") " Oct 11 07:57:09 crc kubenswrapper[5016]: I1011 07:57:09.714738 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-config-data\") pod \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\" (UID: \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\") " Oct 11 07:57:09 crc kubenswrapper[5016]: I1011 07:57:09.714828 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-combined-ca-bundle\") pod \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\" (UID: \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\") " Oct 11 07:57:09 crc kubenswrapper[5016]: I1011 07:57:09.714856 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-etc-machine-id\") pod \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\" (UID: \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\") " Oct 11 07:57:09 crc kubenswrapper[5016]: I1011 07:57:09.714918 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72xpm\" (UniqueName: \"kubernetes.io/projected/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-kube-api-access-72xpm\") pod \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\" (UID: \"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb\") " Oct 11 07:57:09 crc kubenswrapper[5016]: I1011 07:57:09.714979 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d7381a01-fcca-4cf1-8bd2-394c4ff9eecb" (UID: "d7381a01-fcca-4cf1-8bd2-394c4ff9eecb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 07:57:09 crc kubenswrapper[5016]: I1011 07:57:09.715260 5016 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:09 crc kubenswrapper[5016]: I1011 07:57:09.723975 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d7381a01-fcca-4cf1-8bd2-394c4ff9eecb" (UID: "d7381a01-fcca-4cf1-8bd2-394c4ff9eecb"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:09 crc kubenswrapper[5016]: I1011 07:57:09.734793 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-kube-api-access-72xpm" (OuterVolumeSpecName: "kube-api-access-72xpm") pod "d7381a01-fcca-4cf1-8bd2-394c4ff9eecb" (UID: "d7381a01-fcca-4cf1-8bd2-394c4ff9eecb"). InnerVolumeSpecName "kube-api-access-72xpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:57:09 crc kubenswrapper[5016]: I1011 07:57:09.734929 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-scripts" (OuterVolumeSpecName: "scripts") pod "d7381a01-fcca-4cf1-8bd2-394c4ff9eecb" (UID: "d7381a01-fcca-4cf1-8bd2-394c4ff9eecb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:09 crc kubenswrapper[5016]: I1011 07:57:09.762820 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7381a01-fcca-4cf1-8bd2-394c4ff9eecb" (UID: "d7381a01-fcca-4cf1-8bd2-394c4ff9eecb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:09 crc kubenswrapper[5016]: I1011 07:57:09.816825 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:09 crc kubenswrapper[5016]: I1011 07:57:09.816855 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72xpm\" (UniqueName: \"kubernetes.io/projected/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-kube-api-access-72xpm\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:09 crc kubenswrapper[5016]: I1011 07:57:09.816867 5016 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:09 crc kubenswrapper[5016]: I1011 07:57:09.816876 5016 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-scripts\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:09 crc kubenswrapper[5016]: I1011 07:57:09.834798 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-config-data" (OuterVolumeSpecName: "config-data") pod "d7381a01-fcca-4cf1-8bd2-394c4ff9eecb" (UID: "d7381a01-fcca-4cf1-8bd2-394c4ff9eecb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:09 crc kubenswrapper[5016]: I1011 07:57:09.918879 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.145557 5016 generic.go:334] "Generic (PLEG): container finished" podID="771aebc7-25b0-45ef-bbd4-ed6c367b998b" containerID="0fcda72eca430ed7251bbba4b88112df6de9debb48a9a93bd9030d94bd84e9f6" exitCode=0 Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.145609 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-df49866-g5nkl" event={"ID":"771aebc7-25b0-45ef-bbd4-ed6c367b998b","Type":"ContainerDied","Data":"0fcda72eca430ed7251bbba4b88112df6de9debb48a9a93bd9030d94bd84e9f6"} Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.148424 5016 generic.go:334] "Generic (PLEG): container finished" podID="d7381a01-fcca-4cf1-8bd2-394c4ff9eecb" containerID="0c1fed1a1bf47e100c6c9e7097541828c712513bf8770a15d84236d5744bb610" exitCode=0 Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.148460 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb","Type":"ContainerDied","Data":"0c1fed1a1bf47e100c6c9e7097541828c712513bf8770a15d84236d5744bb610"} Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.148487 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d7381a01-fcca-4cf1-8bd2-394c4ff9eecb","Type":"ContainerDied","Data":"f19dd6cb5234875ced18d9c52672f954f53d57cc5c991f13fe1856f2651efd9b"} Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.148506 5016 scope.go:117] "RemoveContainer" containerID="5e7f7e044990ba881a962011a9b4fcd6a39e2a6649835cdd61ccfd8a687350b4" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.148637 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.229311 5016 scope.go:117] "RemoveContainer" containerID="0c1fed1a1bf47e100c6c9e7097541828c712513bf8770a15d84236d5744bb610" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.247290 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.260434 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.278934 5016 scope.go:117] "RemoveContainer" containerID="5e7f7e044990ba881a962011a9b4fcd6a39e2a6649835cdd61ccfd8a687350b4" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.283503 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 11 07:57:10 crc kubenswrapper[5016]: E1011 07:57:10.284001 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4acf875d-ca40-47ff-a2e9-cdf09c447232" containerName="horizon" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.284026 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="4acf875d-ca40-47ff-a2e9-cdf09c447232" containerName="horizon" Oct 11 07:57:10 crc kubenswrapper[5016]: E1011 07:57:10.284034 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e7f7e044990ba881a962011a9b4fcd6a39e2a6649835cdd61ccfd8a687350b4\": container with ID starting with 5e7f7e044990ba881a962011a9b4fcd6a39e2a6649835cdd61ccfd8a687350b4 not found: ID does not exist" containerID="5e7f7e044990ba881a962011a9b4fcd6a39e2a6649835cdd61ccfd8a687350b4" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.284089 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e7f7e044990ba881a962011a9b4fcd6a39e2a6649835cdd61ccfd8a687350b4"} err="failed to get container status \"5e7f7e044990ba881a962011a9b4fcd6a39e2a6649835cdd61ccfd8a687350b4\": rpc error: code = NotFound desc = could not find container \"5e7f7e044990ba881a962011a9b4fcd6a39e2a6649835cdd61ccfd8a687350b4\": container with ID starting with 5e7f7e044990ba881a962011a9b4fcd6a39e2a6649835cdd61ccfd8a687350b4 not found: ID does not exist" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.284123 5016 scope.go:117] "RemoveContainer" containerID="0c1fed1a1bf47e100c6c9e7097541828c712513bf8770a15d84236d5744bb610" Oct 11 07:57:10 crc kubenswrapper[5016]: E1011 07:57:10.284057 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4acf875d-ca40-47ff-a2e9-cdf09c447232" containerName="horizon-log" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.284245 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="4acf875d-ca40-47ff-a2e9-cdf09c447232" containerName="horizon-log" Oct 11 07:57:10 crc kubenswrapper[5016]: E1011 07:57:10.284387 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94fe6cdc-7b3c-4643-bc8a-664a9c12590e" containerName="dnsmasq-dns" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.284406 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="94fe6cdc-7b3c-4643-bc8a-664a9c12590e" containerName="dnsmasq-dns" Oct 11 07:57:10 crc kubenswrapper[5016]: E1011 07:57:10.284436 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7381a01-fcca-4cf1-8bd2-394c4ff9eecb" containerName="cinder-scheduler" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.284445 5016 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d7381a01-fcca-4cf1-8bd2-394c4ff9eecb" containerName="cinder-scheduler" Oct 11 07:57:10 crc kubenswrapper[5016]: E1011 07:57:10.284490 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7381a01-fcca-4cf1-8bd2-394c4ff9eecb" containerName="probe" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.284503 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7381a01-fcca-4cf1-8bd2-394c4ff9eecb" containerName="probe" Oct 11 07:57:10 crc kubenswrapper[5016]: E1011 07:57:10.284532 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94fe6cdc-7b3c-4643-bc8a-664a9c12590e" containerName="init" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.284542 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="94fe6cdc-7b3c-4643-bc8a-664a9c12590e" containerName="init" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.284958 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7381a01-fcca-4cf1-8bd2-394c4ff9eecb" containerName="probe" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.284987 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="4acf875d-ca40-47ff-a2e9-cdf09c447232" containerName="horizon" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.285012 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="4acf875d-ca40-47ff-a2e9-cdf09c447232" containerName="horizon-log" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.285023 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="94fe6cdc-7b3c-4643-bc8a-664a9c12590e" containerName="dnsmasq-dns" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.285039 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7381a01-fcca-4cf1-8bd2-394c4ff9eecb" containerName="cinder-scheduler" Oct 11 07:57:10 crc kubenswrapper[5016]: E1011 07:57:10.286666 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c1fed1a1bf47e100c6c9e7097541828c712513bf8770a15d84236d5744bb610\": container with ID starting with 0c1fed1a1bf47e100c6c9e7097541828c712513bf8770a15d84236d5744bb610 not found: ID does not exist" containerID="0c1fed1a1bf47e100c6c9e7097541828c712513bf8770a15d84236d5744bb610" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.286708 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c1fed1a1bf47e100c6c9e7097541828c712513bf8770a15d84236d5744bb610"} err="failed to get container status \"0c1fed1a1bf47e100c6c9e7097541828c712513bf8770a15d84236d5744bb610\": rpc error: code = NotFound desc = could not find container \"0c1fed1a1bf47e100c6c9e7097541828c712513bf8770a15d84236d5744bb610\": container with ID starting with 0c1fed1a1bf47e100c6c9e7097541828c712513bf8770a15d84236d5744bb610 not found: ID does not exist" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.288079 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.291227 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.291854 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.427820 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14ae562e-2b57-478f-89cd-8330105eacdf-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"14ae562e-2b57-478f-89cd-8330105eacdf\") " pod="openstack/cinder-scheduler-0" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.427891 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ae562e-2b57-478f-89cd-8330105eacdf-config-data\") pod \"cinder-scheduler-0\" (UID: \"14ae562e-2b57-478f-89cd-8330105eacdf\") " pod="openstack/cinder-scheduler-0" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.428061 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14ae562e-2b57-478f-89cd-8330105eacdf-scripts\") pod \"cinder-scheduler-0\" (UID: \"14ae562e-2b57-478f-89cd-8330105eacdf\") " pod="openstack/cinder-scheduler-0" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.428168 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtb8j\" (UniqueName: \"kubernetes.io/projected/14ae562e-2b57-478f-89cd-8330105eacdf-kube-api-access-wtb8j\") pod \"cinder-scheduler-0\" (UID: \"14ae562e-2b57-478f-89cd-8330105eacdf\") " pod="openstack/cinder-scheduler-0" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.428226 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/14ae562e-2b57-478f-89cd-8330105eacdf-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"14ae562e-2b57-478f-89cd-8330105eacdf\") " pod="openstack/cinder-scheduler-0" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.428261 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ae562e-2b57-478f-89cd-8330105eacdf-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"14ae562e-2b57-478f-89cd-8330105eacdf\") " pod="openstack/cinder-scheduler-0" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.529756 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ae562e-2b57-478f-89cd-8330105eacdf-config-data\") pod \"cinder-scheduler-0\" (UID: \"14ae562e-2b57-478f-89cd-8330105eacdf\") " pod="openstack/cinder-scheduler-0" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.529830 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14ae562e-2b57-478f-89cd-8330105eacdf-scripts\") pod \"cinder-scheduler-0\" (UID: \"14ae562e-2b57-478f-89cd-8330105eacdf\") " pod="openstack/cinder-scheduler-0" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.529863 5016 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-wtb8j\" (UniqueName: \"kubernetes.io/projected/14ae562e-2b57-478f-89cd-8330105eacdf-kube-api-access-wtb8j\") pod \"cinder-scheduler-0\" (UID: \"14ae562e-2b57-478f-89cd-8330105eacdf\") " pod="openstack/cinder-scheduler-0" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.529889 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/14ae562e-2b57-478f-89cd-8330105eacdf-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"14ae562e-2b57-478f-89cd-8330105eacdf\") " pod="openstack/cinder-scheduler-0" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.529908 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ae562e-2b57-478f-89cd-8330105eacdf-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"14ae562e-2b57-478f-89cd-8330105eacdf\") " pod="openstack/cinder-scheduler-0" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.529978 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14ae562e-2b57-478f-89cd-8330105eacdf-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"14ae562e-2b57-478f-89cd-8330105eacdf\") " pod="openstack/cinder-scheduler-0" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.530305 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/14ae562e-2b57-478f-89cd-8330105eacdf-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"14ae562e-2b57-478f-89cd-8330105eacdf\") " pod="openstack/cinder-scheduler-0" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.533441 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ae562e-2b57-478f-89cd-8330105eacdf-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"14ae562e-2b57-478f-89cd-8330105eacdf\") " pod="openstack/cinder-scheduler-0" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.533911 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14ae562e-2b57-478f-89cd-8330105eacdf-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"14ae562e-2b57-478f-89cd-8330105eacdf\") " pod="openstack/cinder-scheduler-0" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.534507 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14ae562e-2b57-478f-89cd-8330105eacdf-scripts\") pod \"cinder-scheduler-0\" (UID: \"14ae562e-2b57-478f-89cd-8330105eacdf\") " pod="openstack/cinder-scheduler-0" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.535592 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14ae562e-2b57-478f-89cd-8330105eacdf-config-data\") pod \"cinder-scheduler-0\" (UID: \"14ae562e-2b57-478f-89cd-8330105eacdf\") " pod="openstack/cinder-scheduler-0" Oct 11 07:57:10 crc kubenswrapper[5016]: I1011 07:57:10.560221 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtb8j\" (UniqueName: \"kubernetes.io/projected/14ae562e-2b57-478f-89cd-8330105eacdf-kube-api-access-wtb8j\") pod \"cinder-scheduler-0\" (UID: \"14ae562e-2b57-478f-89cd-8330105eacdf\") " pod="openstack/cinder-scheduler-0" Oct 11 07:57:10 
crc kubenswrapper[5016]: I1011 07:57:10.612105 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Oct 11 07:57:11 crc kubenswrapper[5016]: I1011 07:57:11.055080 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 11 07:57:11 crc kubenswrapper[5016]: I1011 07:57:11.146267 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7381a01-fcca-4cf1-8bd2-394c4ff9eecb" path="/var/lib/kubelet/pods/d7381a01-fcca-4cf1-8bd2-394c4ff9eecb/volumes"
Oct 11 07:57:11 crc kubenswrapper[5016]: I1011 07:57:11.171407 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"14ae562e-2b57-478f-89cd-8330105eacdf","Type":"ContainerStarted","Data":"01cdada8224775652865e8aa0af4b98c58809d388a884c89b25f9ff1b8b5b940"}
Oct 11 07:57:11 crc kubenswrapper[5016]: I1011 07:57:11.665439 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-df49866-g5nkl" podUID="771aebc7-25b0-45ef-bbd4-ed6c367b998b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.141:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.141:8443: connect: connection refused"
Oct 11 07:57:12 crc kubenswrapper[5016]: I1011 07:57:12.134551 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d89d796fd-cgg68"
Oct 11 07:57:12 crc kubenswrapper[5016]: I1011 07:57:12.204403 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"14ae562e-2b57-478f-89cd-8330105eacdf","Type":"ContainerStarted","Data":"e0835f74268868e54c884e3f7f5f6ec3663ed39fc88e0a94c4f71b6ed100fa6f"}
Oct 11 07:57:12 crc kubenswrapper[5016]: I1011 07:57:12.206672 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59dc8c6b68-jd4p4" event={"ID":"4f6cfa91-27d5-4f55-ba9f-9d61367584ca","Type":"ContainerDied","Data":"19278b670332c5311619044a69cb05747fcd4e340cdfbd1e589100cf0a1e7323"}
Oct 11 07:57:12 crc kubenswrapper[5016]: I1011 07:57:12.206746 5016 generic.go:334] "Generic (PLEG): container finished" podID="4f6cfa91-27d5-4f55-ba9f-9d61367584ca" containerID="19278b670332c5311619044a69cb05747fcd4e340cdfbd1e589100cf0a1e7323" exitCode=0
Oct 11 07:57:12 crc kubenswrapper[5016]: I1011 07:57:12.211329 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d89d796fd-cgg68"
Oct 11 07:57:12 crc kubenswrapper[5016]: I1011 07:57:12.288213 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-56fd798d48-f9v6h"]
Oct 11 07:57:12 crc kubenswrapper[5016]: I1011 07:57:12.288644 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-56fd798d48-f9v6h" podUID="a3f50dd7-7f92-4bc4-a99b-e96dd2929067" containerName="barbican-api-log" containerID="cri-o://5a7e37384b180d0c8ccf91fc4d10ed42e31a12ccf890c09a9641dbb4efdfe3e4" gracePeriod=30
Oct 11 07:57:12 crc kubenswrapper[5016]: I1011 07:57:12.289247 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-56fd798d48-f9v6h" podUID="a3f50dd7-7f92-4bc4-a99b-e96dd2929067" containerName="barbican-api" containerID="cri-o://abe64c199517db214a38b759d92b202c240e3982f0fc769f207b3a0846e81c07" gracePeriod=30
Oct 11 07:57:12 crc kubenswrapper[5016]: I1011 07:57:12.723243 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59dc8c6b68-jd4p4"
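Editor's note: the horizon readiness probe above fails with "connection refused" because the container was just killed and nothing is listening on 10.217.0.141:8443 yet. An HTTP readiness-style check treats any transport error as a failure and, roughly, 2xx/3xx as success. A minimal standalone version of that check, not the kubelet prober itself:

```go
package probe

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// httpReady performs a one-shot readiness-style check: success is any
// 2xx/3xx status, and transport errors (e.g. "connection refused"
// while horizon restarts, as logged above) count as failures.
// Skipping TLS verification stands in for probing a pod's self-signed
// endpoint; this is an illustrative check, not the kubelet's prober.
func httpReady(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: timeout,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // dial/TLS errors surface as probe failures
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return nil
	}
	return fmt.Errorf("unexpected status %d", resp.StatusCode)
}
```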
Need to start a new one" pod="openstack/neutron-59dc8c6b68-jd4p4" Oct 11 07:57:12 crc kubenswrapper[5016]: I1011 07:57:12.798495 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-config\") pod \"4f6cfa91-27d5-4f55-ba9f-9d61367584ca\" (UID: \"4f6cfa91-27d5-4f55-ba9f-9d61367584ca\") " Oct 11 07:57:12 crc kubenswrapper[5016]: I1011 07:57:12.798544 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-combined-ca-bundle\") pod \"4f6cfa91-27d5-4f55-ba9f-9d61367584ca\" (UID: \"4f6cfa91-27d5-4f55-ba9f-9d61367584ca\") " Oct 11 07:57:12 crc kubenswrapper[5016]: I1011 07:57:12.798590 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8g97\" (UniqueName: \"kubernetes.io/projected/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-kube-api-access-s8g97\") pod \"4f6cfa91-27d5-4f55-ba9f-9d61367584ca\" (UID: \"4f6cfa91-27d5-4f55-ba9f-9d61367584ca\") " Oct 11 07:57:12 crc kubenswrapper[5016]: I1011 07:57:12.798755 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-httpd-config\") pod \"4f6cfa91-27d5-4f55-ba9f-9d61367584ca\" (UID: \"4f6cfa91-27d5-4f55-ba9f-9d61367584ca\") " Oct 11 07:57:12 crc kubenswrapper[5016]: I1011 07:57:12.798777 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-ovndb-tls-certs\") pod \"4f6cfa91-27d5-4f55-ba9f-9d61367584ca\" (UID: \"4f6cfa91-27d5-4f55-ba9f-9d61367584ca\") " Oct 11 07:57:12 crc kubenswrapper[5016]: I1011 07:57:12.804950 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "4f6cfa91-27d5-4f55-ba9f-9d61367584ca" (UID: "4f6cfa91-27d5-4f55-ba9f-9d61367584ca"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:12 crc kubenswrapper[5016]: I1011 07:57:12.805837 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-kube-api-access-s8g97" (OuterVolumeSpecName: "kube-api-access-s8g97") pod "4f6cfa91-27d5-4f55-ba9f-9d61367584ca" (UID: "4f6cfa91-27d5-4f55-ba9f-9d61367584ca"). InnerVolumeSpecName "kube-api-access-s8g97". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:57:12 crc kubenswrapper[5016]: I1011 07:57:12.863125 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-config" (OuterVolumeSpecName: "config") pod "4f6cfa91-27d5-4f55-ba9f-9d61367584ca" (UID: "4f6cfa91-27d5-4f55-ba9f-9d61367584ca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:12 crc kubenswrapper[5016]: I1011 07:57:12.882624 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "4f6cfa91-27d5-4f55-ba9f-9d61367584ca" (UID: "4f6cfa91-27d5-4f55-ba9f-9d61367584ca"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:12 crc kubenswrapper[5016]: I1011 07:57:12.900509 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:12 crc kubenswrapper[5016]: I1011 07:57:12.900732 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8g97\" (UniqueName: \"kubernetes.io/projected/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-kube-api-access-s8g97\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:12 crc kubenswrapper[5016]: I1011 07:57:12.900804 5016 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-httpd-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:12 crc kubenswrapper[5016]: I1011 07:57:12.900858 5016 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:12 crc kubenswrapper[5016]: I1011 07:57:12.905130 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f6cfa91-27d5-4f55-ba9f-9d61367584ca" (UID: "4f6cfa91-27d5-4f55-ba9f-9d61367584ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:13 crc kubenswrapper[5016]: I1011 07:57:13.004689 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f6cfa91-27d5-4f55-ba9f-9d61367584ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:13 crc kubenswrapper[5016]: I1011 07:57:13.217153 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"14ae562e-2b57-478f-89cd-8330105eacdf","Type":"ContainerStarted","Data":"d13a4ca0b7e37ed5df85f2ebe9d990e5c8d1bf2ab77d05728460b37f5aea42b3"} Oct 11 07:57:13 crc kubenswrapper[5016]: I1011 07:57:13.220224 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59dc8c6b68-jd4p4" event={"ID":"4f6cfa91-27d5-4f55-ba9f-9d61367584ca","Type":"ContainerDied","Data":"9adfcc3ec37847bb4e7f2edfd0a4c73e1f0b21541973682b8a6a48af908d9613"} Oct 11 07:57:13 crc kubenswrapper[5016]: I1011 07:57:13.220283 5016 scope.go:117] "RemoveContainer" containerID="0ff4496cc41b82abf0c84e553e0ad482d2a2d59dcbf267abe776975eb46c7085" Oct 11 07:57:13 crc kubenswrapper[5016]: I1011 07:57:13.220425 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-59dc8c6b68-jd4p4" Oct 11 07:57:13 crc kubenswrapper[5016]: I1011 07:57:13.224573 5016 generic.go:334] "Generic (PLEG): container finished" podID="a3f50dd7-7f92-4bc4-a99b-e96dd2929067" containerID="5a7e37384b180d0c8ccf91fc4d10ed42e31a12ccf890c09a9641dbb4efdfe3e4" exitCode=143 Oct 11 07:57:13 crc kubenswrapper[5016]: I1011 07:57:13.224613 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56fd798d48-f9v6h" event={"ID":"a3f50dd7-7f92-4bc4-a99b-e96dd2929067","Type":"ContainerDied","Data":"5a7e37384b180d0c8ccf91fc4d10ed42e31a12ccf890c09a9641dbb4efdfe3e4"} Oct 11 07:57:13 crc kubenswrapper[5016]: I1011 07:57:13.249394 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.249373379 podStartE2EDuration="3.249373379s" podCreationTimestamp="2025-10-11 07:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:57:13.236156883 +0000 UTC m=+1021.136612829" watchObservedRunningTime="2025-10-11 07:57:13.249373379 +0000 UTC m=+1021.149829325" Oct 11 07:57:13 crc kubenswrapper[5016]: I1011 07:57:13.253560 5016 scope.go:117] "RemoveContainer" containerID="19278b670332c5311619044a69cb05747fcd4e340cdfbd1e589100cf0a1e7323" Oct 11 07:57:13 crc kubenswrapper[5016]: I1011 07:57:13.261352 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59dc8c6b68-jd4p4"] Oct 11 07:57:13 crc kubenswrapper[5016]: I1011 07:57:13.272148 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-59dc8c6b68-jd4p4"] Oct 11 07:57:13 crc kubenswrapper[5016]: I1011 07:57:13.559272 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Oct 11 07:57:15 crc kubenswrapper[5016]: I1011 07:57:15.147977 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f6cfa91-27d5-4f55-ba9f-9d61367584ca" path="/var/lib/kubelet/pods/4f6cfa91-27d5-4f55-ba9f-9d61367584ca/volumes" Oct 11 07:57:15 crc kubenswrapper[5016]: I1011 07:57:15.330345 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-67cbf6496d-vrr6z" Oct 11 07:57:15 crc kubenswrapper[5016]: I1011 07:57:15.444003 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-56fd798d48-f9v6h" podUID="a3f50dd7-7f92-4bc4-a99b-e96dd2929067" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.152:9311/healthcheck\": read tcp 10.217.0.2:48028->10.217.0.152:9311: read: connection reset by peer" Oct 11 07:57:15 crc kubenswrapper[5016]: I1011 07:57:15.444022 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-56fd798d48-f9v6h" podUID="a3f50dd7-7f92-4bc4-a99b-e96dd2929067" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.152:9311/healthcheck\": read tcp 10.217.0.2:48026->10.217.0.152:9311: read: connection reset by peer" Oct 11 07:57:15 crc kubenswrapper[5016]: I1011 07:57:15.612408 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 11 07:57:15 crc kubenswrapper[5016]: I1011 07:57:15.830291 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-56fd798d48-f9v6h" Oct 11 07:57:15 crc kubenswrapper[5016]: I1011 07:57:15.967981 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tmwf\" (UniqueName: \"kubernetes.io/projected/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-kube-api-access-5tmwf\") pod \"a3f50dd7-7f92-4bc4-a99b-e96dd2929067\" (UID: \"a3f50dd7-7f92-4bc4-a99b-e96dd2929067\") " Oct 11 07:57:15 crc kubenswrapper[5016]: I1011 07:57:15.968073 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-combined-ca-bundle\") pod \"a3f50dd7-7f92-4bc4-a99b-e96dd2929067\" (UID: \"a3f50dd7-7f92-4bc4-a99b-e96dd2929067\") " Oct 11 07:57:15 crc kubenswrapper[5016]: I1011 07:57:15.968097 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-logs\") pod \"a3f50dd7-7f92-4bc4-a99b-e96dd2929067\" (UID: \"a3f50dd7-7f92-4bc4-a99b-e96dd2929067\") " Oct 11 07:57:15 crc kubenswrapper[5016]: I1011 07:57:15.968195 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-config-data-custom\") pod \"a3f50dd7-7f92-4bc4-a99b-e96dd2929067\" (UID: \"a3f50dd7-7f92-4bc4-a99b-e96dd2929067\") " Oct 11 07:57:15 crc kubenswrapper[5016]: I1011 07:57:15.968216 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-config-data\") pod \"a3f50dd7-7f92-4bc4-a99b-e96dd2929067\" (UID: \"a3f50dd7-7f92-4bc4-a99b-e96dd2929067\") " Oct 11 07:57:15 crc kubenswrapper[5016]: I1011 07:57:15.969126 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-logs" (OuterVolumeSpecName: "logs") pod "a3f50dd7-7f92-4bc4-a99b-e96dd2929067" (UID: "a3f50dd7-7f92-4bc4-a99b-e96dd2929067"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:57:15 crc kubenswrapper[5016]: I1011 07:57:15.973619 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a3f50dd7-7f92-4bc4-a99b-e96dd2929067" (UID: "a3f50dd7-7f92-4bc4-a99b-e96dd2929067"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:15 crc kubenswrapper[5016]: I1011 07:57:15.975122 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-kube-api-access-5tmwf" (OuterVolumeSpecName: "kube-api-access-5tmwf") pod "a3f50dd7-7f92-4bc4-a99b-e96dd2929067" (UID: "a3f50dd7-7f92-4bc4-a99b-e96dd2929067"). InnerVolumeSpecName "kube-api-access-5tmwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:57:15 crc kubenswrapper[5016]: I1011 07:57:15.995135 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3f50dd7-7f92-4bc4-a99b-e96dd2929067" (UID: "a3f50dd7-7f92-4bc4-a99b-e96dd2929067"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.013460 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-config-data" (OuterVolumeSpecName: "config-data") pod "a3f50dd7-7f92-4bc4-a99b-e96dd2929067" (UID: "a3f50dd7-7f92-4bc4-a99b-e96dd2929067"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.072968 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tmwf\" (UniqueName: \"kubernetes.io/projected/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-kube-api-access-5tmwf\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.072998 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.073007 5016 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-logs\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.073017 5016 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.073026 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3f50dd7-7f92-4bc4-a99b-e96dd2929067-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.251872 5016 generic.go:334] "Generic (PLEG): container finished" podID="a3f50dd7-7f92-4bc4-a99b-e96dd2929067" containerID="abe64c199517db214a38b759d92b202c240e3982f0fc769f207b3a0846e81c07" exitCode=0 Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.251913 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56fd798d48-f9v6h" event={"ID":"a3f50dd7-7f92-4bc4-a99b-e96dd2929067","Type":"ContainerDied","Data":"abe64c199517db214a38b759d92b202c240e3982f0fc769f207b3a0846e81c07"} Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.251961 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-56fd798d48-f9v6h" Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.251991 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56fd798d48-f9v6h" event={"ID":"a3f50dd7-7f92-4bc4-a99b-e96dd2929067","Type":"ContainerDied","Data":"37937a9384cbb726e29043422ba67f0c3cb416e7a9d60ce0d062e1db4f68849c"} Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.252013 5016 scope.go:117] "RemoveContainer" containerID="abe64c199517db214a38b759d92b202c240e3982f0fc769f207b3a0846e81c07" Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.278012 5016 scope.go:117] "RemoveContainer" containerID="5a7e37384b180d0c8ccf91fc4d10ed42e31a12ccf890c09a9641dbb4efdfe3e4" Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.282881 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-56fd798d48-f9v6h"] Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.290038 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-56fd798d48-f9v6h"] Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.292645 5016 scope.go:117] "RemoveContainer" containerID="abe64c199517db214a38b759d92b202c240e3982f0fc769f207b3a0846e81c07" Oct 11 07:57:16 crc kubenswrapper[5016]: E1011 07:57:16.298193 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abe64c199517db214a38b759d92b202c240e3982f0fc769f207b3a0846e81c07\": container with ID starting with abe64c199517db214a38b759d92b202c240e3982f0fc769f207b3a0846e81c07 not found: ID does not exist" containerID="abe64c199517db214a38b759d92b202c240e3982f0fc769f207b3a0846e81c07" Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.298585 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abe64c199517db214a38b759d92b202c240e3982f0fc769f207b3a0846e81c07"} err="failed to get container status \"abe64c199517db214a38b759d92b202c240e3982f0fc769f207b3a0846e81c07\": rpc error: code = NotFound desc = could not find container \"abe64c199517db214a38b759d92b202c240e3982f0fc769f207b3a0846e81c07\": container with ID starting with abe64c199517db214a38b759d92b202c240e3982f0fc769f207b3a0846e81c07 not found: ID does not exist" Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.298626 5016 scope.go:117] "RemoveContainer" containerID="5a7e37384b180d0c8ccf91fc4d10ed42e31a12ccf890c09a9641dbb4efdfe3e4" Oct 11 07:57:16 crc kubenswrapper[5016]: E1011 07:57:16.299408 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a7e37384b180d0c8ccf91fc4d10ed42e31a12ccf890c09a9641dbb4efdfe3e4\": container with ID starting with 5a7e37384b180d0c8ccf91fc4d10ed42e31a12ccf890c09a9641dbb4efdfe3e4 not found: ID does not exist" containerID="5a7e37384b180d0c8ccf91fc4d10ed42e31a12ccf890c09a9641dbb4efdfe3e4" Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.299459 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a7e37384b180d0c8ccf91fc4d10ed42e31a12ccf890c09a9641dbb4efdfe3e4"} err="failed to get container status \"5a7e37384b180d0c8ccf91fc4d10ed42e31a12ccf890c09a9641dbb4efdfe3e4\": rpc error: code = NotFound desc = could not find container \"5a7e37384b180d0c8ccf91fc4d10ed42e31a12ccf890c09a9641dbb4efdfe3e4\": container with ID starting with 5a7e37384b180d0c8ccf91fc4d10ed42e31a12ccf890c09a9641dbb4efdfe3e4 not found: ID does not exist" Oct 11 
Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.889482 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 11 07:57:16 crc kubenswrapper[5016]: E1011 07:57:16.890104 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f6cfa91-27d5-4f55-ba9f-9d61367584ca" containerName="neutron-api" Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.890131 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f6cfa91-27d5-4f55-ba9f-9d61367584ca" containerName="neutron-api" Oct 11 07:57:16 crc kubenswrapper[5016]: E1011 07:57:16.890153 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3f50dd7-7f92-4bc4-a99b-e96dd2929067" containerName="barbican-api" Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.890167 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3f50dd7-7f92-4bc4-a99b-e96dd2929067" containerName="barbican-api" Oct 11 07:57:16 crc kubenswrapper[5016]: E1011 07:57:16.890205 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f6cfa91-27d5-4f55-ba9f-9d61367584ca" containerName="neutron-httpd" Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.890219 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f6cfa91-27d5-4f55-ba9f-9d61367584ca" containerName="neutron-httpd" Oct 11 07:57:16 crc kubenswrapper[5016]: E1011 07:57:16.890240 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3f50dd7-7f92-4bc4-a99b-e96dd2929067" containerName="barbican-api-log" Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.890253 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3f50dd7-7f92-4bc4-a99b-e96dd2929067" containerName="barbican-api-log" Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.890583 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f6cfa91-27d5-4f55-ba9f-9d61367584ca" containerName="neutron-api" Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.890617 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f6cfa91-27d5-4f55-ba9f-9d61367584ca" containerName="neutron-httpd" Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.890644 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3f50dd7-7f92-4bc4-a99b-e96dd2929067" containerName="barbican-api-log" Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.890744 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3f50dd7-7f92-4bc4-a99b-e96dd2929067" containerName="barbican-api" Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.891690 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
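The RemoveStaleState entries above fire because admitting the new openstackclient pod triggers the CPU and memory managers to sweep their checkpointed state and drop allocations still recorded for containers of the neutron and barbican pods deleted moments earlier. A toy version of that sweep, under invented types (the real state lives in the state_mem.go checkpoint, not a plain map):

```go
package main

import "fmt"

// containerKey identifies one container's resource assignment, keyed
// the same way the log reports it: podUID plus containerName.
type containerKey struct{ podUID, container string }

// assignments stands in for the per-container CPUSet checkpoint.
type assignments map[containerKey]string

// removeStaleState drops assignments for pods the kubelet no longer
// tracks, mirroring the cpu_manager.go:410 / state_mem.go:107 pairs above.
func removeStaleState(activePods map[string]bool, st assignments) {
	for k := range st {
		if !activePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.container)
			delete(st, k) // deleting during range is safe for Go maps
		}
	}
}

func main() {
	st := assignments{
		{"4f6cfa91-27d5-4f55-ba9f-9d61367584ca", "neutron-api"}:      "0-3",
		{"4f6cfa91-27d5-4f55-ba9f-9d61367584ca", "neutron-httpd"}:    "0-3",
		{"a3f50dd7-7f92-4bc4-a99b-e96dd2929067", "barbican-api"}:     "0-3",
		{"a3f50dd7-7f92-4bc4-a99b-e96dd2929067", "barbican-api-log"}: "0-3",
	}
	// Neither pod exists anymore (both were deleted earlier in this log),
	// so all four entries are purged when openstackclient is admitted.
	removeStaleState(map[string]bool{}, st)
	fmt.Println("remaining assignments:", len(st))
}
```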
Need to start a new one" pod="openstack/openstackclient" Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.894895 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.894926 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.895004 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-t4tq8" Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.896115 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.988182 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b121fb9f-06b7-4493-ac85-1fe9b48a75ba-openstack-config-secret\") pod \"openstackclient\" (UID: \"b121fb9f-06b7-4493-ac85-1fe9b48a75ba\") " pod="openstack/openstackclient" Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.988246 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrfwr\" (UniqueName: \"kubernetes.io/projected/b121fb9f-06b7-4493-ac85-1fe9b48a75ba-kube-api-access-qrfwr\") pod \"openstackclient\" (UID: \"b121fb9f-06b7-4493-ac85-1fe9b48a75ba\") " pod="openstack/openstackclient" Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.988265 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b121fb9f-06b7-4493-ac85-1fe9b48a75ba-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b121fb9f-06b7-4493-ac85-1fe9b48a75ba\") " pod="openstack/openstackclient" Oct 11 07:57:16 crc kubenswrapper[5016]: I1011 07:57:16.988487 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b121fb9f-06b7-4493-ac85-1fe9b48a75ba-openstack-config\") pod \"openstackclient\" (UID: \"b121fb9f-06b7-4493-ac85-1fe9b48a75ba\") " pod="openstack/openstackclient" Oct 11 07:57:17 crc kubenswrapper[5016]: I1011 07:57:17.089839 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b121fb9f-06b7-4493-ac85-1fe9b48a75ba-openstack-config-secret\") pod \"openstackclient\" (UID: \"b121fb9f-06b7-4493-ac85-1fe9b48a75ba\") " pod="openstack/openstackclient" Oct 11 07:57:17 crc kubenswrapper[5016]: I1011 07:57:17.089903 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrfwr\" (UniqueName: \"kubernetes.io/projected/b121fb9f-06b7-4493-ac85-1fe9b48a75ba-kube-api-access-qrfwr\") pod \"openstackclient\" (UID: \"b121fb9f-06b7-4493-ac85-1fe9b48a75ba\") " pod="openstack/openstackclient" Oct 11 07:57:17 crc kubenswrapper[5016]: I1011 07:57:17.089924 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b121fb9f-06b7-4493-ac85-1fe9b48a75ba-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b121fb9f-06b7-4493-ac85-1fe9b48a75ba\") " pod="openstack/openstackclient" Oct 11 07:57:17 crc kubenswrapper[5016]: I1011 07:57:17.089990 5016 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b121fb9f-06b7-4493-ac85-1fe9b48a75ba-openstack-config\") pod \"openstackclient\" (UID: \"b121fb9f-06b7-4493-ac85-1fe9b48a75ba\") " pod="openstack/openstackclient" Oct 11 07:57:17 crc kubenswrapper[5016]: I1011 07:57:17.090797 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b121fb9f-06b7-4493-ac85-1fe9b48a75ba-openstack-config\") pod \"openstackclient\" (UID: \"b121fb9f-06b7-4493-ac85-1fe9b48a75ba\") " pod="openstack/openstackclient" Oct 11 07:57:17 crc kubenswrapper[5016]: I1011 07:57:17.094832 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b121fb9f-06b7-4493-ac85-1fe9b48a75ba-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b121fb9f-06b7-4493-ac85-1fe9b48a75ba\") " pod="openstack/openstackclient" Oct 11 07:57:17 crc kubenswrapper[5016]: I1011 07:57:17.095202 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b121fb9f-06b7-4493-ac85-1fe9b48a75ba-openstack-config-secret\") pod \"openstackclient\" (UID: \"b121fb9f-06b7-4493-ac85-1fe9b48a75ba\") " pod="openstack/openstackclient" Oct 11 07:57:17 crc kubenswrapper[5016]: I1011 07:57:17.109296 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrfwr\" (UniqueName: \"kubernetes.io/projected/b121fb9f-06b7-4493-ac85-1fe9b48a75ba-kube-api-access-qrfwr\") pod \"openstackclient\" (UID: \"b121fb9f-06b7-4493-ac85-1fe9b48a75ba\") " pod="openstack/openstackclient" Oct 11 07:57:17 crc kubenswrapper[5016]: I1011 07:57:17.180363 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3f50dd7-7f92-4bc4-a99b-e96dd2929067" path="/var/lib/kubelet/pods/a3f50dd7-7f92-4bc4-a99b-e96dd2929067/volumes" Oct 11 07:57:17 crc kubenswrapper[5016]: I1011 07:57:17.182279 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Oct 11 07:57:17 crc kubenswrapper[5016]: I1011 07:57:17.191576 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 11 07:57:17 crc kubenswrapper[5016]: I1011 07:57:17.207840 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Oct 11 07:57:17 crc kubenswrapper[5016]: I1011 07:57:17.217126 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 11 07:57:17 crc kubenswrapper[5016]: I1011 07:57:17.218400 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 11 07:57:17 crc kubenswrapper[5016]: I1011 07:57:17.227418 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 11 07:57:17 crc kubenswrapper[5016]: E1011 07:57:17.311111 5016 log.go:32] "RunPodSandbox from runtime service failed" err=< Oct 11 07:57:17 crc kubenswrapper[5016]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_b121fb9f-06b7-4493-ac85-1fe9b48a75ba_0(4aab4c216847ae073292d4a8ac44013680bf265ccac66e7c5986a52ddaffe8ff): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4aab4c216847ae073292d4a8ac44013680bf265ccac66e7c5986a52ddaffe8ff" Netns:"/var/run/netns/45d8a84b-23cf-427c-82ef-b0793bffa3c7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=4aab4c216847ae073292d4a8ac44013680bf265ccac66e7c5986a52ddaffe8ff;K8S_POD_UID=b121fb9f-06b7-4493-ac85-1fe9b48a75ba" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/b121fb9f-06b7-4493-ac85-1fe9b48a75ba]: expected pod UID "b121fb9f-06b7-4493-ac85-1fe9b48a75ba" but got "51360c57-7d92-4171-a855-69ba399ac0b7" from Kube API Oct 11 07:57:17 crc kubenswrapper[5016]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Oct 11 07:57:17 crc kubenswrapper[5016]: > Oct 11 07:57:17 crc kubenswrapper[5016]: E1011 07:57:17.311470 5016 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Oct 11 07:57:17 crc kubenswrapper[5016]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_b121fb9f-06b7-4493-ac85-1fe9b48a75ba_0(4aab4c216847ae073292d4a8ac44013680bf265ccac66e7c5986a52ddaffe8ff): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4aab4c216847ae073292d4a8ac44013680bf265ccac66e7c5986a52ddaffe8ff" Netns:"/var/run/netns/45d8a84b-23cf-427c-82ef-b0793bffa3c7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=4aab4c216847ae073292d4a8ac44013680bf265ccac66e7c5986a52ddaffe8ff;K8S_POD_UID=b121fb9f-06b7-4493-ac85-1fe9b48a75ba" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/b121fb9f-06b7-4493-ac85-1fe9b48a75ba]: expected pod UID "b121fb9f-06b7-4493-ac85-1fe9b48a75ba" but got "51360c57-7d92-4171-a855-69ba399ac0b7" from Kube API Oct 11 07:57:17 crc kubenswrapper[5016]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Oct 11 07:57:17 
crc kubenswrapper[5016]: > pod="openstack/openstackclient" Oct 11 07:57:17 crc kubenswrapper[5016]: I1011 07:57:17.399816 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/51360c57-7d92-4171-a855-69ba399ac0b7-openstack-config-secret\") pod \"openstackclient\" (UID: \"51360c57-7d92-4171-a855-69ba399ac0b7\") " pod="openstack/openstackclient" Oct 11 07:57:17 crc kubenswrapper[5016]: I1011 07:57:17.399865 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgr7j\" (UniqueName: \"kubernetes.io/projected/51360c57-7d92-4171-a855-69ba399ac0b7-kube-api-access-vgr7j\") pod \"openstackclient\" (UID: \"51360c57-7d92-4171-a855-69ba399ac0b7\") " pod="openstack/openstackclient" Oct 11 07:57:17 crc kubenswrapper[5016]: I1011 07:57:17.400177 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51360c57-7d92-4171-a855-69ba399ac0b7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"51360c57-7d92-4171-a855-69ba399ac0b7\") " pod="openstack/openstackclient" Oct 11 07:57:17 crc kubenswrapper[5016]: I1011 07:57:17.400338 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/51360c57-7d92-4171-a855-69ba399ac0b7-openstack-config\") pod \"openstackclient\" (UID: \"51360c57-7d92-4171-a855-69ba399ac0b7\") " pod="openstack/openstackclient" Oct 11 07:57:17 crc kubenswrapper[5016]: I1011 07:57:17.502124 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/51360c57-7d92-4171-a855-69ba399ac0b7-openstack-config\") pod \"openstackclient\" (UID: \"51360c57-7d92-4171-a855-69ba399ac0b7\") " pod="openstack/openstackclient" Oct 11 07:57:17 crc kubenswrapper[5016]: I1011 07:57:17.502227 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/51360c57-7d92-4171-a855-69ba399ac0b7-openstack-config-secret\") pod \"openstackclient\" (UID: \"51360c57-7d92-4171-a855-69ba399ac0b7\") " pod="openstack/openstackclient" Oct 11 07:57:17 crc kubenswrapper[5016]: I1011 07:57:17.502252 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgr7j\" (UniqueName: \"kubernetes.io/projected/51360c57-7d92-4171-a855-69ba399ac0b7-kube-api-access-vgr7j\") pod \"openstackclient\" (UID: \"51360c57-7d92-4171-a855-69ba399ac0b7\") " pod="openstack/openstackclient" Oct 11 07:57:17 crc kubenswrapper[5016]: I1011 07:57:17.502334 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51360c57-7d92-4171-a855-69ba399ac0b7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"51360c57-7d92-4171-a855-69ba399ac0b7\") " pod="openstack/openstackclient" Oct 11 07:57:17 crc kubenswrapper[5016]: I1011 07:57:17.503206 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/51360c57-7d92-4171-a855-69ba399ac0b7-openstack-config\") pod \"openstackclient\" (UID: \"51360c57-7d92-4171-a855-69ba399ac0b7\") " pod="openstack/openstackclient" Oct 11 07:57:17 crc kubenswrapper[5016]: I1011 07:57:17.508383 5016 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/51360c57-7d92-4171-a855-69ba399ac0b7-openstack-config-secret\") pod \"openstackclient\" (UID: \"51360c57-7d92-4171-a855-69ba399ac0b7\") " pod="openstack/openstackclient" Oct 11 07:57:17 crc kubenswrapper[5016]: I1011 07:57:17.513309 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51360c57-7d92-4171-a855-69ba399ac0b7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"51360c57-7d92-4171-a855-69ba399ac0b7\") " pod="openstack/openstackclient" Oct 11 07:57:17 crc kubenswrapper[5016]: I1011 07:57:17.527986 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgr7j\" (UniqueName: \"kubernetes.io/projected/51360c57-7d92-4171-a855-69ba399ac0b7-kube-api-access-vgr7j\") pod \"openstackclient\" (UID: \"51360c57-7d92-4171-a855-69ba399ac0b7\") " pod="openstack/openstackclient" Oct 11 07:57:17 crc kubenswrapper[5016]: I1011 07:57:17.601938 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 11 07:57:18 crc kubenswrapper[5016]: I1011 07:57:18.036544 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 11 07:57:18 crc kubenswrapper[5016]: I1011 07:57:18.268483 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 11 07:57:18 crc kubenswrapper[5016]: I1011 07:57:18.268473 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"51360c57-7d92-4171-a855-69ba399ac0b7","Type":"ContainerStarted","Data":"c34d742c0060e9ed7a747315d8374ecae8deb9285a919b8b3d6c55fa23578f6f"} Oct 11 07:57:18 crc kubenswrapper[5016]: I1011 07:57:18.271690 5016 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b121fb9f-06b7-4493-ac85-1fe9b48a75ba" podUID="51360c57-7d92-4171-a855-69ba399ac0b7" Oct 11 07:57:18 crc kubenswrapper[5016]: I1011 07:57:18.279289 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 11 07:57:18 crc kubenswrapper[5016]: I1011 07:57:18.421977 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b121fb9f-06b7-4493-ac85-1fe9b48a75ba-combined-ca-bundle\") pod \"b121fb9f-06b7-4493-ac85-1fe9b48a75ba\" (UID: \"b121fb9f-06b7-4493-ac85-1fe9b48a75ba\") " Oct 11 07:57:18 crc kubenswrapper[5016]: I1011 07:57:18.422041 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b121fb9f-06b7-4493-ac85-1fe9b48a75ba-openstack-config-secret\") pod \"b121fb9f-06b7-4493-ac85-1fe9b48a75ba\" (UID: \"b121fb9f-06b7-4493-ac85-1fe9b48a75ba\") " Oct 11 07:57:18 crc kubenswrapper[5016]: I1011 07:57:18.422102 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrfwr\" (UniqueName: \"kubernetes.io/projected/b121fb9f-06b7-4493-ac85-1fe9b48a75ba-kube-api-access-qrfwr\") pod \"b121fb9f-06b7-4493-ac85-1fe9b48a75ba\" (UID: \"b121fb9f-06b7-4493-ac85-1fe9b48a75ba\") " Oct 11 07:57:18 crc kubenswrapper[5016]: I1011 07:57:18.422154 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b121fb9f-06b7-4493-ac85-1fe9b48a75ba-openstack-config\") pod \"b121fb9f-06b7-4493-ac85-1fe9b48a75ba\" (UID: \"b121fb9f-06b7-4493-ac85-1fe9b48a75ba\") " Oct 11 07:57:18 crc kubenswrapper[5016]: I1011 07:57:18.422799 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b121fb9f-06b7-4493-ac85-1fe9b48a75ba-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "b121fb9f-06b7-4493-ac85-1fe9b48a75ba" (UID: "b121fb9f-06b7-4493-ac85-1fe9b48a75ba"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:57:18 crc kubenswrapper[5016]: I1011 07:57:18.431825 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b121fb9f-06b7-4493-ac85-1fe9b48a75ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b121fb9f-06b7-4493-ac85-1fe9b48a75ba" (UID: "b121fb9f-06b7-4493-ac85-1fe9b48a75ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:18 crc kubenswrapper[5016]: I1011 07:57:18.431848 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b121fb9f-06b7-4493-ac85-1fe9b48a75ba-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "b121fb9f-06b7-4493-ac85-1fe9b48a75ba" (UID: "b121fb9f-06b7-4493-ac85-1fe9b48a75ba"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:18 crc kubenswrapper[5016]: I1011 07:57:18.433777 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b121fb9f-06b7-4493-ac85-1fe9b48a75ba-kube-api-access-qrfwr" (OuterVolumeSpecName: "kube-api-access-qrfwr") pod "b121fb9f-06b7-4493-ac85-1fe9b48a75ba" (UID: "b121fb9f-06b7-4493-ac85-1fe9b48a75ba"). InnerVolumeSpecName "kube-api-access-qrfwr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:57:18 crc kubenswrapper[5016]: I1011 07:57:18.524294 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b121fb9f-06b7-4493-ac85-1fe9b48a75ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:18 crc kubenswrapper[5016]: I1011 07:57:18.524325 5016 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b121fb9f-06b7-4493-ac85-1fe9b48a75ba-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:18 crc kubenswrapper[5016]: I1011 07:57:18.524337 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrfwr\" (UniqueName: \"kubernetes.io/projected/b121fb9f-06b7-4493-ac85-1fe9b48a75ba-kube-api-access-qrfwr\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:18 crc kubenswrapper[5016]: I1011 07:57:18.524345 5016 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b121fb9f-06b7-4493-ac85-1fe9b48a75ba-openstack-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:19 crc kubenswrapper[5016]: I1011 07:57:19.067002 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6695bc58f4-lkxqb" Oct 11 07:57:19 crc kubenswrapper[5016]: I1011 07:57:19.071722 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6695bc58f4-lkxqb" Oct 11 07:57:19 crc kubenswrapper[5016]: I1011 07:57:19.150057 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b121fb9f-06b7-4493-ac85-1fe9b48a75ba" path="/var/lib/kubelet/pods/b121fb9f-06b7-4493-ac85-1fe9b48a75ba/volumes" Oct 11 07:57:19 crc kubenswrapper[5016]: I1011 07:57:19.279161 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 11 07:57:19 crc kubenswrapper[5016]: I1011 07:57:19.286425 5016 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="b121fb9f-06b7-4493-ac85-1fe9b48a75ba" podUID="51360c57-7d92-4171-a855-69ba399ac0b7" Oct 11 07:57:20 crc kubenswrapper[5016]: I1011 07:57:20.897215 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 11 07:57:21 crc kubenswrapper[5016]: I1011 07:57:21.666053 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-df49866-g5nkl" podUID="771aebc7-25b0-45ef-bbd4-ed6c367b998b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.141:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.141:8443: connect: connection refused" Oct 11 07:57:23 crc kubenswrapper[5016]: I1011 07:57:23.810225 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-5d7xn"] Oct 11 07:57:23 crc kubenswrapper[5016]: I1011 07:57:23.814243 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-5d7xn" Oct 11 07:57:23 crc kubenswrapper[5016]: I1011 07:57:23.834045 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-5d7xn"] Oct 11 07:57:23 crc kubenswrapper[5016]: I1011 07:57:23.907251 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-88p4q"] Oct 11 07:57:23 crc kubenswrapper[5016]: I1011 07:57:23.909158 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-88p4q" Oct 11 07:57:23 crc kubenswrapper[5016]: I1011 07:57:23.922338 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-88p4q"] Oct 11 07:57:23 crc kubenswrapper[5016]: I1011 07:57:23.940764 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwssw\" (UniqueName: \"kubernetes.io/projected/fc4a31f9-083f-49bf-866a-b6b970910e4d-kube-api-access-zwssw\") pod \"nova-api-db-create-5d7xn\" (UID: \"fc4a31f9-083f-49bf-866a-b6b970910e4d\") " pod="openstack/nova-api-db-create-5d7xn" Oct 11 07:57:24 crc kubenswrapper[5016]: I1011 07:57:24.001040 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-mcv5t"] Oct 11 07:57:24 crc kubenswrapper[5016]: I1011 07:57:24.002939 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mcv5t" Oct 11 07:57:24 crc kubenswrapper[5016]: I1011 07:57:24.007735 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mcv5t"] Oct 11 07:57:24 crc kubenswrapper[5016]: I1011 07:57:24.041889 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwssw\" (UniqueName: \"kubernetes.io/projected/fc4a31f9-083f-49bf-866a-b6b970910e4d-kube-api-access-zwssw\") pod \"nova-api-db-create-5d7xn\" (UID: \"fc4a31f9-083f-49bf-866a-b6b970910e4d\") " pod="openstack/nova-api-db-create-5d7xn" Oct 11 07:57:24 crc kubenswrapper[5016]: I1011 07:57:24.041982 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfc6c\" (UniqueName: \"kubernetes.io/projected/c70d5c8e-2545-45f9-ba1a-f4f1755f3729-kube-api-access-pfc6c\") pod \"nova-cell0-db-create-88p4q\" (UID: \"c70d5c8e-2545-45f9-ba1a-f4f1755f3729\") " pod="openstack/nova-cell0-db-create-88p4q" Oct 11 07:57:24 crc kubenswrapper[5016]: I1011 07:57:24.063261 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwssw\" (UniqueName: \"kubernetes.io/projected/fc4a31f9-083f-49bf-866a-b6b970910e4d-kube-api-access-zwssw\") pod \"nova-api-db-create-5d7xn\" (UID: \"fc4a31f9-083f-49bf-866a-b6b970910e4d\") " pod="openstack/nova-api-db-create-5d7xn" Oct 11 07:57:24 crc kubenswrapper[5016]: I1011 07:57:24.143182 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4zd5\" (UniqueName: \"kubernetes.io/projected/4fcfe522-4dc0-41a6-b29d-75f00142585e-kube-api-access-n4zd5\") pod \"nova-cell1-db-create-mcv5t\" (UID: \"4fcfe522-4dc0-41a6-b29d-75f00142585e\") " pod="openstack/nova-cell1-db-create-mcv5t" Oct 11 07:57:24 crc kubenswrapper[5016]: I1011 07:57:24.143272 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfc6c\" (UniqueName: \"kubernetes.io/projected/c70d5c8e-2545-45f9-ba1a-f4f1755f3729-kube-api-access-pfc6c\") pod 
\"nova-cell0-db-create-88p4q\" (UID: \"c70d5c8e-2545-45f9-ba1a-f4f1755f3729\") " pod="openstack/nova-cell0-db-create-88p4q" Oct 11 07:57:24 crc kubenswrapper[5016]: I1011 07:57:24.165365 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfc6c\" (UniqueName: \"kubernetes.io/projected/c70d5c8e-2545-45f9-ba1a-f4f1755f3729-kube-api-access-pfc6c\") pod \"nova-cell0-db-create-88p4q\" (UID: \"c70d5c8e-2545-45f9-ba1a-f4f1755f3729\") " pod="openstack/nova-cell0-db-create-88p4q" Oct 11 07:57:24 crc kubenswrapper[5016]: I1011 07:57:24.172936 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-5d7xn" Oct 11 07:57:24 crc kubenswrapper[5016]: I1011 07:57:24.232920 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-88p4q" Oct 11 07:57:24 crc kubenswrapper[5016]: I1011 07:57:24.245283 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4zd5\" (UniqueName: \"kubernetes.io/projected/4fcfe522-4dc0-41a6-b29d-75f00142585e-kube-api-access-n4zd5\") pod \"nova-cell1-db-create-mcv5t\" (UID: \"4fcfe522-4dc0-41a6-b29d-75f00142585e\") " pod="openstack/nova-cell1-db-create-mcv5t" Oct 11 07:57:24 crc kubenswrapper[5016]: I1011 07:57:24.261351 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4zd5\" (UniqueName: \"kubernetes.io/projected/4fcfe522-4dc0-41a6-b29d-75f00142585e-kube-api-access-n4zd5\") pod \"nova-cell1-db-create-mcv5t\" (UID: \"4fcfe522-4dc0-41a6-b29d-75f00142585e\") " pod="openstack/nova-cell1-db-create-mcv5t" Oct 11 07:57:24 crc kubenswrapper[5016]: I1011 07:57:24.325392 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-mcv5t" Oct 11 07:57:28 crc kubenswrapper[5016]: I1011 07:57:28.433685 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 11 07:57:28 crc kubenswrapper[5016]: I1011 07:57:28.634565 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mcv5t"] Oct 11 07:57:28 crc kubenswrapper[5016]: I1011 07:57:28.778114 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-5d7xn"] Oct 11 07:57:28 crc kubenswrapper[5016]: I1011 07:57:28.856262 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-88p4q"] Oct 11 07:57:29 crc kubenswrapper[5016]: I1011 07:57:29.378482 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"51360c57-7d92-4171-a855-69ba399ac0b7","Type":"ContainerStarted","Data":"0ef711ef16b81e0bb890fb269519c89f86daacd854a68a5e2bf84d61e0709cb0"} Oct 11 07:57:29 crc kubenswrapper[5016]: I1011 07:57:29.381637 5016 generic.go:334] "Generic (PLEG): container finished" podID="fc4a31f9-083f-49bf-866a-b6b970910e4d" containerID="87be5ce31d3e28f5815f05515ba4b51ae80935358f78bc6fb1e44c922f4c4073" exitCode=0 Oct 11 07:57:29 crc kubenswrapper[5016]: I1011 07:57:29.381698 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-5d7xn" event={"ID":"fc4a31f9-083f-49bf-866a-b6b970910e4d","Type":"ContainerDied","Data":"87be5ce31d3e28f5815f05515ba4b51ae80935358f78bc6fb1e44c922f4c4073"} Oct 11 07:57:29 crc kubenswrapper[5016]: I1011 07:57:29.381757 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-5d7xn" event={"ID":"fc4a31f9-083f-49bf-866a-b6b970910e4d","Type":"ContainerStarted","Data":"d3acc0ad563a52adbc24d0b7491d4fd70f7015ce1b51971a8924e4d079cec018"} Oct 11 07:57:29 crc kubenswrapper[5016]: I1011 07:57:29.383304 5016 generic.go:334] "Generic (PLEG): container finished" podID="c70d5c8e-2545-45f9-ba1a-f4f1755f3729" containerID="d64ba490de862b1225ef273b9eb6996c4d19b9b0ff4e9fa0247d7f8bf6064bef" exitCode=0 Oct 11 07:57:29 crc kubenswrapper[5016]: I1011 07:57:29.383361 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-88p4q" event={"ID":"c70d5c8e-2545-45f9-ba1a-f4f1755f3729","Type":"ContainerDied","Data":"d64ba490de862b1225ef273b9eb6996c4d19b9b0ff4e9fa0247d7f8bf6064bef"} Oct 11 07:57:29 crc kubenswrapper[5016]: I1011 07:57:29.383386 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-88p4q" event={"ID":"c70d5c8e-2545-45f9-ba1a-f4f1755f3729","Type":"ContainerStarted","Data":"ce0c395ae41b525744388af759444b993bd85d3f1cb9062c13c419f18e0175b5"} Oct 11 07:57:29 crc kubenswrapper[5016]: I1011 07:57:29.384882 5016 generic.go:334] "Generic (PLEG): container finished" podID="4fcfe522-4dc0-41a6-b29d-75f00142585e" containerID="150ea039023a37f879adb30746da7e608610712e45a95a0f41dc819e296a338b" exitCode=0 Oct 11 07:57:29 crc kubenswrapper[5016]: I1011 07:57:29.384927 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mcv5t" event={"ID":"4fcfe522-4dc0-41a6-b29d-75f00142585e","Type":"ContainerDied","Data":"150ea039023a37f879adb30746da7e608610712e45a95a0f41dc819e296a338b"} Oct 11 07:57:29 crc kubenswrapper[5016]: I1011 07:57:29.384952 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mcv5t" 
event={"ID":"4fcfe522-4dc0-41a6-b29d-75f00142585e","Type":"ContainerStarted","Data":"f86972111bded07a09435e3947af9f8eac93faae444b5ba6f490e93e9f7c04ea"} Oct 11 07:57:29 crc kubenswrapper[5016]: I1011 07:57:29.396311 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.238140706 podStartE2EDuration="12.396298598s" podCreationTimestamp="2025-10-11 07:57:17 +0000 UTC" firstStartedPulling="2025-10-11 07:57:18.035117411 +0000 UTC m=+1025.935573357" lastFinishedPulling="2025-10-11 07:57:28.193275293 +0000 UTC m=+1036.093731249" observedRunningTime="2025-10-11 07:57:29.392624249 +0000 UTC m=+1037.293080205" watchObservedRunningTime="2025-10-11 07:57:29.396298598 +0000 UTC m=+1037.296754534" Oct 11 07:57:30 crc kubenswrapper[5016]: I1011 07:57:30.277936 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 11 07:57:30 crc kubenswrapper[5016]: I1011 07:57:30.278558 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0bc0b78e-f920-4af6-901f-ef0d92d9b046" containerName="ceilometer-central-agent" containerID="cri-o://34c46232123836b5dbe77b0f02f0c79445c6fc10cca762fd120cd8718bd18cd0" gracePeriod=30 Oct 11 07:57:30 crc kubenswrapper[5016]: I1011 07:57:30.278676 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0bc0b78e-f920-4af6-901f-ef0d92d9b046" containerName="sg-core" containerID="cri-o://3df9b73419a05c014c37a7fbdf074076fd8a87ff3695a3c2382769cf7b713e05" gracePeriod=30 Oct 11 07:57:30 crc kubenswrapper[5016]: I1011 07:57:30.278686 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0bc0b78e-f920-4af6-901f-ef0d92d9b046" containerName="ceilometer-notification-agent" containerID="cri-o://20f54aed6e9195da8cb6b9968a4eeb0add55f89bbbf7626986a8b8a31b17f2c7" gracePeriod=30 Oct 11 07:57:30 crc kubenswrapper[5016]: I1011 07:57:30.279174 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0bc0b78e-f920-4af6-901f-ef0d92d9b046" containerName="proxy-httpd" containerID="cri-o://5851a90d93e6ab43b92d30a327cf39693067d04f573744cd4c4df7df7e24b86e" gracePeriod=30 Oct 11 07:57:30 crc kubenswrapper[5016]: I1011 07:57:30.854063 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mcv5t" Oct 11 07:57:30 crc kubenswrapper[5016]: I1011 07:57:30.868995 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-5d7xn" Oct 11 07:57:30 crc kubenswrapper[5016]: I1011 07:57:30.876881 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-88p4q" Oct 11 07:57:30 crc kubenswrapper[5016]: I1011 07:57:30.974786 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4zd5\" (UniqueName: \"kubernetes.io/projected/4fcfe522-4dc0-41a6-b29d-75f00142585e-kube-api-access-n4zd5\") pod \"4fcfe522-4dc0-41a6-b29d-75f00142585e\" (UID: \"4fcfe522-4dc0-41a6-b29d-75f00142585e\") " Oct 11 07:57:30 crc kubenswrapper[5016]: I1011 07:57:30.974992 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwssw\" (UniqueName: \"kubernetes.io/projected/fc4a31f9-083f-49bf-866a-b6b970910e4d-kube-api-access-zwssw\") pod \"fc4a31f9-083f-49bf-866a-b6b970910e4d\" (UID: \"fc4a31f9-083f-49bf-866a-b6b970910e4d\") " Oct 11 07:57:30 crc kubenswrapper[5016]: I1011 07:57:30.975043 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfc6c\" (UniqueName: \"kubernetes.io/projected/c70d5c8e-2545-45f9-ba1a-f4f1755f3729-kube-api-access-pfc6c\") pod \"c70d5c8e-2545-45f9-ba1a-f4f1755f3729\" (UID: \"c70d5c8e-2545-45f9-ba1a-f4f1755f3729\") " Oct 11 07:57:30 crc kubenswrapper[5016]: I1011 07:57:30.981946 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fcfe522-4dc0-41a6-b29d-75f00142585e-kube-api-access-n4zd5" (OuterVolumeSpecName: "kube-api-access-n4zd5") pod "4fcfe522-4dc0-41a6-b29d-75f00142585e" (UID: "4fcfe522-4dc0-41a6-b29d-75f00142585e"). InnerVolumeSpecName "kube-api-access-n4zd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:57:30 crc kubenswrapper[5016]: I1011 07:57:30.981956 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc4a31f9-083f-49bf-866a-b6b970910e4d-kube-api-access-zwssw" (OuterVolumeSpecName: "kube-api-access-zwssw") pod "fc4a31f9-083f-49bf-866a-b6b970910e4d" (UID: "fc4a31f9-083f-49bf-866a-b6b970910e4d"). InnerVolumeSpecName "kube-api-access-zwssw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:57:30 crc kubenswrapper[5016]: I1011 07:57:30.982627 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c70d5c8e-2545-45f9-ba1a-f4f1755f3729-kube-api-access-pfc6c" (OuterVolumeSpecName: "kube-api-access-pfc6c") pod "c70d5c8e-2545-45f9-ba1a-f4f1755f3729" (UID: "c70d5c8e-2545-45f9-ba1a-f4f1755f3729"). InnerVolumeSpecName "kube-api-access-pfc6c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:57:31 crc kubenswrapper[5016]: I1011 07:57:31.077437 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwssw\" (UniqueName: \"kubernetes.io/projected/fc4a31f9-083f-49bf-866a-b6b970910e4d-kube-api-access-zwssw\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:31 crc kubenswrapper[5016]: I1011 07:57:31.077461 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfc6c\" (UniqueName: \"kubernetes.io/projected/c70d5c8e-2545-45f9-ba1a-f4f1755f3729-kube-api-access-pfc6c\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:31 crc kubenswrapper[5016]: I1011 07:57:31.077472 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4zd5\" (UniqueName: \"kubernetes.io/projected/4fcfe522-4dc0-41a6-b29d-75f00142585e-kube-api-access-n4zd5\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:31 crc kubenswrapper[5016]: I1011 07:57:31.415869 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-88p4q" event={"ID":"c70d5c8e-2545-45f9-ba1a-f4f1755f3729","Type":"ContainerDied","Data":"ce0c395ae41b525744388af759444b993bd85d3f1cb9062c13c419f18e0175b5"} Oct 11 07:57:31 crc kubenswrapper[5016]: I1011 07:57:31.415909 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce0c395ae41b525744388af759444b993bd85d3f1cb9062c13c419f18e0175b5" Oct 11 07:57:31 crc kubenswrapper[5016]: I1011 07:57:31.415961 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-88p4q" Oct 11 07:57:31 crc kubenswrapper[5016]: I1011 07:57:31.419889 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mcv5t" event={"ID":"4fcfe522-4dc0-41a6-b29d-75f00142585e","Type":"ContainerDied","Data":"f86972111bded07a09435e3947af9f8eac93faae444b5ba6f490e93e9f7c04ea"} Oct 11 07:57:31 crc kubenswrapper[5016]: I1011 07:57:31.419925 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f86972111bded07a09435e3947af9f8eac93faae444b5ba6f490e93e9f7c04ea" Oct 11 07:57:31 crc kubenswrapper[5016]: I1011 07:57:31.419983 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mcv5t" Oct 11 07:57:31 crc kubenswrapper[5016]: I1011 07:57:31.424323 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-5d7xn" event={"ID":"fc4a31f9-083f-49bf-866a-b6b970910e4d","Type":"ContainerDied","Data":"d3acc0ad563a52adbc24d0b7491d4fd70f7015ce1b51971a8924e4d079cec018"} Oct 11 07:57:31 crc kubenswrapper[5016]: I1011 07:57:31.424352 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3acc0ad563a52adbc24d0b7491d4fd70f7015ce1b51971a8924e4d079cec018" Oct 11 07:57:31 crc kubenswrapper[5016]: I1011 07:57:31.424394 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-5d7xn" Oct 11 07:57:31 crc kubenswrapper[5016]: I1011 07:57:31.443870 5016 generic.go:334] "Generic (PLEG): container finished" podID="0bc0b78e-f920-4af6-901f-ef0d92d9b046" containerID="5851a90d93e6ab43b92d30a327cf39693067d04f573744cd4c4df7df7e24b86e" exitCode=0 Oct 11 07:57:31 crc kubenswrapper[5016]: I1011 07:57:31.443900 5016 generic.go:334] "Generic (PLEG): container finished" podID="0bc0b78e-f920-4af6-901f-ef0d92d9b046" containerID="3df9b73419a05c014c37a7fbdf074076fd8a87ff3695a3c2382769cf7b713e05" exitCode=2 Oct 11 07:57:31 crc kubenswrapper[5016]: I1011 07:57:31.443908 5016 generic.go:334] "Generic (PLEG): container finished" podID="0bc0b78e-f920-4af6-901f-ef0d92d9b046" containerID="34c46232123836b5dbe77b0f02f0c79445c6fc10cca762fd120cd8718bd18cd0" exitCode=0 Oct 11 07:57:31 crc kubenswrapper[5016]: I1011 07:57:31.443937 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bc0b78e-f920-4af6-901f-ef0d92d9b046","Type":"ContainerDied","Data":"5851a90d93e6ab43b92d30a327cf39693067d04f573744cd4c4df7df7e24b86e"} Oct 11 07:57:31 crc kubenswrapper[5016]: I1011 07:57:31.443960 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bc0b78e-f920-4af6-901f-ef0d92d9b046","Type":"ContainerDied","Data":"3df9b73419a05c014c37a7fbdf074076fd8a87ff3695a3c2382769cf7b713e05"} Oct 11 07:57:31 crc kubenswrapper[5016]: I1011 07:57:31.443971 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bc0b78e-f920-4af6-901f-ef0d92d9b046","Type":"ContainerDied","Data":"34c46232123836b5dbe77b0f02f0c79445c6fc10cca762fd120cd8718bd18cd0"} Oct 11 07:57:31 crc kubenswrapper[5016]: I1011 07:57:31.667539 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-df49866-g5nkl" podUID="771aebc7-25b0-45ef-bbd4-ed6c367b998b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.141:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.141:8443: connect: connection refused" Oct 11 07:57:31 crc kubenswrapper[5016]: I1011 07:57:31.668621 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-df49866-g5nkl" Oct 11 07:57:31 crc kubenswrapper[5016]: I1011 07:57:31.780244 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 11 07:57:31 crc kubenswrapper[5016]: I1011 07:57:31.780513 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="d360ab05-372a-4b41-8abb-2c2b4257123c" containerName="kube-state-metrics" containerID="cri-o://bd5abb83dd30ddd5271a1a2076727e9f28366092b890a3b8fce1cad1cbaa4b66" gracePeriod=30 Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.286344 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.403387 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbq2x\" (UniqueName: \"kubernetes.io/projected/d360ab05-372a-4b41-8abb-2c2b4257123c-kube-api-access-rbq2x\") pod \"d360ab05-372a-4b41-8abb-2c2b4257123c\" (UID: \"d360ab05-372a-4b41-8abb-2c2b4257123c\") " Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.434871 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d360ab05-372a-4b41-8abb-2c2b4257123c-kube-api-access-rbq2x" (OuterVolumeSpecName: "kube-api-access-rbq2x") pod "d360ab05-372a-4b41-8abb-2c2b4257123c" (UID: "d360ab05-372a-4b41-8abb-2c2b4257123c"). InnerVolumeSpecName "kube-api-access-rbq2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.466809 5016 generic.go:334] "Generic (PLEG): container finished" podID="d360ab05-372a-4b41-8abb-2c2b4257123c" containerID="bd5abb83dd30ddd5271a1a2076727e9f28366092b890a3b8fce1cad1cbaa4b66" exitCode=2 Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.466885 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d360ab05-372a-4b41-8abb-2c2b4257123c","Type":"ContainerDied","Data":"bd5abb83dd30ddd5271a1a2076727e9f28366092b890a3b8fce1cad1cbaa4b66"} Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.466914 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d360ab05-372a-4b41-8abb-2c2b4257123c","Type":"ContainerDied","Data":"fca8ac86458843fecda9a0df16a3a7992351fd5b403adaff45207bbbacb3e78b"} Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.466932 5016 scope.go:117] "RemoveContainer" containerID="bd5abb83dd30ddd5271a1a2076727e9f28366092b890a3b8fce1cad1cbaa4b66" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.467063 5016 util.go:48] "No ready sandbox for pod can be found. 
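[Editor's note] gracePeriod=30 above means the runtime delivers SIGTERM first and escalates to SIGKILL only if the container is still running when the grace period expires; here the kube-state-metrics container exits (exitCode=2) well inside the window. A small sketch of the deadline arithmetic using the timestamps from this log (illustrative only):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Kill issued at 07:57:31.780513 with gracePeriod=30 (seconds).
	killed := time.Date(2025, 10, 11, 7, 57, 31, 780513000, time.UTC)
	deadline := killed.Add(30 * time.Second) // SIGKILL would fire at 07:58:01.780513

	// "container finished" observed at 07:57:32.466809.
	died := time.Date(2025, 10, 11, 7, 57, 32, 466809000, time.UTC)
	fmt.Println("exited before SIGKILL deadline:", died.Before(deadline)) // true
}
```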
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.479127 5016 generic.go:334] "Generic (PLEG): container finished" podID="0bc0b78e-f920-4af6-901f-ef0d92d9b046" containerID="20f54aed6e9195da8cb6b9968a4eeb0add55f89bbbf7626986a8b8a31b17f2c7" exitCode=0 Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.479171 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bc0b78e-f920-4af6-901f-ef0d92d9b046","Type":"ContainerDied","Data":"20f54aed6e9195da8cb6b9968a4eeb0add55f89bbbf7626986a8b8a31b17f2c7"} Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.507847 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbq2x\" (UniqueName: \"kubernetes.io/projected/d360ab05-372a-4b41-8abb-2c2b4257123c-kube-api-access-rbq2x\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.512870 5016 scope.go:117] "RemoveContainer" containerID="bd5abb83dd30ddd5271a1a2076727e9f28366092b890a3b8fce1cad1cbaa4b66" Oct 11 07:57:32 crc kubenswrapper[5016]: E1011 07:57:32.514273 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd5abb83dd30ddd5271a1a2076727e9f28366092b890a3b8fce1cad1cbaa4b66\": container with ID starting with bd5abb83dd30ddd5271a1a2076727e9f28366092b890a3b8fce1cad1cbaa4b66 not found: ID does not exist" containerID="bd5abb83dd30ddd5271a1a2076727e9f28366092b890a3b8fce1cad1cbaa4b66" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.514371 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd5abb83dd30ddd5271a1a2076727e9f28366092b890a3b8fce1cad1cbaa4b66"} err="failed to get container status \"bd5abb83dd30ddd5271a1a2076727e9f28366092b890a3b8fce1cad1cbaa4b66\": rpc error: code = NotFound desc = could not find container \"bd5abb83dd30ddd5271a1a2076727e9f28366092b890a3b8fce1cad1cbaa4b66\": container with ID starting with bd5abb83dd30ddd5271a1a2076727e9f28366092b890a3b8fce1cad1cbaa4b66 not found: ID does not exist" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.523704 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.542483 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.555288 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 11 07:57:32 crc kubenswrapper[5016]: E1011 07:57:32.555801 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fcfe522-4dc0-41a6-b29d-75f00142585e" containerName="mariadb-database-create" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.555826 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fcfe522-4dc0-41a6-b29d-75f00142585e" containerName="mariadb-database-create" Oct 11 07:57:32 crc kubenswrapper[5016]: E1011 07:57:32.555839 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc4a31f9-083f-49bf-866a-b6b970910e4d" containerName="mariadb-database-create" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.555848 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc4a31f9-083f-49bf-866a-b6b970910e4d" containerName="mariadb-database-create" Oct 11 07:57:32 crc kubenswrapper[5016]: E1011 07:57:32.555916 5016 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c70d5c8e-2545-45f9-ba1a-f4f1755f3729" containerName="mariadb-database-create" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.555925 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="c70d5c8e-2545-45f9-ba1a-f4f1755f3729" containerName="mariadb-database-create" Oct 11 07:57:32 crc kubenswrapper[5016]: E1011 07:57:32.555954 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d360ab05-372a-4b41-8abb-2c2b4257123c" containerName="kube-state-metrics" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.555962 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="d360ab05-372a-4b41-8abb-2c2b4257123c" containerName="kube-state-metrics" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.556168 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="c70d5c8e-2545-45f9-ba1a-f4f1755f3729" containerName="mariadb-database-create" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.556189 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc4a31f9-083f-49bf-866a-b6b970910e4d" containerName="mariadb-database-create" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.556199 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fcfe522-4dc0-41a6-b29d-75f00142585e" containerName="mariadb-database-create" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.556227 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="d360ab05-372a-4b41-8abb-2c2b4257123c" containerName="kube-state-metrics" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.556993 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.561533 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.561948 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.565312 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.620990 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.711031 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0bc0b78e-f920-4af6-901f-ef0d92d9b046-sg-core-conf-yaml\") pod \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\" (UID: \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\") " Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.711111 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bc0b78e-f920-4af6-901f-ef0d92d9b046-run-httpd\") pod \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\" (UID: \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\") " Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.711130 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bc0b78e-f920-4af6-901f-ef0d92d9b046-log-httpd\") pod \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\" (UID: \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\") " Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.711196 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bc0b78e-f920-4af6-901f-ef0d92d9b046-scripts\") pod \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\" (UID: \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\") " Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.711320 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc0b78e-f920-4af6-901f-ef0d92d9b046-combined-ca-bundle\") pod \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\" (UID: \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\") " Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.711370 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bc0b78e-f920-4af6-901f-ef0d92d9b046-config-data\") pod \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\" (UID: \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\") " Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.711409 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwdxl\" (UniqueName: \"kubernetes.io/projected/0bc0b78e-f920-4af6-901f-ef0d92d9b046-kube-api-access-dwdxl\") pod \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\" (UID: \"0bc0b78e-f920-4af6-901f-ef0d92d9b046\") " Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.711696 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab9562f-f510-4edb-b4a5-5a05687424f8-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7ab9562f-f510-4edb-b4a5-5a05687424f8\") " pod="openstack/kube-state-metrics-0" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.711753 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z97tf\" (UniqueName: \"kubernetes.io/projected/7ab9562f-f510-4edb-b4a5-5a05687424f8-kube-api-access-z97tf\") pod \"kube-state-metrics-0\" (UID: \"7ab9562f-f510-4edb-b4a5-5a05687424f8\") " pod="openstack/kube-state-metrics-0" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.711789 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/7ab9562f-f510-4edb-b4a5-5a05687424f8-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7ab9562f-f510-4edb-b4a5-5a05687424f8\") " pod="openstack/kube-state-metrics-0" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.711830 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ab9562f-f510-4edb-b4a5-5a05687424f8-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7ab9562f-f510-4edb-b4a5-5a05687424f8\") " pod="openstack/kube-state-metrics-0" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.712016 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bc0b78e-f920-4af6-901f-ef0d92d9b046-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0bc0b78e-f920-4af6-901f-ef0d92d9b046" (UID: "0bc0b78e-f920-4af6-901f-ef0d92d9b046"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.712882 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bc0b78e-f920-4af6-901f-ef0d92d9b046-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0bc0b78e-f920-4af6-901f-ef0d92d9b046" (UID: "0bc0b78e-f920-4af6-901f-ef0d92d9b046"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.715859 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc0b78e-f920-4af6-901f-ef0d92d9b046-scripts" (OuterVolumeSpecName: "scripts") pod "0bc0b78e-f920-4af6-901f-ef0d92d9b046" (UID: "0bc0b78e-f920-4af6-901f-ef0d92d9b046"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.743909 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bc0b78e-f920-4af6-901f-ef0d92d9b046-kube-api-access-dwdxl" (OuterVolumeSpecName: "kube-api-access-dwdxl") pod "0bc0b78e-f920-4af6-901f-ef0d92d9b046" (UID: "0bc0b78e-f920-4af6-901f-ef0d92d9b046"). InnerVolumeSpecName "kube-api-access-dwdxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.759127 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc0b78e-f920-4af6-901f-ef0d92d9b046-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0bc0b78e-f920-4af6-901f-ef0d92d9b046" (UID: "0bc0b78e-f920-4af6-901f-ef0d92d9b046"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.816941 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7ab9562f-f510-4edb-b4a5-5a05687424f8-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7ab9562f-f510-4edb-b4a5-5a05687424f8\") " pod="openstack/kube-state-metrics-0" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.817029 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ab9562f-f510-4edb-b4a5-5a05687424f8-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7ab9562f-f510-4edb-b4a5-5a05687424f8\") " pod="openstack/kube-state-metrics-0" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.817126 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab9562f-f510-4edb-b4a5-5a05687424f8-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7ab9562f-f510-4edb-b4a5-5a05687424f8\") " pod="openstack/kube-state-metrics-0" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.817183 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z97tf\" (UniqueName: \"kubernetes.io/projected/7ab9562f-f510-4edb-b4a5-5a05687424f8-kube-api-access-z97tf\") pod \"kube-state-metrics-0\" (UID: \"7ab9562f-f510-4edb-b4a5-5a05687424f8\") " pod="openstack/kube-state-metrics-0" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.817249 5016 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bc0b78e-f920-4af6-901f-ef0d92d9b046-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.817264 5016 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bc0b78e-f920-4af6-901f-ef0d92d9b046-scripts\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.817277 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwdxl\" (UniqueName: \"kubernetes.io/projected/0bc0b78e-f920-4af6-901f-ef0d92d9b046-kube-api-access-dwdxl\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.817287 5016 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0bc0b78e-f920-4af6-901f-ef0d92d9b046-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.817298 5016 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0bc0b78e-f920-4af6-901f-ef0d92d9b046-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.822589 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab9562f-f510-4edb-b4a5-5a05687424f8-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"7ab9562f-f510-4edb-b4a5-5a05687424f8\") " pod="openstack/kube-state-metrics-0" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.823929 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7ab9562f-f510-4edb-b4a5-5a05687424f8-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"7ab9562f-f510-4edb-b4a5-5a05687424f8\") " pod="openstack/kube-state-metrics-0" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.830288 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/7ab9562f-f510-4edb-b4a5-5a05687424f8-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"7ab9562f-f510-4edb-b4a5-5a05687424f8\") " pod="openstack/kube-state-metrics-0" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.833268 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z97tf\" (UniqueName: \"kubernetes.io/projected/7ab9562f-f510-4edb-b4a5-5a05687424f8-kube-api-access-z97tf\") pod \"kube-state-metrics-0\" (UID: \"7ab9562f-f510-4edb-b4a5-5a05687424f8\") " pod="openstack/kube-state-metrics-0" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.834355 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc0b78e-f920-4af6-901f-ef0d92d9b046-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0bc0b78e-f920-4af6-901f-ef0d92d9b046" (UID: "0bc0b78e-f920-4af6-901f-ef0d92d9b046"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.882031 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc0b78e-f920-4af6-901f-ef0d92d9b046-config-data" (OuterVolumeSpecName: "config-data") pod "0bc0b78e-f920-4af6-901f-ef0d92d9b046" (UID: "0bc0b78e-f920-4af6-901f-ef0d92d9b046"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.919839 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc0b78e-f920-4af6-901f-ef0d92d9b046-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.919882 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bc0b78e-f920-4af6-901f-ef0d92d9b046-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:32 crc kubenswrapper[5016]: I1011 07:57:32.932367 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.145556 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d360ab05-372a-4b41-8abb-2c2b4257123c" path="/var/lib/kubelet/pods/d360ab05-372a-4b41-8abb-2c2b4257123c/volumes" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.410588 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.490061 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7ab9562f-f510-4edb-b4a5-5a05687424f8","Type":"ContainerStarted","Data":"07ed60e497e77c9bffbeb6b14a4eb22fa0c95401dc655afe2ff87275cb736cc4"} Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.493205 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0bc0b78e-f920-4af6-901f-ef0d92d9b046","Type":"ContainerDied","Data":"59feade632ebbe148127c3a76ab34526111b9d639307c40180d4aaa3dcd126d4"} Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.493275 5016 scope.go:117] "RemoveContainer" containerID="5851a90d93e6ab43b92d30a327cf39693067d04f573744cd4c4df7df7e24b86e" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.493330 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.511443 5016 scope.go:117] "RemoveContainer" containerID="3df9b73419a05c014c37a7fbdf074076fd8a87ff3695a3c2382769cf7b713e05" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.515872 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.527759 5016 scope.go:117] "RemoveContainer" containerID="20f54aed6e9195da8cb6b9968a4eeb0add55f89bbbf7626986a8b8a31b17f2c7" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.529086 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.539494 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 11 07:57:33 crc kubenswrapper[5016]: E1011 07:57:33.539860 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bc0b78e-f920-4af6-901f-ef0d92d9b046" containerName="sg-core" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.539879 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bc0b78e-f920-4af6-901f-ef0d92d9b046" containerName="sg-core" Oct 11 07:57:33 crc kubenswrapper[5016]: E1011 07:57:33.539897 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bc0b78e-f920-4af6-901f-ef0d92d9b046" containerName="ceilometer-notification-agent" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.539905 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bc0b78e-f920-4af6-901f-ef0d92d9b046" containerName="ceilometer-notification-agent" Oct 11 07:57:33 crc kubenswrapper[5016]: E1011 07:57:33.539947 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bc0b78e-f920-4af6-901f-ef0d92d9b046" containerName="ceilometer-central-agent" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.539954 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bc0b78e-f920-4af6-901f-ef0d92d9b046" containerName="ceilometer-central-agent" Oct 11 07:57:33 crc kubenswrapper[5016]: E1011 07:57:33.539963 5016 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="0bc0b78e-f920-4af6-901f-ef0d92d9b046" containerName="proxy-httpd" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.539970 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bc0b78e-f920-4af6-901f-ef0d92d9b046" containerName="proxy-httpd" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.540114 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bc0b78e-f920-4af6-901f-ef0d92d9b046" containerName="ceilometer-central-agent" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.540130 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bc0b78e-f920-4af6-901f-ef0d92d9b046" containerName="proxy-httpd" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.540143 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bc0b78e-f920-4af6-901f-ef0d92d9b046" containerName="ceilometer-notification-agent" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.540160 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bc0b78e-f920-4af6-901f-ef0d92d9b046" containerName="sg-core" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.542389 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.544429 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.544724 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.545627 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.554589 5016 scope.go:117] "RemoveContainer" containerID="34c46232123836b5dbe77b0f02f0c79445c6fc10cca762fd120cd8718bd18cd0" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.563171 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.638913 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " pod="openstack/ceilometer-0" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.639011 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-config-data\") pod \"ceilometer-0\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " pod="openstack/ceilometer-0" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.639033 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-scripts\") pod \"ceilometer-0\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " pod="openstack/ceilometer-0" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.639198 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9d45776-4422-4a0a-a656-7147b88f6f9b-log-httpd\") pod \"ceilometer-0\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " 
pod="openstack/ceilometer-0" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.639287 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " pod="openstack/ceilometer-0" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.639972 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " pod="openstack/ceilometer-0" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.640022 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9d45776-4422-4a0a-a656-7147b88f6f9b-run-httpd\") pod \"ceilometer-0\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " pod="openstack/ceilometer-0" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.640108 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhckl\" (UniqueName: \"kubernetes.io/projected/f9d45776-4422-4a0a-a656-7147b88f6f9b-kube-api-access-qhckl\") pod \"ceilometer-0\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " pod="openstack/ceilometer-0" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.741315 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " pod="openstack/ceilometer-0" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.741407 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " pod="openstack/ceilometer-0" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.741441 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9d45776-4422-4a0a-a656-7147b88f6f9b-run-httpd\") pod \"ceilometer-0\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " pod="openstack/ceilometer-0" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.741476 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhckl\" (UniqueName: \"kubernetes.io/projected/f9d45776-4422-4a0a-a656-7147b88f6f9b-kube-api-access-qhckl\") pod \"ceilometer-0\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " pod="openstack/ceilometer-0" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.741515 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " pod="openstack/ceilometer-0" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.741611 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-config-data\") pod \"ceilometer-0\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " pod="openstack/ceilometer-0" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.741643 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-scripts\") pod \"ceilometer-0\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " pod="openstack/ceilometer-0" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.741723 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9d45776-4422-4a0a-a656-7147b88f6f9b-log-httpd\") pod \"ceilometer-0\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " pod="openstack/ceilometer-0" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.742124 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9d45776-4422-4a0a-a656-7147b88f6f9b-run-httpd\") pod \"ceilometer-0\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " pod="openstack/ceilometer-0" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.742178 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9d45776-4422-4a0a-a656-7147b88f6f9b-log-httpd\") pod \"ceilometer-0\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " pod="openstack/ceilometer-0" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.746393 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-config-data\") pod \"ceilometer-0\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " pod="openstack/ceilometer-0" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.746998 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " pod="openstack/ceilometer-0" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.750931 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-scripts\") pod \"ceilometer-0\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " pod="openstack/ceilometer-0" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.751032 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " pod="openstack/ceilometer-0" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.751462 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " pod="openstack/ceilometer-0" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.762927 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhckl\" (UniqueName: \"kubernetes.io/projected/f9d45776-4422-4a0a-a656-7147b88f6f9b-kube-api-access-qhckl\") pod 
\"ceilometer-0\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " pod="openstack/ceilometer-0" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.860430 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.958124 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-3e52-account-create-5w9qv"] Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.960454 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3e52-account-create-5w9qv" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.963292 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Oct 11 07:57:33 crc kubenswrapper[5016]: I1011 07:57:33.970354 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3e52-account-create-5w9qv"] Oct 11 07:57:34 crc kubenswrapper[5016]: I1011 07:57:34.046398 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfhb4\" (UniqueName: \"kubernetes.io/projected/a83ef869-3d57-4f37-aba9-d279183b0413-kube-api-access-jfhb4\") pod \"nova-api-3e52-account-create-5w9qv\" (UID: \"a83ef869-3d57-4f37-aba9-d279183b0413\") " pod="openstack/nova-api-3e52-account-create-5w9qv" Oct 11 07:57:34 crc kubenswrapper[5016]: I1011 07:57:34.068906 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-6797-account-create-5xbpq"] Oct 11 07:57:34 crc kubenswrapper[5016]: I1011 07:57:34.070142 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6797-account-create-5xbpq" Oct 11 07:57:34 crc kubenswrapper[5016]: I1011 07:57:34.072573 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Oct 11 07:57:34 crc kubenswrapper[5016]: I1011 07:57:34.093802 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-6797-account-create-5xbpq"] Oct 11 07:57:34 crc kubenswrapper[5016]: I1011 07:57:34.148537 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7mxs\" (UniqueName: \"kubernetes.io/projected/f75907a7-0421-4c6e-8cf9-d196d3c8c0e6-kube-api-access-g7mxs\") pod \"nova-cell0-6797-account-create-5xbpq\" (UID: \"f75907a7-0421-4c6e-8cf9-d196d3c8c0e6\") " pod="openstack/nova-cell0-6797-account-create-5xbpq" Oct 11 07:57:34 crc kubenswrapper[5016]: I1011 07:57:34.148764 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfhb4\" (UniqueName: \"kubernetes.io/projected/a83ef869-3d57-4f37-aba9-d279183b0413-kube-api-access-jfhb4\") pod \"nova-api-3e52-account-create-5w9qv\" (UID: \"a83ef869-3d57-4f37-aba9-d279183b0413\") " pod="openstack/nova-api-3e52-account-create-5w9qv" Oct 11 07:57:34 crc kubenswrapper[5016]: I1011 07:57:34.169316 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfhb4\" (UniqueName: \"kubernetes.io/projected/a83ef869-3d57-4f37-aba9-d279183b0413-kube-api-access-jfhb4\") pod \"nova-api-3e52-account-create-5w9qv\" (UID: \"a83ef869-3d57-4f37-aba9-d279183b0413\") " pod="openstack/nova-api-3e52-account-create-5w9qv" Oct 11 07:57:34 crc kubenswrapper[5016]: I1011 07:57:34.249726 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7mxs\" (UniqueName: 
\"kubernetes.io/projected/f75907a7-0421-4c6e-8cf9-d196d3c8c0e6-kube-api-access-g7mxs\") pod \"nova-cell0-6797-account-create-5xbpq\" (UID: \"f75907a7-0421-4c6e-8cf9-d196d3c8c0e6\") " pod="openstack/nova-cell0-6797-account-create-5xbpq" Oct 11 07:57:34 crc kubenswrapper[5016]: I1011 07:57:34.287942 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3e52-account-create-5w9qv" Oct 11 07:57:34 crc kubenswrapper[5016]: I1011 07:57:34.289235 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7mxs\" (UniqueName: \"kubernetes.io/projected/f75907a7-0421-4c6e-8cf9-d196d3c8c0e6-kube-api-access-g7mxs\") pod \"nova-cell0-6797-account-create-5xbpq\" (UID: \"f75907a7-0421-4c6e-8cf9-d196d3c8c0e6\") " pod="openstack/nova-cell0-6797-account-create-5xbpq" Oct 11 07:57:34 crc kubenswrapper[5016]: I1011 07:57:34.388970 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 11 07:57:34 crc kubenswrapper[5016]: I1011 07:57:34.395885 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6797-account-create-5xbpq" Oct 11 07:57:34 crc kubenswrapper[5016]: I1011 07:57:34.527640 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9d45776-4422-4a0a-a656-7147b88f6f9b","Type":"ContainerStarted","Data":"881ab1e352f42927d04a8aff7d4c9a71c67674e35b08acdf304f0e22582ff880"} Oct 11 07:57:34 crc kubenswrapper[5016]: I1011 07:57:34.534262 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7ab9562f-f510-4edb-b4a5-5a05687424f8","Type":"ContainerStarted","Data":"3c4548f205e3a94db2f9f66cfa932c8a418fcc44f7a4e6aab07c0de0e8cc604b"} Oct 11 07:57:34 crc kubenswrapper[5016]: I1011 07:57:34.535508 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 11 07:57:34 crc kubenswrapper[5016]: I1011 07:57:34.567082 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.146705652 podStartE2EDuration="2.567061622s" podCreationTimestamp="2025-10-11 07:57:32 +0000 UTC" firstStartedPulling="2025-10-11 07:57:33.410521026 +0000 UTC m=+1041.310976992" lastFinishedPulling="2025-10-11 07:57:33.830877016 +0000 UTC m=+1041.731332962" observedRunningTime="2025-10-11 07:57:34.562003812 +0000 UTC m=+1042.462459758" watchObservedRunningTime="2025-10-11 07:57:34.567061622 +0000 UTC m=+1042.467517568" Oct 11 07:57:34 crc kubenswrapper[5016]: I1011 07:57:34.659219 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3e52-account-create-5w9qv"] Oct 11 07:57:34 crc kubenswrapper[5016]: I1011 07:57:34.934690 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-6797-account-create-5xbpq"] Oct 11 07:57:34 crc kubenswrapper[5016]: E1011 07:57:34.963025 5016 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/59819344fce48f64a713750268479e6cd255f66b918c39417b0da095a16ea300/diff" to get inode usage: stat /var/lib/containers/storage/overlay/59819344fce48f64a713750268479e6cd255f66b918c39417b0da095a16ea300/diff: no such file or directory, extraDiskErr: Oct 11 07:57:35 crc kubenswrapper[5016]: I1011 07:57:35.147440 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bc0b78e-f920-4af6-901f-ef0d92d9b046" 
path="/var/lib/kubelet/pods/0bc0b78e-f920-4af6-901f-ef0d92d9b046/volumes" Oct 11 07:57:35 crc kubenswrapper[5016]: I1011 07:57:35.545392 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9d45776-4422-4a0a-a656-7147b88f6f9b","Type":"ContainerStarted","Data":"83b1335f8c3efb1871920b42132e7e956407c33e0ecda01c5c9f41ee83c9998b"} Oct 11 07:57:35 crc kubenswrapper[5016]: I1011 07:57:35.547584 5016 generic.go:334] "Generic (PLEG): container finished" podID="a83ef869-3d57-4f37-aba9-d279183b0413" containerID="f870a55d476bfe362ca525b987a4d3406cba4daa6a9c55382d5ec124e28cba7c" exitCode=0 Oct 11 07:57:35 crc kubenswrapper[5016]: I1011 07:57:35.547640 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3e52-account-create-5w9qv" event={"ID":"a83ef869-3d57-4f37-aba9-d279183b0413","Type":"ContainerDied","Data":"f870a55d476bfe362ca525b987a4d3406cba4daa6a9c55382d5ec124e28cba7c"} Oct 11 07:57:35 crc kubenswrapper[5016]: I1011 07:57:35.547682 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3e52-account-create-5w9qv" event={"ID":"a83ef869-3d57-4f37-aba9-d279183b0413","Type":"ContainerStarted","Data":"2e84411b4bf781a7807c87e43c8012cc6ec12917fe0f870ab7952b0ca5b3d5f0"} Oct 11 07:57:35 crc kubenswrapper[5016]: I1011 07:57:35.549435 5016 generic.go:334] "Generic (PLEG): container finished" podID="f75907a7-0421-4c6e-8cf9-d196d3c8c0e6" containerID="c906c1fa96f4259b063c1ce09ea48f3b96193304c561ef6df385533f58ab16dc" exitCode=0 Oct 11 07:57:35 crc kubenswrapper[5016]: I1011 07:57:35.550369 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6797-account-create-5xbpq" event={"ID":"f75907a7-0421-4c6e-8cf9-d196d3c8c0e6","Type":"ContainerDied","Data":"c906c1fa96f4259b063c1ce09ea48f3b96193304c561ef6df385533f58ab16dc"} Oct 11 07:57:35 crc kubenswrapper[5016]: I1011 07:57:35.550403 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6797-account-create-5xbpq" event={"ID":"f75907a7-0421-4c6e-8cf9-d196d3c8c0e6","Type":"ContainerStarted","Data":"459b739b3c5de820dda1dc89f15c0391ad782107e353523f4a595f1c95af8149"} Oct 11 07:57:36 crc kubenswrapper[5016]: W1011 07:57:36.152224 5016 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fcfe522_4dc0_41a6_b29d_75f00142585e.slice/crio-150ea039023a37f879adb30746da7e608610712e45a95a0f41dc819e296a338b.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fcfe522_4dc0_41a6_b29d_75f00142585e.slice/crio-150ea039023a37f879adb30746da7e608610712e45a95a0f41dc819e296a338b.scope: no such file or directory Oct 11 07:57:36 crc kubenswrapper[5016]: W1011 07:57:36.152402 5016 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc70d5c8e_2545_45f9_ba1a_f4f1755f3729.slice/crio-ce0c395ae41b525744388af759444b993bd85d3f1cb9062c13c419f18e0175b5": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc70d5c8e_2545_45f9_ba1a_f4f1755f3729.slice/crio-ce0c395ae41b525744388af759444b993bd85d3f1cb9062c13c419f18e0175b5: no such file or directory Oct 11 07:57:36 crc kubenswrapper[5016]: W1011 07:57:36.152420 5016 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc4a31f9_083f_49bf_866a_b6b970910e4d.slice/crio-conmon-87be5ce31d3e28f5815f05515ba4b51ae80935358f78bc6fb1e44c922f4c4073.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc4a31f9_083f_49bf_866a_b6b970910e4d.slice/crio-conmon-87be5ce31d3e28f5815f05515ba4b51ae80935358f78bc6fb1e44c922f4c4073.scope: no such file or directory Oct 11 07:57:36 crc kubenswrapper[5016]: W1011 07:57:36.152436 5016 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc4a31f9_083f_49bf_866a_b6b970910e4d.slice/crio-87be5ce31d3e28f5815f05515ba4b51ae80935358f78bc6fb1e44c922f4c4073.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc4a31f9_083f_49bf_866a_b6b970910e4d.slice/crio-87be5ce31d3e28f5815f05515ba4b51ae80935358f78bc6fb1e44c922f4c4073.scope: no such file or directory Oct 11 07:57:36 crc kubenswrapper[5016]: W1011 07:57:36.152459 5016 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc70d5c8e_2545_45f9_ba1a_f4f1755f3729.slice/crio-conmon-d64ba490de862b1225ef273b9eb6996c4d19b9b0ff4e9fa0247d7f8bf6064bef.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc70d5c8e_2545_45f9_ba1a_f4f1755f3729.slice/crio-conmon-d64ba490de862b1225ef273b9eb6996c4d19b9b0ff4e9fa0247d7f8bf6064bef.scope: no such file or directory Oct 11 07:57:36 crc kubenswrapper[5016]: W1011 07:57:36.152482 5016 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc70d5c8e_2545_45f9_ba1a_f4f1755f3729.slice/crio-d64ba490de862b1225ef273b9eb6996c4d19b9b0ff4e9fa0247d7f8bf6064bef.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc70d5c8e_2545_45f9_ba1a_f4f1755f3729.slice/crio-d64ba490de862b1225ef273b9eb6996c4d19b9b0ff4e9fa0247d7f8bf6064bef.scope: no such file or directory Oct 11 07:57:36 crc kubenswrapper[5016]: E1011 07:57:36.401098 5016 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod771aebc7_25b0_45ef_bbd4_ed6c367b998b.slice/crio-88a3fa4b2d02d92df9c385aedaee1f94d3f98a5a70e070da060f674d8839c7cf.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod771aebc7_25b0_45ef_bbd4_ed6c367b998b.slice/crio-conmon-88a3fa4b2d02d92df9c385aedaee1f94d3f98a5a70e070da060f674d8839c7cf.scope\": RecentStats: unable to find data in memory cache]" Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.429459 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-df49866-g5nkl" Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.494086 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/771aebc7-25b0-45ef-bbd4-ed6c367b998b-scripts\") pod \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\" (UID: \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\") " Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.494824 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zrhr\" (UniqueName: \"kubernetes.io/projected/771aebc7-25b0-45ef-bbd4-ed6c367b998b-kube-api-access-6zrhr\") pod \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\" (UID: \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\") " Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.494959 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/771aebc7-25b0-45ef-bbd4-ed6c367b998b-combined-ca-bundle\") pod \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\" (UID: \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\") " Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.495127 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/771aebc7-25b0-45ef-bbd4-ed6c367b998b-config-data\") pod \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\" (UID: \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\") " Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.495260 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/771aebc7-25b0-45ef-bbd4-ed6c367b998b-horizon-tls-certs\") pod \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\" (UID: \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\") " Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.495362 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/771aebc7-25b0-45ef-bbd4-ed6c367b998b-logs\") pod \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\" (UID: \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\") " Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.495475 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/771aebc7-25b0-45ef-bbd4-ed6c367b998b-horizon-secret-key\") pod \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\" (UID: \"771aebc7-25b0-45ef-bbd4-ed6c367b998b\") " Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.497192 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/771aebc7-25b0-45ef-bbd4-ed6c367b998b-logs" (OuterVolumeSpecName: "logs") pod "771aebc7-25b0-45ef-bbd4-ed6c367b998b" (UID: "771aebc7-25b0-45ef-bbd4-ed6c367b998b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.497372 5016 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/771aebc7-25b0-45ef-bbd4-ed6c367b998b-logs\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.502820 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/771aebc7-25b0-45ef-bbd4-ed6c367b998b-kube-api-access-6zrhr" (OuterVolumeSpecName: "kube-api-access-6zrhr") pod "771aebc7-25b0-45ef-bbd4-ed6c367b998b" (UID: "771aebc7-25b0-45ef-bbd4-ed6c367b998b"). 
InnerVolumeSpecName "kube-api-access-6zrhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.504191 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/771aebc7-25b0-45ef-bbd4-ed6c367b998b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "771aebc7-25b0-45ef-bbd4-ed6c367b998b" (UID: "771aebc7-25b0-45ef-bbd4-ed6c367b998b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.526862 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/771aebc7-25b0-45ef-bbd4-ed6c367b998b-scripts" (OuterVolumeSpecName: "scripts") pod "771aebc7-25b0-45ef-bbd4-ed6c367b998b" (UID: "771aebc7-25b0-45ef-bbd4-ed6c367b998b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.529346 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/771aebc7-25b0-45ef-bbd4-ed6c367b998b-config-data" (OuterVolumeSpecName: "config-data") pod "771aebc7-25b0-45ef-bbd4-ed6c367b998b" (UID: "771aebc7-25b0-45ef-bbd4-ed6c367b998b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.548153 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/771aebc7-25b0-45ef-bbd4-ed6c367b998b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "771aebc7-25b0-45ef-bbd4-ed6c367b998b" (UID: "771aebc7-25b0-45ef-bbd4-ed6c367b998b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.561676 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/771aebc7-25b0-45ef-bbd4-ed6c367b998b-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "771aebc7-25b0-45ef-bbd4-ed6c367b998b" (UID: "771aebc7-25b0-45ef-bbd4-ed6c367b998b"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.565911 5016 generic.go:334] "Generic (PLEG): container finished" podID="771aebc7-25b0-45ef-bbd4-ed6c367b998b" containerID="88a3fa4b2d02d92df9c385aedaee1f94d3f98a5a70e070da060f674d8839c7cf" exitCode=137 Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.565971 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-df49866-g5nkl" Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.566020 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-df49866-g5nkl" event={"ID":"771aebc7-25b0-45ef-bbd4-ed6c367b998b","Type":"ContainerDied","Data":"88a3fa4b2d02d92df9c385aedaee1f94d3f98a5a70e070da060f674d8839c7cf"} Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.566109 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-df49866-g5nkl" event={"ID":"771aebc7-25b0-45ef-bbd4-ed6c367b998b","Type":"ContainerDied","Data":"c326779a89292a9eae3d03eff337c85b93a2a045888e097239d42014506ce0fd"} Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.566165 5016 scope.go:117] "RemoveContainer" containerID="0fcda72eca430ed7251bbba4b88112df6de9debb48a9a93bd9030d94bd84e9f6" Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.571556 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9d45776-4422-4a0a-a656-7147b88f6f9b","Type":"ContainerStarted","Data":"56b155546da452b1e84ae072a44fbbcb51e82d33262fb3a8f1d1213fe13c8de4"} Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.598538 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zrhr\" (UniqueName: \"kubernetes.io/projected/771aebc7-25b0-45ef-bbd4-ed6c367b998b-kube-api-access-6zrhr\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.598867 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/771aebc7-25b0-45ef-bbd4-ed6c367b998b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.598877 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/771aebc7-25b0-45ef-bbd4-ed6c367b998b-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.598887 5016 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/771aebc7-25b0-45ef-bbd4-ed6c367b998b-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.598896 5016 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/771aebc7-25b0-45ef-bbd4-ed6c367b998b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.598905 5016 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/771aebc7-25b0-45ef-bbd4-ed6c367b998b-scripts\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.707900 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-df49866-g5nkl"] Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.715377 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-df49866-g5nkl"] Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.852181 5016 scope.go:117] "RemoveContainer" containerID="88a3fa4b2d02d92df9c385aedaee1f94d3f98a5a70e070da060f674d8839c7cf" Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.919163 5016 scope.go:117] "RemoveContainer" containerID="0fcda72eca430ed7251bbba4b88112df6de9debb48a9a93bd9030d94bd84e9f6" Oct 11 07:57:36 crc kubenswrapper[5016]: E1011 07:57:36.919684 5016 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0fcda72eca430ed7251bbba4b88112df6de9debb48a9a93bd9030d94bd84e9f6\": container with ID starting with 0fcda72eca430ed7251bbba4b88112df6de9debb48a9a93bd9030d94bd84e9f6 not found: ID does not exist" containerID="0fcda72eca430ed7251bbba4b88112df6de9debb48a9a93bd9030d94bd84e9f6" Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.919735 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fcda72eca430ed7251bbba4b88112df6de9debb48a9a93bd9030d94bd84e9f6"} err="failed to get container status \"0fcda72eca430ed7251bbba4b88112df6de9debb48a9a93bd9030d94bd84e9f6\": rpc error: code = NotFound desc = could not find container \"0fcda72eca430ed7251bbba4b88112df6de9debb48a9a93bd9030d94bd84e9f6\": container with ID starting with 0fcda72eca430ed7251bbba4b88112df6de9debb48a9a93bd9030d94bd84e9f6 not found: ID does not exist" Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.919771 5016 scope.go:117] "RemoveContainer" containerID="88a3fa4b2d02d92df9c385aedaee1f94d3f98a5a70e070da060f674d8839c7cf" Oct 11 07:57:36 crc kubenswrapper[5016]: E1011 07:57:36.925238 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88a3fa4b2d02d92df9c385aedaee1f94d3f98a5a70e070da060f674d8839c7cf\": container with ID starting with 88a3fa4b2d02d92df9c385aedaee1f94d3f98a5a70e070da060f674d8839c7cf not found: ID does not exist" containerID="88a3fa4b2d02d92df9c385aedaee1f94d3f98a5a70e070da060f674d8839c7cf" Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.925292 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88a3fa4b2d02d92df9c385aedaee1f94d3f98a5a70e070da060f674d8839c7cf"} err="failed to get container status \"88a3fa4b2d02d92df9c385aedaee1f94d3f98a5a70e070da060f674d8839c7cf\": rpc error: code = NotFound desc = could not find container \"88a3fa4b2d02d92df9c385aedaee1f94d3f98a5a70e070da060f674d8839c7cf\": container with ID starting with 88a3fa4b2d02d92df9c385aedaee1f94d3f98a5a70e070da060f674d8839c7cf not found: ID does not exist" Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.943720 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6797-account-create-5xbpq" Oct 11 07:57:36 crc kubenswrapper[5016]: I1011 07:57:36.992728 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3e52-account-create-5w9qv" Oct 11 07:57:37 crc kubenswrapper[5016]: I1011 07:57:37.009188 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7mxs\" (UniqueName: \"kubernetes.io/projected/f75907a7-0421-4c6e-8cf9-d196d3c8c0e6-kube-api-access-g7mxs\") pod \"f75907a7-0421-4c6e-8cf9-d196d3c8c0e6\" (UID: \"f75907a7-0421-4c6e-8cf9-d196d3c8c0e6\") " Oct 11 07:57:37 crc kubenswrapper[5016]: I1011 07:57:37.013999 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f75907a7-0421-4c6e-8cf9-d196d3c8c0e6-kube-api-access-g7mxs" (OuterVolumeSpecName: "kube-api-access-g7mxs") pod "f75907a7-0421-4c6e-8cf9-d196d3c8c0e6" (UID: "f75907a7-0421-4c6e-8cf9-d196d3c8c0e6"). InnerVolumeSpecName "kube-api-access-g7mxs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:57:37 crc kubenswrapper[5016]: I1011 07:57:37.110453 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfhb4\" (UniqueName: \"kubernetes.io/projected/a83ef869-3d57-4f37-aba9-d279183b0413-kube-api-access-jfhb4\") pod \"a83ef869-3d57-4f37-aba9-d279183b0413\" (UID: \"a83ef869-3d57-4f37-aba9-d279183b0413\") " Oct 11 07:57:37 crc kubenswrapper[5016]: I1011 07:57:37.111621 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7mxs\" (UniqueName: \"kubernetes.io/projected/f75907a7-0421-4c6e-8cf9-d196d3c8c0e6-kube-api-access-g7mxs\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:37 crc kubenswrapper[5016]: I1011 07:57:37.113813 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a83ef869-3d57-4f37-aba9-d279183b0413-kube-api-access-jfhb4" (OuterVolumeSpecName: "kube-api-access-jfhb4") pod "a83ef869-3d57-4f37-aba9-d279183b0413" (UID: "a83ef869-3d57-4f37-aba9-d279183b0413"). InnerVolumeSpecName "kube-api-access-jfhb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:57:37 crc kubenswrapper[5016]: I1011 07:57:37.122098 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 07:57:37 crc kubenswrapper[5016]: I1011 07:57:37.122162 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 07:57:37 crc kubenswrapper[5016]: I1011 07:57:37.145022 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="771aebc7-25b0-45ef-bbd4-ed6c367b998b" path="/var/lib/kubelet/pods/771aebc7-25b0-45ef-bbd4-ed6c367b998b/volumes" Oct 11 07:57:37 crc kubenswrapper[5016]: I1011 07:57:37.213777 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfhb4\" (UniqueName: \"kubernetes.io/projected/a83ef869-3d57-4f37-aba9-d279183b0413-kube-api-access-jfhb4\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:37 crc kubenswrapper[5016]: I1011 07:57:37.582006 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9d45776-4422-4a0a-a656-7147b88f6f9b","Type":"ContainerStarted","Data":"c297ee4e4733b17257389ff9a17848964a2f7fffd898231d8d6aec2eeeb5ecc0"} Oct 11 07:57:37 crc kubenswrapper[5016]: I1011 07:57:37.586228 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3e52-account-create-5w9qv" event={"ID":"a83ef869-3d57-4f37-aba9-d279183b0413","Type":"ContainerDied","Data":"2e84411b4bf781a7807c87e43c8012cc6ec12917fe0f870ab7952b0ca5b3d5f0"} Oct 11 07:57:37 crc kubenswrapper[5016]: I1011 07:57:37.586275 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e84411b4bf781a7807c87e43c8012cc6ec12917fe0f870ab7952b0ca5b3d5f0" Oct 11 07:57:37 crc kubenswrapper[5016]: I1011 07:57:37.586357 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-3e52-account-create-5w9qv" Oct 11 07:57:37 crc kubenswrapper[5016]: I1011 07:57:37.589465 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6797-account-create-5xbpq" event={"ID":"f75907a7-0421-4c6e-8cf9-d196d3c8c0e6","Type":"ContainerDied","Data":"459b739b3c5de820dda1dc89f15c0391ad782107e353523f4a595f1c95af8149"} Oct 11 07:57:37 crc kubenswrapper[5016]: I1011 07:57:37.589503 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="459b739b3c5de820dda1dc89f15c0391ad782107e353523f4a595f1c95af8149" Oct 11 07:57:37 crc kubenswrapper[5016]: I1011 07:57:37.589590 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6797-account-create-5xbpq" Oct 11 07:57:38 crc kubenswrapper[5016]: I1011 07:57:38.620444 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9d45776-4422-4a0a-a656-7147b88f6f9b","Type":"ContainerStarted","Data":"cf7e26da527b51469017259831146a66c5c825597b5aaccfd4d55b2a4cabd501"} Oct 11 07:57:38 crc kubenswrapper[5016]: I1011 07:57:38.620932 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 11 07:57:38 crc kubenswrapper[5016]: I1011 07:57:38.656602 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.146696051 podStartE2EDuration="5.656581447s" podCreationTimestamp="2025-10-11 07:57:33 +0000 UTC" firstStartedPulling="2025-10-11 07:57:34.390486329 +0000 UTC m=+1042.290942275" lastFinishedPulling="2025-10-11 07:57:37.900371705 +0000 UTC m=+1045.800827671" observedRunningTime="2025-10-11 07:57:38.650329225 +0000 UTC m=+1046.550785191" watchObservedRunningTime="2025-10-11 07:57:38.656581447 +0000 UTC m=+1046.557037393" Oct 11 07:57:39 crc kubenswrapper[5016]: I1011 07:57:39.300253 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2mzxx"] Oct 11 07:57:39 crc kubenswrapper[5016]: E1011 07:57:39.300587 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="771aebc7-25b0-45ef-bbd4-ed6c367b998b" containerName="horizon" Oct 11 07:57:39 crc kubenswrapper[5016]: I1011 07:57:39.300598 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="771aebc7-25b0-45ef-bbd4-ed6c367b998b" containerName="horizon" Oct 11 07:57:39 crc kubenswrapper[5016]: E1011 07:57:39.300613 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="771aebc7-25b0-45ef-bbd4-ed6c367b998b" containerName="horizon-log" Oct 11 07:57:39 crc kubenswrapper[5016]: I1011 07:57:39.300619 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="771aebc7-25b0-45ef-bbd4-ed6c367b998b" containerName="horizon-log" Oct 11 07:57:39 crc kubenswrapper[5016]: E1011 07:57:39.300634 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a83ef869-3d57-4f37-aba9-d279183b0413" containerName="mariadb-account-create" Oct 11 07:57:39 crc kubenswrapper[5016]: I1011 07:57:39.300640 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="a83ef869-3d57-4f37-aba9-d279183b0413" containerName="mariadb-account-create" Oct 11 07:57:39 crc kubenswrapper[5016]: E1011 07:57:39.300675 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f75907a7-0421-4c6e-8cf9-d196d3c8c0e6" containerName="mariadb-account-create" Oct 11 07:57:39 crc kubenswrapper[5016]: I1011 07:57:39.300681 5016 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f75907a7-0421-4c6e-8cf9-d196d3c8c0e6" containerName="mariadb-account-create" Oct 11 07:57:39 crc kubenswrapper[5016]: I1011 07:57:39.300835 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="a83ef869-3d57-4f37-aba9-d279183b0413" containerName="mariadb-account-create" Oct 11 07:57:39 crc kubenswrapper[5016]: I1011 07:57:39.300849 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="f75907a7-0421-4c6e-8cf9-d196d3c8c0e6" containerName="mariadb-account-create" Oct 11 07:57:39 crc kubenswrapper[5016]: I1011 07:57:39.300861 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="771aebc7-25b0-45ef-bbd4-ed6c367b998b" containerName="horizon" Oct 11 07:57:39 crc kubenswrapper[5016]: I1011 07:57:39.300872 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="771aebc7-25b0-45ef-bbd4-ed6c367b998b" containerName="horizon-log" Oct 11 07:57:39 crc kubenswrapper[5016]: I1011 07:57:39.301418 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2mzxx" Oct 11 07:57:39 crc kubenswrapper[5016]: I1011 07:57:39.304977 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Oct 11 07:57:39 crc kubenswrapper[5016]: I1011 07:57:39.305372 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-sv8xn" Oct 11 07:57:39 crc kubenswrapper[5016]: I1011 07:57:39.307206 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 11 07:57:39 crc kubenswrapper[5016]: I1011 07:57:39.315893 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2mzxx"] Oct 11 07:57:39 crc kubenswrapper[5016]: I1011 07:57:39.350529 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0b12ea-b33a-4421-bdd6-3bbbb2fca659-config-data\") pod \"nova-cell0-conductor-db-sync-2mzxx\" (UID: \"cd0b12ea-b33a-4421-bdd6-3bbbb2fca659\") " pod="openstack/nova-cell0-conductor-db-sync-2mzxx" Oct 11 07:57:39 crc kubenswrapper[5016]: I1011 07:57:39.350624 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0b12ea-b33a-4421-bdd6-3bbbb2fca659-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2mzxx\" (UID: \"cd0b12ea-b33a-4421-bdd6-3bbbb2fca659\") " pod="openstack/nova-cell0-conductor-db-sync-2mzxx" Oct 11 07:57:39 crc kubenswrapper[5016]: I1011 07:57:39.350695 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0b12ea-b33a-4421-bdd6-3bbbb2fca659-scripts\") pod \"nova-cell0-conductor-db-sync-2mzxx\" (UID: \"cd0b12ea-b33a-4421-bdd6-3bbbb2fca659\") " pod="openstack/nova-cell0-conductor-db-sync-2mzxx" Oct 11 07:57:39 crc kubenswrapper[5016]: I1011 07:57:39.350724 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8rkj\" (UniqueName: \"kubernetes.io/projected/cd0b12ea-b33a-4421-bdd6-3bbbb2fca659-kube-api-access-g8rkj\") pod \"nova-cell0-conductor-db-sync-2mzxx\" (UID: \"cd0b12ea-b33a-4421-bdd6-3bbbb2fca659\") " pod="openstack/nova-cell0-conductor-db-sync-2mzxx" Oct 11 07:57:39 crc kubenswrapper[5016]: I1011 07:57:39.451975 5016 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0b12ea-b33a-4421-bdd6-3bbbb2fca659-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2mzxx\" (UID: \"cd0b12ea-b33a-4421-bdd6-3bbbb2fca659\") " pod="openstack/nova-cell0-conductor-db-sync-2mzxx" Oct 11 07:57:39 crc kubenswrapper[5016]: I1011 07:57:39.452047 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0b12ea-b33a-4421-bdd6-3bbbb2fca659-scripts\") pod \"nova-cell0-conductor-db-sync-2mzxx\" (UID: \"cd0b12ea-b33a-4421-bdd6-3bbbb2fca659\") " pod="openstack/nova-cell0-conductor-db-sync-2mzxx" Oct 11 07:57:39 crc kubenswrapper[5016]: I1011 07:57:39.452077 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8rkj\" (UniqueName: \"kubernetes.io/projected/cd0b12ea-b33a-4421-bdd6-3bbbb2fca659-kube-api-access-g8rkj\") pod \"nova-cell0-conductor-db-sync-2mzxx\" (UID: \"cd0b12ea-b33a-4421-bdd6-3bbbb2fca659\") " pod="openstack/nova-cell0-conductor-db-sync-2mzxx" Oct 11 07:57:39 crc kubenswrapper[5016]: I1011 07:57:39.452148 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0b12ea-b33a-4421-bdd6-3bbbb2fca659-config-data\") pod \"nova-cell0-conductor-db-sync-2mzxx\" (UID: \"cd0b12ea-b33a-4421-bdd6-3bbbb2fca659\") " pod="openstack/nova-cell0-conductor-db-sync-2mzxx" Oct 11 07:57:39 crc kubenswrapper[5016]: I1011 07:57:39.459407 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0b12ea-b33a-4421-bdd6-3bbbb2fca659-scripts\") pod \"nova-cell0-conductor-db-sync-2mzxx\" (UID: \"cd0b12ea-b33a-4421-bdd6-3bbbb2fca659\") " pod="openstack/nova-cell0-conductor-db-sync-2mzxx" Oct 11 07:57:39 crc kubenswrapper[5016]: I1011 07:57:39.459413 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0b12ea-b33a-4421-bdd6-3bbbb2fca659-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-2mzxx\" (UID: \"cd0b12ea-b33a-4421-bdd6-3bbbb2fca659\") " pod="openstack/nova-cell0-conductor-db-sync-2mzxx" Oct 11 07:57:39 crc kubenswrapper[5016]: I1011 07:57:39.461313 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0b12ea-b33a-4421-bdd6-3bbbb2fca659-config-data\") pod \"nova-cell0-conductor-db-sync-2mzxx\" (UID: \"cd0b12ea-b33a-4421-bdd6-3bbbb2fca659\") " pod="openstack/nova-cell0-conductor-db-sync-2mzxx" Oct 11 07:57:39 crc kubenswrapper[5016]: I1011 07:57:39.467212 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8rkj\" (UniqueName: \"kubernetes.io/projected/cd0b12ea-b33a-4421-bdd6-3bbbb2fca659-kube-api-access-g8rkj\") pod \"nova-cell0-conductor-db-sync-2mzxx\" (UID: \"cd0b12ea-b33a-4421-bdd6-3bbbb2fca659\") " pod="openstack/nova-cell0-conductor-db-sync-2mzxx" Oct 11 07:57:39 crc kubenswrapper[5016]: I1011 07:57:39.619597 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-2mzxx" Oct 11 07:57:40 crc kubenswrapper[5016]: I1011 07:57:40.136315 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2mzxx"] Oct 11 07:57:40 crc kubenswrapper[5016]: I1011 07:57:40.665397 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2mzxx" event={"ID":"cd0b12ea-b33a-4421-bdd6-3bbbb2fca659","Type":"ContainerStarted","Data":"e98661910afdfa3e40d897a4c30aa07b6a9ab5fc83290557cf6f2b64d0749059"} Oct 11 07:57:42 crc kubenswrapper[5016]: I1011 07:57:42.229949 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 11 07:57:42 crc kubenswrapper[5016]: I1011 07:57:42.230684 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9d45776-4422-4a0a-a656-7147b88f6f9b" containerName="ceilometer-central-agent" containerID="cri-o://83b1335f8c3efb1871920b42132e7e956407c33e0ecda01c5c9f41ee83c9998b" gracePeriod=30 Oct 11 07:57:42 crc kubenswrapper[5016]: I1011 07:57:42.230750 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9d45776-4422-4a0a-a656-7147b88f6f9b" containerName="sg-core" containerID="cri-o://c297ee4e4733b17257389ff9a17848964a2f7fffd898231d8d6aec2eeeb5ecc0" gracePeriod=30 Oct 11 07:57:42 crc kubenswrapper[5016]: I1011 07:57:42.230786 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9d45776-4422-4a0a-a656-7147b88f6f9b" containerName="proxy-httpd" containerID="cri-o://cf7e26da527b51469017259831146a66c5c825597b5aaccfd4d55b2a4cabd501" gracePeriod=30 Oct 11 07:57:42 crc kubenswrapper[5016]: I1011 07:57:42.230861 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f9d45776-4422-4a0a-a656-7147b88f6f9b" containerName="ceilometer-notification-agent" containerID="cri-o://56b155546da452b1e84ae072a44fbbcb51e82d33262fb3a8f1d1213fe13c8de4" gracePeriod=30 Oct 11 07:57:42 crc kubenswrapper[5016]: I1011 07:57:42.686318 5016 generic.go:334] "Generic (PLEG): container finished" podID="f9d45776-4422-4a0a-a656-7147b88f6f9b" containerID="cf7e26da527b51469017259831146a66c5c825597b5aaccfd4d55b2a4cabd501" exitCode=0 Oct 11 07:57:42 crc kubenswrapper[5016]: I1011 07:57:42.686632 5016 generic.go:334] "Generic (PLEG): container finished" podID="f9d45776-4422-4a0a-a656-7147b88f6f9b" containerID="c297ee4e4733b17257389ff9a17848964a2f7fffd898231d8d6aec2eeeb5ecc0" exitCode=2 Oct 11 07:57:42 crc kubenswrapper[5016]: I1011 07:57:42.686642 5016 generic.go:334] "Generic (PLEG): container finished" podID="f9d45776-4422-4a0a-a656-7147b88f6f9b" containerID="56b155546da452b1e84ae072a44fbbcb51e82d33262fb3a8f1d1213fe13c8de4" exitCode=0 Oct 11 07:57:42 crc kubenswrapper[5016]: I1011 07:57:42.686667 5016 generic.go:334] "Generic (PLEG): container finished" podID="f9d45776-4422-4a0a-a656-7147b88f6f9b" containerID="83b1335f8c3efb1871920b42132e7e956407c33e0ecda01c5c9f41ee83c9998b" exitCode=0 Oct 11 07:57:42 crc kubenswrapper[5016]: I1011 07:57:42.686401 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9d45776-4422-4a0a-a656-7147b88f6f9b","Type":"ContainerDied","Data":"cf7e26da527b51469017259831146a66c5c825597b5aaccfd4d55b2a4cabd501"} Oct 11 07:57:42 crc kubenswrapper[5016]: I1011 07:57:42.686724 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"f9d45776-4422-4a0a-a656-7147b88f6f9b","Type":"ContainerDied","Data":"c297ee4e4733b17257389ff9a17848964a2f7fffd898231d8d6aec2eeeb5ecc0"} Oct 11 07:57:42 crc kubenswrapper[5016]: I1011 07:57:42.686743 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9d45776-4422-4a0a-a656-7147b88f6f9b","Type":"ContainerDied","Data":"56b155546da452b1e84ae072a44fbbcb51e82d33262fb3a8f1d1213fe13c8de4"} Oct 11 07:57:42 crc kubenswrapper[5016]: I1011 07:57:42.686756 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9d45776-4422-4a0a-a656-7147b88f6f9b","Type":"ContainerDied","Data":"83b1335f8c3efb1871920b42132e7e956407c33e0ecda01c5c9f41ee83c9998b"} Oct 11 07:57:42 crc kubenswrapper[5016]: I1011 07:57:42.944107 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 11 07:57:44 crc kubenswrapper[5016]: I1011 07:57:44.222161 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-5844-account-create-ln2pj"] Oct 11 07:57:44 crc kubenswrapper[5016]: I1011 07:57:44.237838 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5844-account-create-ln2pj" Oct 11 07:57:44 crc kubenswrapper[5016]: I1011 07:57:44.241294 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Oct 11 07:57:44 crc kubenswrapper[5016]: I1011 07:57:44.243886 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5844-account-create-ln2pj"] Oct 11 07:57:44 crc kubenswrapper[5016]: I1011 07:57:44.347960 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j86x8\" (UniqueName: \"kubernetes.io/projected/c4a1cba4-e094-4311-94b0-f18b957124f0-kube-api-access-j86x8\") pod \"nova-cell1-5844-account-create-ln2pj\" (UID: \"c4a1cba4-e094-4311-94b0-f18b957124f0\") " pod="openstack/nova-cell1-5844-account-create-ln2pj" Oct 11 07:57:44 crc kubenswrapper[5016]: I1011 07:57:44.449250 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j86x8\" (UniqueName: \"kubernetes.io/projected/c4a1cba4-e094-4311-94b0-f18b957124f0-kube-api-access-j86x8\") pod \"nova-cell1-5844-account-create-ln2pj\" (UID: \"c4a1cba4-e094-4311-94b0-f18b957124f0\") " pod="openstack/nova-cell1-5844-account-create-ln2pj" Oct 11 07:57:44 crc kubenswrapper[5016]: I1011 07:57:44.467630 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j86x8\" (UniqueName: \"kubernetes.io/projected/c4a1cba4-e094-4311-94b0-f18b957124f0-kube-api-access-j86x8\") pod \"nova-cell1-5844-account-create-ln2pj\" (UID: \"c4a1cba4-e094-4311-94b0-f18b957124f0\") " pod="openstack/nova-cell1-5844-account-create-ln2pj" Oct 11 07:57:44 crc kubenswrapper[5016]: I1011 07:57:44.561727 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5844-account-create-ln2pj" Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.543988 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.600335 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-config-data\") pod \"f9d45776-4422-4a0a-a656-7147b88f6f9b\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.600430 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-ceilometer-tls-certs\") pod \"f9d45776-4422-4a0a-a656-7147b88f6f9b\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.600478 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-combined-ca-bundle\") pod \"f9d45776-4422-4a0a-a656-7147b88f6f9b\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.600533 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9d45776-4422-4a0a-a656-7147b88f6f9b-run-httpd\") pod \"f9d45776-4422-4a0a-a656-7147b88f6f9b\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.600610 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9d45776-4422-4a0a-a656-7147b88f6f9b-log-httpd\") pod \"f9d45776-4422-4a0a-a656-7147b88f6f9b\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.600634 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-scripts\") pod \"f9d45776-4422-4a0a-a656-7147b88f6f9b\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.600866 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhckl\" (UniqueName: \"kubernetes.io/projected/f9d45776-4422-4a0a-a656-7147b88f6f9b-kube-api-access-qhckl\") pod \"f9d45776-4422-4a0a-a656-7147b88f6f9b\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.600931 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-sg-core-conf-yaml\") pod \"f9d45776-4422-4a0a-a656-7147b88f6f9b\" (UID: \"f9d45776-4422-4a0a-a656-7147b88f6f9b\") " Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.601324 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9d45776-4422-4a0a-a656-7147b88f6f9b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f9d45776-4422-4a0a-a656-7147b88f6f9b" (UID: "f9d45776-4422-4a0a-a656-7147b88f6f9b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.601632 5016 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9d45776-4422-4a0a-a656-7147b88f6f9b-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.602033 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9d45776-4422-4a0a-a656-7147b88f6f9b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f9d45776-4422-4a0a-a656-7147b88f6f9b" (UID: "f9d45776-4422-4a0a-a656-7147b88f6f9b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.607807 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9d45776-4422-4a0a-a656-7147b88f6f9b-kube-api-access-qhckl" (OuterVolumeSpecName: "kube-api-access-qhckl") pod "f9d45776-4422-4a0a-a656-7147b88f6f9b" (UID: "f9d45776-4422-4a0a-a656-7147b88f6f9b"). InnerVolumeSpecName "kube-api-access-qhckl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.625418 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-scripts" (OuterVolumeSpecName: "scripts") pod "f9d45776-4422-4a0a-a656-7147b88f6f9b" (UID: "f9d45776-4422-4a0a-a656-7147b88f6f9b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.653784 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f9d45776-4422-4a0a-a656-7147b88f6f9b" (UID: "f9d45776-4422-4a0a-a656-7147b88f6f9b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.653830 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f9d45776-4422-4a0a-a656-7147b88f6f9b" (UID: "f9d45776-4422-4a0a-a656-7147b88f6f9b"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.703331 5016 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.703361 5016 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-scripts\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.703386 5016 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f9d45776-4422-4a0a-a656-7147b88f6f9b-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.703397 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhckl\" (UniqueName: \"kubernetes.io/projected/f9d45776-4422-4a0a-a656-7147b88f6f9b-kube-api-access-qhckl\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.703406 5016 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.707181 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-config-data" (OuterVolumeSpecName: "config-data") pod "f9d45776-4422-4a0a-a656-7147b88f6f9b" (UID: "f9d45776-4422-4a0a-a656-7147b88f6f9b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.715753 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9d45776-4422-4a0a-a656-7147b88f6f9b" (UID: "f9d45776-4422-4a0a-a656-7147b88f6f9b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.729441 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2mzxx" event={"ID":"cd0b12ea-b33a-4421-bdd6-3bbbb2fca659","Type":"ContainerStarted","Data":"e2974999130b71d3392ad2c3365d847098bf0c0773bc18d93f74c30604011c95"} Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.734904 5016 util.go:48] "No ready sandbox for pod can be found. 
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.734778 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f9d45776-4422-4a0a-a656-7147b88f6f9b","Type":"ContainerDied","Data":"881ab1e352f42927d04a8aff7d4c9a71c67674e35b08acdf304f0e22582ff880"}
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.735014 5016 scope.go:117] "RemoveContainer" containerID="cf7e26da527b51469017259831146a66c5c825597b5aaccfd4d55b2a4cabd501"
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.767515 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-2mzxx" podStartSLOduration=1.515745243 podStartE2EDuration="8.767487218s" podCreationTimestamp="2025-10-11 07:57:39 +0000 UTC" firstStartedPulling="2025-10-11 07:57:40.145744461 +0000 UTC m=+1048.046200397" lastFinishedPulling="2025-10-11 07:57:47.397486426 +0000 UTC m=+1055.297942372" observedRunningTime="2025-10-11 07:57:47.746973598 +0000 UTC m=+1055.647429574" watchObservedRunningTime="2025-10-11 07:57:47.767487218 +0000 UTC m=+1055.667943164"
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.772911 5016 scope.go:117] "RemoveContainer" containerID="c297ee4e4733b17257389ff9a17848964a2f7fffd898231d8d6aec2eeeb5ecc0"
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.777018 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.788890 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.802737 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Oct 11 07:57:47 crc kubenswrapper[5016]: E1011 07:57:47.803196 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9d45776-4422-4a0a-a656-7147b88f6f9b" containerName="sg-core"
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.803221 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9d45776-4422-4a0a-a656-7147b88f6f9b" containerName="sg-core"
Oct 11 07:57:47 crc kubenswrapper[5016]: E1011 07:57:47.803244 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9d45776-4422-4a0a-a656-7147b88f6f9b" containerName="ceilometer-notification-agent"
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.803253 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9d45776-4422-4a0a-a656-7147b88f6f9b" containerName="ceilometer-notification-agent"
Oct 11 07:57:47 crc kubenswrapper[5016]: E1011 07:57:47.803288 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9d45776-4422-4a0a-a656-7147b88f6f9b" containerName="ceilometer-central-agent"
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.803297 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9d45776-4422-4a0a-a656-7147b88f6f9b" containerName="ceilometer-central-agent"
Oct 11 07:57:47 crc kubenswrapper[5016]: E1011 07:57:47.803312 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9d45776-4422-4a0a-a656-7147b88f6f9b" containerName="proxy-httpd"
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.803320 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9d45776-4422-4a0a-a656-7147b88f6f9b" containerName="proxy-httpd"
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.803529 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9d45776-4422-4a0a-a656-7147b88f6f9b" containerName="proxy-httpd"
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.803559 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9d45776-4422-4a0a-a656-7147b88f6f9b" containerName="sg-core"
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.803571 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9d45776-4422-4a0a-a656-7147b88f6f9b" containerName="ceilometer-central-agent"
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.803587 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9d45776-4422-4a0a-a656-7147b88f6f9b" containerName="ceilometer-notification-agent"
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.806063 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.813983 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.814018 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9d45776-4422-4a0a-a656-7147b88f6f9b-config-data\") on node \"crc\" DevicePath \"\""
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.815365 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.815468 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.817953 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.818130 5016 scope.go:117] "RemoveContainer" containerID="56b155546da452b1e84ae072a44fbbcb51e82d33262fb3a8f1d1213fe13c8de4"
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.841949 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.850834 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5844-account-create-ln2pj"]
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.850915 5016 scope.go:117] "RemoveContainer" containerID="83b1335f8c3efb1871920b42132e7e956407c33e0ecda01c5c9f41ee83c9998b"
Oct 11 07:57:47 crc kubenswrapper[5016]: W1011 07:57:47.857632 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4a1cba4_e094_4311_94b0_f18b957124f0.slice/crio-e610f6455fe1d4f1228426160adf02ced5f6586d09c06f3b6998b42d57438eb5 WatchSource:0}: Error finding container e610f6455fe1d4f1228426160adf02ced5f6586d09c06f3b6998b42d57438eb5: Status 404 returned error can't find the container with id e610f6455fe1d4f1228426160adf02ced5f6586d09c06f3b6998b42d57438eb5
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.915346 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") " pod="openstack/ceilometer-0"
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.915570 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") " pod="openstack/ceilometer-0"
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.915729 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1eacac67-0843-440c-ae51-210fc298084c-log-httpd\") pod \"ceilometer-0\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") " pod="openstack/ceilometer-0"
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.915781 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") " pod="openstack/ceilometer-0"
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.915807 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-config-data\") pod \"ceilometer-0\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") " pod="openstack/ceilometer-0"
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.915879 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-scripts\") pod \"ceilometer-0\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") " pod="openstack/ceilometer-0"
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.916118 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncxnn\" (UniqueName: \"kubernetes.io/projected/1eacac67-0843-440c-ae51-210fc298084c-kube-api-access-ncxnn\") pod \"ceilometer-0\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") " pod="openstack/ceilometer-0"
Oct 11 07:57:47 crc kubenswrapper[5016]: I1011 07:57:47.916276 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1eacac67-0843-440c-ae51-210fc298084c-run-httpd\") pod \"ceilometer-0\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") " pod="openstack/ceilometer-0"
Oct 11 07:57:48 crc kubenswrapper[5016]: I1011 07:57:48.018297 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") " pod="openstack/ceilometer-0"
Oct 11 07:57:48 crc kubenswrapper[5016]: I1011 07:57:48.018372 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1eacac67-0843-440c-ae51-210fc298084c-log-httpd\") pod \"ceilometer-0\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") " pod="openstack/ceilometer-0"
Oct 11 07:57:48 crc kubenswrapper[5016]: I1011 07:57:48.018426 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") " pod="openstack/ceilometer-0"
Oct 11 07:57:48 crc kubenswrapper[5016]: I1011 07:57:48.018450 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-config-data\") pod \"ceilometer-0\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") " pod="openstack/ceilometer-0"
Oct 11 07:57:48 crc kubenswrapper[5016]: I1011 07:57:48.018489 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-scripts\") pod \"ceilometer-0\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") " pod="openstack/ceilometer-0"
Oct 11 07:57:48 crc kubenswrapper[5016]: I1011 07:57:48.018633 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncxnn\" (UniqueName: \"kubernetes.io/projected/1eacac67-0843-440c-ae51-210fc298084c-kube-api-access-ncxnn\") pod \"ceilometer-0\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") " pod="openstack/ceilometer-0"
Oct 11 07:57:48 crc kubenswrapper[5016]: I1011 07:57:48.018755 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1eacac67-0843-440c-ae51-210fc298084c-run-httpd\") pod \"ceilometer-0\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") " pod="openstack/ceilometer-0"
Oct 11 07:57:48 crc kubenswrapper[5016]: I1011 07:57:48.018804 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") " pod="openstack/ceilometer-0"
Oct 11 07:57:48 crc kubenswrapper[5016]: I1011 07:57:48.019303 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1eacac67-0843-440c-ae51-210fc298084c-run-httpd\") pod \"ceilometer-0\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") " pod="openstack/ceilometer-0"
Oct 11 07:57:48 crc kubenswrapper[5016]: I1011 07:57:48.019443 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1eacac67-0843-440c-ae51-210fc298084c-log-httpd\") pod \"ceilometer-0\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") " pod="openstack/ceilometer-0"
Oct 11 07:57:48 crc kubenswrapper[5016]: I1011 07:57:48.022380 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-scripts\") pod \"ceilometer-0\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") " pod="openstack/ceilometer-0"
Oct 11 07:57:48 crc kubenswrapper[5016]: I1011 07:57:48.022390 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-config-data\") pod \"ceilometer-0\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") " pod="openstack/ceilometer-0"
Oct 11 07:57:48 crc kubenswrapper[5016]: I1011 07:57:48.022780 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") " pod="openstack/ceilometer-0"
Oct 11 07:57:48 crc kubenswrapper[5016]: I1011 07:57:48.022780 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") " pod="openstack/ceilometer-0"
Oct 11 07:57:48 crc kubenswrapper[5016]: I1011 07:57:48.023304 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") " pod="openstack/ceilometer-0"
Oct 11 07:57:48 crc kubenswrapper[5016]: I1011 07:57:48.038226 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncxnn\" (UniqueName: \"kubernetes.io/projected/1eacac67-0843-440c-ae51-210fc298084c-kube-api-access-ncxnn\") pod \"ceilometer-0\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") " pod="openstack/ceilometer-0"
Oct 11 07:57:48 crc kubenswrapper[5016]: I1011 07:57:48.130111 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 11 07:57:48 crc kubenswrapper[5016]: I1011 07:57:48.661009 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 11 07:57:48 crc kubenswrapper[5016]: W1011 07:57:48.676254 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1eacac67_0843_440c_ae51_210fc298084c.slice/crio-88eeed5d8104dd92ddf60db5ce6005c793cfc29f44cfa049184ec499e503ea83 WatchSource:0}: Error finding container 88eeed5d8104dd92ddf60db5ce6005c793cfc29f44cfa049184ec499e503ea83: Status 404 returned error can't find the container with id 88eeed5d8104dd92ddf60db5ce6005c793cfc29f44cfa049184ec499e503ea83
Oct 11 07:57:48 crc kubenswrapper[5016]: I1011 07:57:48.742854 5016 generic.go:334] "Generic (PLEG): container finished" podID="c4a1cba4-e094-4311-94b0-f18b957124f0" containerID="9269ccc4c0fbaaff91a9bf30300f9335bf9b34e81ec30991cfd0af010a5dbab9" exitCode=0
Oct 11 07:57:48 crc kubenswrapper[5016]: I1011 07:57:48.742922 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5844-account-create-ln2pj" event={"ID":"c4a1cba4-e094-4311-94b0-f18b957124f0","Type":"ContainerDied","Data":"9269ccc4c0fbaaff91a9bf30300f9335bf9b34e81ec30991cfd0af010a5dbab9"}
Oct 11 07:57:48 crc kubenswrapper[5016]: I1011 07:57:48.742948 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5844-account-create-ln2pj" event={"ID":"c4a1cba4-e094-4311-94b0-f18b957124f0","Type":"ContainerStarted","Data":"e610f6455fe1d4f1228426160adf02ced5f6586d09c06f3b6998b42d57438eb5"}
Oct 11 07:57:48 crc kubenswrapper[5016]: I1011 07:57:48.743832 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1eacac67-0843-440c-ae51-210fc298084c","Type":"ContainerStarted","Data":"88eeed5d8104dd92ddf60db5ce6005c793cfc29f44cfa049184ec499e503ea83"}
Oct 11 07:57:49 crc kubenswrapper[5016]: I1011 07:57:49.144565 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9d45776-4422-4a0a-a656-7147b88f6f9b" path="/var/lib/kubelet/pods/f9d45776-4422-4a0a-a656-7147b88f6f9b/volumes"
Oct 11 07:57:49 crc kubenswrapper[5016]: I1011 07:57:49.754225 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1eacac67-0843-440c-ae51-210fc298084c","Type":"ContainerStarted","Data":"9c7741c4ab45744d4a719c644c587506d143ee252679a82bf2d47df5bc52a5f8"}
Oct 11 07:57:50 crc kubenswrapper[5016]: I1011 07:57:50.087877 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5844-account-create-ln2pj"
Oct 11 07:57:50 crc kubenswrapper[5016]: I1011 07:57:50.160448 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j86x8\" (UniqueName: \"kubernetes.io/projected/c4a1cba4-e094-4311-94b0-f18b957124f0-kube-api-access-j86x8\") pod \"c4a1cba4-e094-4311-94b0-f18b957124f0\" (UID: \"c4a1cba4-e094-4311-94b0-f18b957124f0\") "
Oct 11 07:57:50 crc kubenswrapper[5016]: I1011 07:57:50.166082 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4a1cba4-e094-4311-94b0-f18b957124f0-kube-api-access-j86x8" (OuterVolumeSpecName: "kube-api-access-j86x8") pod "c4a1cba4-e094-4311-94b0-f18b957124f0" (UID: "c4a1cba4-e094-4311-94b0-f18b957124f0"). InnerVolumeSpecName "kube-api-access-j86x8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 07:57:50 crc kubenswrapper[5016]: I1011 07:57:50.262623 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j86x8\" (UniqueName: \"kubernetes.io/projected/c4a1cba4-e094-4311-94b0-f18b957124f0-kube-api-access-j86x8\") on node \"crc\" DevicePath \"\""
Oct 11 07:57:50 crc kubenswrapper[5016]: I1011 07:57:50.764746 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5844-account-create-ln2pj" event={"ID":"c4a1cba4-e094-4311-94b0-f18b957124f0","Type":"ContainerDied","Data":"e610f6455fe1d4f1228426160adf02ced5f6586d09c06f3b6998b42d57438eb5"}
Oct 11 07:57:50 crc kubenswrapper[5016]: I1011 07:57:50.764786 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e610f6455fe1d4f1228426160adf02ced5f6586d09c06f3b6998b42d57438eb5"
Oct 11 07:57:50 crc kubenswrapper[5016]: I1011 07:57:50.764851 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5844-account-create-ln2pj"
Oct 11 07:57:50 crc kubenswrapper[5016]: I1011 07:57:50.770115 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1eacac67-0843-440c-ae51-210fc298084c","Type":"ContainerStarted","Data":"fdf851ad23a02a93b245042bbe4fbca0e016177b9da6689924cdd2db0cc05455"}
Oct 11 07:57:54 crc kubenswrapper[5016]: I1011 07:57:54.817017 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1eacac67-0843-440c-ae51-210fc298084c","Type":"ContainerStarted","Data":"3364c58e69936f9dd87208e53e694a7b9ac970e9493db7801b2ff8d2a9edb4d6"}
Oct 11 07:57:56 crc kubenswrapper[5016]: I1011 07:57:56.835133 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1eacac67-0843-440c-ae51-210fc298084c","Type":"ContainerStarted","Data":"e669af405aefc3cfc3adcc83a32c4d6e61431b1350789f6d21b4ba4b71682b7b"}
Oct 11 07:57:56 crc kubenswrapper[5016]: I1011 07:57:56.835517 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Oct 11 07:57:56 crc kubenswrapper[5016]: I1011 07:57:56.869130 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.328061774 podStartE2EDuration="9.86910337s" podCreationTimestamp="2025-10-11 07:57:47 +0000 UTC" firstStartedPulling="2025-10-11 07:57:48.678749568 +0000 UTC m=+1056.579205514" lastFinishedPulling="2025-10-11 07:57:56.219791164 +0000 UTC m=+1064.120247110" observedRunningTime="2025-10-11 07:57:56.86214781 +0000 UTC m=+1064.762603776" watchObservedRunningTime="2025-10-11 07:57:56.86910337 +0000 UTC m=+1064.769559316"
Oct 11 07:57:56 crc kubenswrapper[5016]: I1011 07:57:56.897344 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 11 07:57:58 crc kubenswrapper[5016]: I1011 07:57:58.859008 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1eacac67-0843-440c-ae51-210fc298084c" containerName="ceilometer-central-agent" containerID="cri-o://9c7741c4ab45744d4a719c644c587506d143ee252679a82bf2d47df5bc52a5f8" gracePeriod=30
Oct 11 07:57:58 crc kubenswrapper[5016]: I1011 07:57:58.859042 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1eacac67-0843-440c-ae51-210fc298084c" containerName="proxy-httpd" containerID="cri-o://e669af405aefc3cfc3adcc83a32c4d6e61431b1350789f6d21b4ba4b71682b7b" gracePeriod=30
Oct 11 07:57:58 crc kubenswrapper[5016]: I1011 07:57:58.859084 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1eacac67-0843-440c-ae51-210fc298084c" containerName="ceilometer-notification-agent" containerID="cri-o://fdf851ad23a02a93b245042bbe4fbca0e016177b9da6689924cdd2db0cc05455" gracePeriod=30
Oct 11 07:57:58 crc kubenswrapper[5016]: I1011 07:57:58.859110 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1eacac67-0843-440c-ae51-210fc298084c" containerName="sg-core" containerID="cri-o://3364c58e69936f9dd87208e53e694a7b9ac970e9493db7801b2ff8d2a9edb4d6" gracePeriod=30
Oct 11 07:57:59 crc kubenswrapper[5016]: I1011 07:57:59.873795 5016 generic.go:334] "Generic (PLEG): container finished" podID="1eacac67-0843-440c-ae51-210fc298084c" containerID="e669af405aefc3cfc3adcc83a32c4d6e61431b1350789f6d21b4ba4b71682b7b" exitCode=0
Oct 11 07:57:59 crc kubenswrapper[5016]: I1011 07:57:59.873859 5016 generic.go:334] "Generic (PLEG): container finished" podID="1eacac67-0843-440c-ae51-210fc298084c" containerID="3364c58e69936f9dd87208e53e694a7b9ac970e9493db7801b2ff8d2a9edb4d6" exitCode=2
Oct 11 07:57:59 crc kubenswrapper[5016]: I1011 07:57:59.873881 5016 generic.go:334] "Generic (PLEG): container finished" podID="1eacac67-0843-440c-ae51-210fc298084c" containerID="9c7741c4ab45744d4a719c644c587506d143ee252679a82bf2d47df5bc52a5f8" exitCode=0
Oct 11 07:57:59 crc kubenswrapper[5016]: I1011 07:57:59.873927 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1eacac67-0843-440c-ae51-210fc298084c","Type":"ContainerDied","Data":"e669af405aefc3cfc3adcc83a32c4d6e61431b1350789f6d21b4ba4b71682b7b"}
Oct 11 07:57:59 crc kubenswrapper[5016]: I1011 07:57:59.873990 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1eacac67-0843-440c-ae51-210fc298084c","Type":"ContainerDied","Data":"3364c58e69936f9dd87208e53e694a7b9ac970e9493db7801b2ff8d2a9edb4d6"}
Oct 11 07:57:59 crc kubenswrapper[5016]: I1011 07:57:59.874019 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1eacac67-0843-440c-ae51-210fc298084c","Type":"ContainerDied","Data":"9c7741c4ab45744d4a719c644c587506d143ee252679a82bf2d47df5bc52a5f8"}
Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.248597 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.442073 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-combined-ca-bundle\") pod \"1eacac67-0843-440c-ae51-210fc298084c\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") "
Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.442231 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-ceilometer-tls-certs\") pod \"1eacac67-0843-440c-ae51-210fc298084c\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") "
Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.442270 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-config-data\") pod \"1eacac67-0843-440c-ae51-210fc298084c\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") "
Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.442315 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-sg-core-conf-yaml\") pod \"1eacac67-0843-440c-ae51-210fc298084c\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") "
Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.442379 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1eacac67-0843-440c-ae51-210fc298084c-run-httpd\") pod \"1eacac67-0843-440c-ae51-210fc298084c\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") "
Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.442429 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1eacac67-0843-440c-ae51-210fc298084c-log-httpd\") pod \"1eacac67-0843-440c-ae51-210fc298084c\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") "
\"kubernetes.io/empty-dir/1eacac67-0843-440c-ae51-210fc298084c-log-httpd\") pod \"1eacac67-0843-440c-ae51-210fc298084c\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") " Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.442492 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-scripts\") pod \"1eacac67-0843-440c-ae51-210fc298084c\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") " Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.442522 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncxnn\" (UniqueName: \"kubernetes.io/projected/1eacac67-0843-440c-ae51-210fc298084c-kube-api-access-ncxnn\") pod \"1eacac67-0843-440c-ae51-210fc298084c\" (UID: \"1eacac67-0843-440c-ae51-210fc298084c\") " Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.443298 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1eacac67-0843-440c-ae51-210fc298084c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1eacac67-0843-440c-ae51-210fc298084c" (UID: "1eacac67-0843-440c-ae51-210fc298084c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.444893 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1eacac67-0843-440c-ae51-210fc298084c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1eacac67-0843-440c-ae51-210fc298084c" (UID: "1eacac67-0843-440c-ae51-210fc298084c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.448600 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eacac67-0843-440c-ae51-210fc298084c-kube-api-access-ncxnn" (OuterVolumeSpecName: "kube-api-access-ncxnn") pod "1eacac67-0843-440c-ae51-210fc298084c" (UID: "1eacac67-0843-440c-ae51-210fc298084c"). InnerVolumeSpecName "kube-api-access-ncxnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.458019 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-scripts" (OuterVolumeSpecName: "scripts") pod "1eacac67-0843-440c-ae51-210fc298084c" (UID: "1eacac67-0843-440c-ae51-210fc298084c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.474501 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1eacac67-0843-440c-ae51-210fc298084c" (UID: "1eacac67-0843-440c-ae51-210fc298084c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.490112 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "1eacac67-0843-440c-ae51-210fc298084c" (UID: "1eacac67-0843-440c-ae51-210fc298084c"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.520327 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1eacac67-0843-440c-ae51-210fc298084c" (UID: "1eacac67-0843-440c-ae51-210fc298084c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.545959 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.546443 5016 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.546595 5016 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.546738 5016 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1eacac67-0843-440c-ae51-210fc298084c-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.546858 5016 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1eacac67-0843-440c-ae51-210fc298084c-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.546993 5016 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-scripts\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.547079 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncxnn\" (UniqueName: \"kubernetes.io/projected/1eacac67-0843-440c-ae51-210fc298084c-kube-api-access-ncxnn\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.557922 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-config-data" (OuterVolumeSpecName: "config-data") pod "1eacac67-0843-440c-ae51-210fc298084c" (UID: "1eacac67-0843-440c-ae51-210fc298084c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.648799 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eacac67-0843-440c-ae51-210fc298084c-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.886085 5016 generic.go:334] "Generic (PLEG): container finished" podID="1eacac67-0843-440c-ae51-210fc298084c" containerID="fdf851ad23a02a93b245042bbe4fbca0e016177b9da6689924cdd2db0cc05455" exitCode=0 Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.886133 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1eacac67-0843-440c-ae51-210fc298084c","Type":"ContainerDied","Data":"fdf851ad23a02a93b245042bbe4fbca0e016177b9da6689924cdd2db0cc05455"} Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.886159 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1eacac67-0843-440c-ae51-210fc298084c","Type":"ContainerDied","Data":"88eeed5d8104dd92ddf60db5ce6005c793cfc29f44cfa049184ec499e503ea83"} Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.886186 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.886189 5016 scope.go:117] "RemoveContainer" containerID="e669af405aefc3cfc3adcc83a32c4d6e61431b1350789f6d21b4ba4b71682b7b" Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.926998 5016 scope.go:117] "RemoveContainer" containerID="3364c58e69936f9dd87208e53e694a7b9ac970e9493db7801b2ff8d2a9edb4d6" Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.940476 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.953630 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.970089 5016 scope.go:117] "RemoveContainer" containerID="fdf851ad23a02a93b245042bbe4fbca0e016177b9da6689924cdd2db0cc05455" Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.974600 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 11 07:58:00 crc kubenswrapper[5016]: E1011 07:58:00.975177 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4a1cba4-e094-4311-94b0-f18b957124f0" containerName="mariadb-account-create" Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.975214 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4a1cba4-e094-4311-94b0-f18b957124f0" containerName="mariadb-account-create" Oct 11 07:58:00 crc kubenswrapper[5016]: E1011 07:58:00.975249 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eacac67-0843-440c-ae51-210fc298084c" containerName="ceilometer-notification-agent" Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.975274 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eacac67-0843-440c-ae51-210fc298084c" containerName="ceilometer-notification-agent" Oct 11 07:58:00 crc kubenswrapper[5016]: E1011 07:58:00.975358 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eacac67-0843-440c-ae51-210fc298084c" containerName="proxy-httpd" Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.975379 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eacac67-0843-440c-ae51-210fc298084c" containerName="proxy-httpd" Oct 11 07:58:00 
Oct 11 07:58:00 crc kubenswrapper[5016]: E1011 07:58:00.975415 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eacac67-0843-440c-ae51-210fc298084c" containerName="sg-core"
Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.975433 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eacac67-0843-440c-ae51-210fc298084c" containerName="sg-core"
Oct 11 07:58:00 crc kubenswrapper[5016]: E1011 07:58:00.975463 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eacac67-0843-440c-ae51-210fc298084c" containerName="ceilometer-central-agent"
Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.975481 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eacac67-0843-440c-ae51-210fc298084c" containerName="ceilometer-central-agent"
Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.977423 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4a1cba4-e094-4311-94b0-f18b957124f0" containerName="mariadb-account-create"
Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.977472 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eacac67-0843-440c-ae51-210fc298084c" containerName="ceilometer-central-agent"
Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.977698 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eacac67-0843-440c-ae51-210fc298084c" containerName="sg-core"
Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.977763 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eacac67-0843-440c-ae51-210fc298084c" containerName="ceilometer-notification-agent"
Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.978892 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eacac67-0843-440c-ae51-210fc298084c" containerName="proxy-httpd"
Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.983495 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.985739 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.985954 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.986173 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.986333 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Oct 11 07:58:00 crc kubenswrapper[5016]: I1011 07:58:00.999707 5016 scope.go:117] "RemoveContainer" containerID="9c7741c4ab45744d4a719c644c587506d143ee252679a82bf2d47df5bc52a5f8"
Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.030459 5016 scope.go:117] "RemoveContainer" containerID="e669af405aefc3cfc3adcc83a32c4d6e61431b1350789f6d21b4ba4b71682b7b"
Oct 11 07:58:01 crc kubenswrapper[5016]: E1011 07:58:01.030897 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e669af405aefc3cfc3adcc83a32c4d6e61431b1350789f6d21b4ba4b71682b7b\": container with ID starting with e669af405aefc3cfc3adcc83a32c4d6e61431b1350789f6d21b4ba4b71682b7b not found: ID does not exist" containerID="e669af405aefc3cfc3adcc83a32c4d6e61431b1350789f6d21b4ba4b71682b7b"
Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.030935 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e669af405aefc3cfc3adcc83a32c4d6e61431b1350789f6d21b4ba4b71682b7b"} err="failed to get container status \"e669af405aefc3cfc3adcc83a32c4d6e61431b1350789f6d21b4ba4b71682b7b\": rpc error: code = NotFound desc = could not find container \"e669af405aefc3cfc3adcc83a32c4d6e61431b1350789f6d21b4ba4b71682b7b\": container with ID starting with e669af405aefc3cfc3adcc83a32c4d6e61431b1350789f6d21b4ba4b71682b7b not found: ID does not exist"
Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.030959 5016 scope.go:117] "RemoveContainer" containerID="3364c58e69936f9dd87208e53e694a7b9ac970e9493db7801b2ff8d2a9edb4d6"
Oct 11 07:58:01 crc kubenswrapper[5016]: E1011 07:58:01.031215 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3364c58e69936f9dd87208e53e694a7b9ac970e9493db7801b2ff8d2a9edb4d6\": container with ID starting with 3364c58e69936f9dd87208e53e694a7b9ac970e9493db7801b2ff8d2a9edb4d6 not found: ID does not exist" containerID="3364c58e69936f9dd87208e53e694a7b9ac970e9493db7801b2ff8d2a9edb4d6"
Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.031241 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3364c58e69936f9dd87208e53e694a7b9ac970e9493db7801b2ff8d2a9edb4d6"} err="failed to get container status \"3364c58e69936f9dd87208e53e694a7b9ac970e9493db7801b2ff8d2a9edb4d6\": rpc error: code = NotFound desc = could not find container \"3364c58e69936f9dd87208e53e694a7b9ac970e9493db7801b2ff8d2a9edb4d6\": container with ID starting with 3364c58e69936f9dd87208e53e694a7b9ac970e9493db7801b2ff8d2a9edb4d6 not found: ID does not exist"
Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.031258 5016 scope.go:117] "RemoveContainer" containerID="fdf851ad23a02a93b245042bbe4fbca0e016177b9da6689924cdd2db0cc05455"
Oct 11 07:58:01 crc kubenswrapper[5016]: E1011 07:58:01.031533 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdf851ad23a02a93b245042bbe4fbca0e016177b9da6689924cdd2db0cc05455\": container with ID starting with fdf851ad23a02a93b245042bbe4fbca0e016177b9da6689924cdd2db0cc05455 not found: ID does not exist" containerID="fdf851ad23a02a93b245042bbe4fbca0e016177b9da6689924cdd2db0cc05455"
Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.031573 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdf851ad23a02a93b245042bbe4fbca0e016177b9da6689924cdd2db0cc05455"} err="failed to get container status \"fdf851ad23a02a93b245042bbe4fbca0e016177b9da6689924cdd2db0cc05455\": rpc error: code = NotFound desc = could not find container \"fdf851ad23a02a93b245042bbe4fbca0e016177b9da6689924cdd2db0cc05455\": container with ID starting with fdf851ad23a02a93b245042bbe4fbca0e016177b9da6689924cdd2db0cc05455 not found: ID does not exist"
Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.031599 5016 scope.go:117] "RemoveContainer" containerID="9c7741c4ab45744d4a719c644c587506d143ee252679a82bf2d47df5bc52a5f8"
Oct 11 07:58:01 crc kubenswrapper[5016]: E1011 07:58:01.031939 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c7741c4ab45744d4a719c644c587506d143ee252679a82bf2d47df5bc52a5f8\": container with ID starting with 9c7741c4ab45744d4a719c644c587506d143ee252679a82bf2d47df5bc52a5f8 not found: ID does not exist" containerID="9c7741c4ab45744d4a719c644c587506d143ee252679a82bf2d47df5bc52a5f8"
Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.031976 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c7741c4ab45744d4a719c644c587506d143ee252679a82bf2d47df5bc52a5f8"} err="failed to get container status \"9c7741c4ab45744d4a719c644c587506d143ee252679a82bf2d47df5bc52a5f8\": rpc error: code = NotFound desc = could not find container \"9c7741c4ab45744d4a719c644c587506d143ee252679a82bf2d47df5bc52a5f8\": container with ID starting with 9c7741c4ab45744d4a719c644c587506d143ee252679a82bf2d47df5bc52a5f8 not found: ID does not exist"
Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.056080 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7b65d12-aa4e-439c-8c40-af327ebe8c88-log-httpd\") pod \"ceilometer-0\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " pod="openstack/ceilometer-0"
Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.056117 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-scripts\") pod \"ceilometer-0\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " pod="openstack/ceilometer-0"
Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.056138 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7b65d12-aa4e-439c-8c40-af327ebe8c88-run-httpd\") pod \"ceilometer-0\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " pod="openstack/ceilometer-0"
Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.056316 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtpg4\" (UniqueName: \"kubernetes.io/projected/c7b65d12-aa4e-439c-8c40-af327ebe8c88-kube-api-access-jtpg4\") pod \"ceilometer-0\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " pod="openstack/ceilometer-0"
\"kube-api-access-jtpg4\" (UniqueName: \"kubernetes.io/projected/c7b65d12-aa4e-439c-8c40-af327ebe8c88-kube-api-access-jtpg4\") pod \"ceilometer-0\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " pod="openstack/ceilometer-0" Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.056503 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " pod="openstack/ceilometer-0" Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.056767 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " pod="openstack/ceilometer-0" Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.056944 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-config-data\") pod \"ceilometer-0\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " pod="openstack/ceilometer-0" Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.057078 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " pod="openstack/ceilometer-0" Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.148527 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1eacac67-0843-440c-ae51-210fc298084c" path="/var/lib/kubelet/pods/1eacac67-0843-440c-ae51-210fc298084c/volumes" Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.158566 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " pod="openstack/ceilometer-0" Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.158640 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " pod="openstack/ceilometer-0" Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.158698 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-config-data\") pod \"ceilometer-0\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " pod="openstack/ceilometer-0" Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.158726 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " pod="openstack/ceilometer-0" Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.158802 5016 reconciler_common.go:218] "operationExecutor.MountVolume 
Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.158829 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-scripts\") pod \"ceilometer-0\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " pod="openstack/ceilometer-0"
Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.158848 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7b65d12-aa4e-439c-8c40-af327ebe8c88-run-httpd\") pod \"ceilometer-0\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " pod="openstack/ceilometer-0"
Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.158864 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtpg4\" (UniqueName: \"kubernetes.io/projected/c7b65d12-aa4e-439c-8c40-af327ebe8c88-kube-api-access-jtpg4\") pod \"ceilometer-0\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " pod="openstack/ceilometer-0"
Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.160305 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7b65d12-aa4e-439c-8c40-af327ebe8c88-log-httpd\") pod \"ceilometer-0\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " pod="openstack/ceilometer-0"
Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.161440 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7b65d12-aa4e-439c-8c40-af327ebe8c88-run-httpd\") pod \"ceilometer-0\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " pod="openstack/ceilometer-0"
Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.165012 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " pod="openstack/ceilometer-0"
Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.165479 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-scripts\") pod \"ceilometer-0\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " pod="openstack/ceilometer-0"
Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.165825 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-config-data\") pod \"ceilometer-0\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " pod="openstack/ceilometer-0"
Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.166319 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " pod="openstack/ceilometer-0"
Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.175080 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " pod="openstack/ceilometer-0"
\"kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " pod="openstack/ceilometer-0" Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.182318 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtpg4\" (UniqueName: \"kubernetes.io/projected/c7b65d12-aa4e-439c-8c40-af327ebe8c88-kube-api-access-jtpg4\") pod \"ceilometer-0\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " pod="openstack/ceilometer-0" Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.336802 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.790369 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 11 07:58:01 crc kubenswrapper[5016]: I1011 07:58:01.896103 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7b65d12-aa4e-439c-8c40-af327ebe8c88","Type":"ContainerStarted","Data":"2d9fec95a60fc8dd0b823d1ca3ffc0fdc35fc94b440b86d80c3497e0a647acd0"} Oct 11 07:58:02 crc kubenswrapper[5016]: I1011 07:58:02.906129 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7b65d12-aa4e-439c-8c40-af327ebe8c88","Type":"ContainerStarted","Data":"47a583747ef098ea4ce6ca00d3df90db1342290b40a692c439485d2d910431d9"} Oct 11 07:58:03 crc kubenswrapper[5016]: I1011 07:58:03.919501 5016 generic.go:334] "Generic (PLEG): container finished" podID="cd0b12ea-b33a-4421-bdd6-3bbbb2fca659" containerID="e2974999130b71d3392ad2c3365d847098bf0c0773bc18d93f74c30604011c95" exitCode=0 Oct 11 07:58:03 crc kubenswrapper[5016]: I1011 07:58:03.919634 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2mzxx" event={"ID":"cd0b12ea-b33a-4421-bdd6-3bbbb2fca659","Type":"ContainerDied","Data":"e2974999130b71d3392ad2c3365d847098bf0c0773bc18d93f74c30604011c95"} Oct 11 07:58:03 crc kubenswrapper[5016]: I1011 07:58:03.924003 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7b65d12-aa4e-439c-8c40-af327ebe8c88","Type":"ContainerStarted","Data":"0e7c731f38fa174e86ba1610cae56ba301bca4f65b5d16008ee3cbcdcaee4544"} Oct 11 07:58:03 crc kubenswrapper[5016]: I1011 07:58:03.924032 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7b65d12-aa4e-439c-8c40-af327ebe8c88","Type":"ContainerStarted","Data":"17f9c9a3fe9f8aa9babcdeba9bd87e7da27c3f298176db8dccf7691277d213e0"} Oct 11 07:58:05 crc kubenswrapper[5016]: I1011 07:58:05.394277 5016 util.go:48] "No ready sandbox for pod can be found. 
Oct 11 07:58:05 crc kubenswrapper[5016]: I1011 07:58:05.444426 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8rkj\" (UniqueName: \"kubernetes.io/projected/cd0b12ea-b33a-4421-bdd6-3bbbb2fca659-kube-api-access-g8rkj\") pod \"cd0b12ea-b33a-4421-bdd6-3bbbb2fca659\" (UID: \"cd0b12ea-b33a-4421-bdd6-3bbbb2fca659\") "
Oct 11 07:58:05 crc kubenswrapper[5016]: I1011 07:58:05.444719 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0b12ea-b33a-4421-bdd6-3bbbb2fca659-config-data\") pod \"cd0b12ea-b33a-4421-bdd6-3bbbb2fca659\" (UID: \"cd0b12ea-b33a-4421-bdd6-3bbbb2fca659\") "
Oct 11 07:58:05 crc kubenswrapper[5016]: I1011 07:58:05.444855 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0b12ea-b33a-4421-bdd6-3bbbb2fca659-scripts\") pod \"cd0b12ea-b33a-4421-bdd6-3bbbb2fca659\" (UID: \"cd0b12ea-b33a-4421-bdd6-3bbbb2fca659\") "
Oct 11 07:58:05 crc kubenswrapper[5016]: I1011 07:58:05.444930 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0b12ea-b33a-4421-bdd6-3bbbb2fca659-combined-ca-bundle\") pod \"cd0b12ea-b33a-4421-bdd6-3bbbb2fca659\" (UID: \"cd0b12ea-b33a-4421-bdd6-3bbbb2fca659\") "
Oct 11 07:58:05 crc kubenswrapper[5016]: I1011 07:58:05.449981 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd0b12ea-b33a-4421-bdd6-3bbbb2fca659-scripts" (OuterVolumeSpecName: "scripts") pod "cd0b12ea-b33a-4421-bdd6-3bbbb2fca659" (UID: "cd0b12ea-b33a-4421-bdd6-3bbbb2fca659"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 07:58:05 crc kubenswrapper[5016]: I1011 07:58:05.450810 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd0b12ea-b33a-4421-bdd6-3bbbb2fca659-kube-api-access-g8rkj" (OuterVolumeSpecName: "kube-api-access-g8rkj") pod "cd0b12ea-b33a-4421-bdd6-3bbbb2fca659" (UID: "cd0b12ea-b33a-4421-bdd6-3bbbb2fca659"). InnerVolumeSpecName "kube-api-access-g8rkj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 07:58:05 crc kubenswrapper[5016]: I1011 07:58:05.491990 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd0b12ea-b33a-4421-bdd6-3bbbb2fca659-config-data" (OuterVolumeSpecName: "config-data") pod "cd0b12ea-b33a-4421-bdd6-3bbbb2fca659" (UID: "cd0b12ea-b33a-4421-bdd6-3bbbb2fca659"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 07:58:05 crc kubenswrapper[5016]: I1011 07:58:05.503797 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd0b12ea-b33a-4421-bdd6-3bbbb2fca659-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd0b12ea-b33a-4421-bdd6-3bbbb2fca659" (UID: "cd0b12ea-b33a-4421-bdd6-3bbbb2fca659"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:05 crc kubenswrapper[5016]: I1011 07:58:05.547059 5016 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0b12ea-b33a-4421-bdd6-3bbbb2fca659-scripts\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:05 crc kubenswrapper[5016]: I1011 07:58:05.547095 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0b12ea-b33a-4421-bdd6-3bbbb2fca659-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:05 crc kubenswrapper[5016]: I1011 07:58:05.547111 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8rkj\" (UniqueName: \"kubernetes.io/projected/cd0b12ea-b33a-4421-bdd6-3bbbb2fca659-kube-api-access-g8rkj\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:05 crc kubenswrapper[5016]: I1011 07:58:05.547123 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0b12ea-b33a-4421-bdd6-3bbbb2fca659-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:05 crc kubenswrapper[5016]: I1011 07:58:05.944156 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7b65d12-aa4e-439c-8c40-af327ebe8c88","Type":"ContainerStarted","Data":"d903b91eb5574e286cc335c6052d30e5b03a6bd2c678c0775d5e4b3f0fb31ec0"} Oct 11 07:58:05 crc kubenswrapper[5016]: I1011 07:58:05.945078 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 11 07:58:05 crc kubenswrapper[5016]: I1011 07:58:05.948174 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-2mzxx" event={"ID":"cd0b12ea-b33a-4421-bdd6-3bbbb2fca659","Type":"ContainerDied","Data":"e98661910afdfa3e40d897a4c30aa07b6a9ab5fc83290557cf6f2b64d0749059"} Oct 11 07:58:05 crc kubenswrapper[5016]: I1011 07:58:05.948200 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e98661910afdfa3e40d897a4c30aa07b6a9ab5fc83290557cf6f2b64d0749059" Oct 11 07:58:05 crc kubenswrapper[5016]: I1011 07:58:05.948241 5016 util.go:48] "No ready sandbox for pod can be found. 
Oct 11 07:58:05 crc kubenswrapper[5016]: I1011 07:58:05.986758 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.536577517 podStartE2EDuration="5.986738203s" podCreationTimestamp="2025-10-11 07:58:00 +0000 UTC" firstStartedPulling="2025-10-11 07:58:01.798308248 +0000 UTC m=+1069.698764244" lastFinishedPulling="2025-10-11 07:58:05.248468974 +0000 UTC m=+1073.148924930" observedRunningTime="2025-10-11 07:58:05.984823874 +0000 UTC m=+1073.885279820" watchObservedRunningTime="2025-10-11 07:58:05.986738203 +0000 UTC m=+1073.887194149"
Oct 11 07:58:06 crc kubenswrapper[5016]: I1011 07:58:06.047115 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Oct 11 07:58:06 crc kubenswrapper[5016]: E1011 07:58:06.047626 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0b12ea-b33a-4421-bdd6-3bbbb2fca659" containerName="nova-cell0-conductor-db-sync"
Oct 11 07:58:06 crc kubenswrapper[5016]: I1011 07:58:06.047664 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0b12ea-b33a-4421-bdd6-3bbbb2fca659" containerName="nova-cell0-conductor-db-sync"
Oct 11 07:58:06 crc kubenswrapper[5016]: I1011 07:58:06.047872 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd0b12ea-b33a-4421-bdd6-3bbbb2fca659" containerName="nova-cell0-conductor-db-sync"
Oct 11 07:58:06 crc kubenswrapper[5016]: I1011 07:58:06.048600 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Oct 11 07:58:06 crc kubenswrapper[5016]: I1011 07:58:06.050858 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Oct 11 07:58:06 crc kubenswrapper[5016]: I1011 07:58:06.051078 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-sv8xn"
Oct 11 07:58:06 crc kubenswrapper[5016]: I1011 07:58:06.068696 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Oct 11 07:58:06 crc kubenswrapper[5016]: I1011 07:58:06.159994 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94518be7-3770-4e6f-8f65-4e955b7bca60-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"94518be7-3770-4e6f-8f65-4e955b7bca60\") " pod="openstack/nova-cell0-conductor-0"
Oct 11 07:58:06 crc kubenswrapper[5016]: I1011 07:58:06.160066 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glmwl\" (UniqueName: \"kubernetes.io/projected/94518be7-3770-4e6f-8f65-4e955b7bca60-kube-api-access-glmwl\") pod \"nova-cell0-conductor-0\" (UID: \"94518be7-3770-4e6f-8f65-4e955b7bca60\") " pod="openstack/nova-cell0-conductor-0"
Oct 11 07:58:06 crc kubenswrapper[5016]: I1011 07:58:06.160115 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94518be7-3770-4e6f-8f65-4e955b7bca60-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"94518be7-3770-4e6f-8f65-4e955b7bca60\") " pod="openstack/nova-cell0-conductor-0"
Oct 11 07:58:06 crc kubenswrapper[5016]: I1011 07:58:06.262229 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94518be7-3770-4e6f-8f65-4e955b7bca60-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"94518be7-3770-4e6f-8f65-4e955b7bca60\") " pod="openstack/nova-cell0-conductor-0"
\"kubernetes.io/secret/94518be7-3770-4e6f-8f65-4e955b7bca60-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"94518be7-3770-4e6f-8f65-4e955b7bca60\") " pod="openstack/nova-cell0-conductor-0" Oct 11 07:58:06 crc kubenswrapper[5016]: I1011 07:58:06.262324 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glmwl\" (UniqueName: \"kubernetes.io/projected/94518be7-3770-4e6f-8f65-4e955b7bca60-kube-api-access-glmwl\") pod \"nova-cell0-conductor-0\" (UID: \"94518be7-3770-4e6f-8f65-4e955b7bca60\") " pod="openstack/nova-cell0-conductor-0" Oct 11 07:58:06 crc kubenswrapper[5016]: I1011 07:58:06.263521 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94518be7-3770-4e6f-8f65-4e955b7bca60-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"94518be7-3770-4e6f-8f65-4e955b7bca60\") " pod="openstack/nova-cell0-conductor-0" Oct 11 07:58:06 crc kubenswrapper[5016]: I1011 07:58:06.268286 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94518be7-3770-4e6f-8f65-4e955b7bca60-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"94518be7-3770-4e6f-8f65-4e955b7bca60\") " pod="openstack/nova-cell0-conductor-0" Oct 11 07:58:06 crc kubenswrapper[5016]: I1011 07:58:06.269551 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94518be7-3770-4e6f-8f65-4e955b7bca60-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"94518be7-3770-4e6f-8f65-4e955b7bca60\") " pod="openstack/nova-cell0-conductor-0" Oct 11 07:58:06 crc kubenswrapper[5016]: I1011 07:58:06.284341 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glmwl\" (UniqueName: \"kubernetes.io/projected/94518be7-3770-4e6f-8f65-4e955b7bca60-kube-api-access-glmwl\") pod \"nova-cell0-conductor-0\" (UID: \"94518be7-3770-4e6f-8f65-4e955b7bca60\") " pod="openstack/nova-cell0-conductor-0" Oct 11 07:58:06 crc kubenswrapper[5016]: I1011 07:58:06.367799 5016 util.go:30] "No sandbox for pod can be found. 
Oct 11 07:58:06 crc kubenswrapper[5016]: I1011 07:58:06.828350 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Oct 11 07:58:06 crc kubenswrapper[5016]: W1011 07:58:06.830594 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94518be7_3770_4e6f_8f65_4e955b7bca60.slice/crio-0591d6b23ee395d8d96a960ce3403923f16ab10634a83e1ef7c1d1f8aa2b48d4 WatchSource:0}: Error finding container 0591d6b23ee395d8d96a960ce3403923f16ab10634a83e1ef7c1d1f8aa2b48d4: Status 404 returned error can't find the container with id 0591d6b23ee395d8d96a960ce3403923f16ab10634a83e1ef7c1d1f8aa2b48d4
Oct 11 07:58:06 crc kubenswrapper[5016]: I1011 07:58:06.960242 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"94518be7-3770-4e6f-8f65-4e955b7bca60","Type":"ContainerStarted","Data":"0591d6b23ee395d8d96a960ce3403923f16ab10634a83e1ef7c1d1f8aa2b48d4"}
Oct 11 07:58:07 crc kubenswrapper[5016]: I1011 07:58:07.122289 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 11 07:58:07 crc kubenswrapper[5016]: I1011 07:58:07.122358 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 11 07:58:07 crc kubenswrapper[5016]: I1011 07:58:07.972823 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"94518be7-3770-4e6f-8f65-4e955b7bca60","Type":"ContainerStarted","Data":"70992f9588c8dd114fc118d9e2f5c5b1cfa21551f5cb24f56bc0746ea18aafba"}
Oct 11 07:58:07 crc kubenswrapper[5016]: I1011 07:58:07.997481 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.997459879 podStartE2EDuration="1.997459879s" podCreationTimestamp="2025-10-11 07:58:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:58:07.990208791 +0000 UTC m=+1075.890664757" watchObservedRunningTime="2025-10-11 07:58:07.997459879 +0000 UTC m=+1075.897915845"
Oct 11 07:58:08 crc kubenswrapper[5016]: I1011 07:58:08.985959 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Oct 11 07:58:11 crc kubenswrapper[5016]: I1011 07:58:11.395773 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Oct 11 07:58:11 crc kubenswrapper[5016]: I1011 07:58:11.837897 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-s2g6b"]
Oct 11 07:58:11 crc kubenswrapper[5016]: I1011 07:58:11.839309 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-s2g6b"
Oct 11 07:58:11 crc kubenswrapper[5016]: I1011 07:58:11.841845 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Oct 11 07:58:11 crc kubenswrapper[5016]: I1011 07:58:11.842117 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Oct 11 07:58:11 crc kubenswrapper[5016]: I1011 07:58:11.851567 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-s2g6b"]
Oct 11 07:58:11 crc kubenswrapper[5016]: I1011 07:58:11.968874 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59bbbd97-5192-4abe-bbe4-2a532e02a4e3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-s2g6b\" (UID: \"59bbbd97-5192-4abe-bbe4-2a532e02a4e3\") " pod="openstack/nova-cell0-cell-mapping-s2g6b"
Oct 11 07:58:11 crc kubenswrapper[5016]: I1011 07:58:11.968927 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwsjk\" (UniqueName: \"kubernetes.io/projected/59bbbd97-5192-4abe-bbe4-2a532e02a4e3-kube-api-access-wwsjk\") pod \"nova-cell0-cell-mapping-s2g6b\" (UID: \"59bbbd97-5192-4abe-bbe4-2a532e02a4e3\") " pod="openstack/nova-cell0-cell-mapping-s2g6b"
Oct 11 07:58:11 crc kubenswrapper[5016]: I1011 07:58:11.968962 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59bbbd97-5192-4abe-bbe4-2a532e02a4e3-config-data\") pod \"nova-cell0-cell-mapping-s2g6b\" (UID: \"59bbbd97-5192-4abe-bbe4-2a532e02a4e3\") " pod="openstack/nova-cell0-cell-mapping-s2g6b"
Oct 11 07:58:11 crc kubenswrapper[5016]: I1011 07:58:11.969058 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59bbbd97-5192-4abe-bbe4-2a532e02a4e3-scripts\") pod \"nova-cell0-cell-mapping-s2g6b\" (UID: \"59bbbd97-5192-4abe-bbe4-2a532e02a4e3\") " pod="openstack/nova-cell0-cell-mapping-s2g6b"
Oct 11 07:58:11 crc kubenswrapper[5016]: I1011 07:58:11.978538 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Oct 11 07:58:11 crc kubenswrapper[5016]: I1011 07:58:11.980582 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 11 07:58:11 crc kubenswrapper[5016]: I1011 07:58:11.983080 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Oct 11 07:58:11 crc kubenswrapper[5016]: I1011 07:58:11.991467 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.069272 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.070178 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwsjk\" (UniqueName: \"kubernetes.io/projected/59bbbd97-5192-4abe-bbe4-2a532e02a4e3-kube-api-access-wwsjk\") pod \"nova-cell0-cell-mapping-s2g6b\" (UID: \"59bbbd97-5192-4abe-bbe4-2a532e02a4e3\") " pod="openstack/nova-cell0-cell-mapping-s2g6b"
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.070233 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh68r\" (UniqueName: \"kubernetes.io/projected/f1e6cbd3-8533-4cd3-8dca-47f0d616608c-kube-api-access-hh68r\") pod \"nova-api-0\" (UID: \"f1e6cbd3-8533-4cd3-8dca-47f0d616608c\") " pod="openstack/nova-api-0"
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.070267 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1e6cbd3-8533-4cd3-8dca-47f0d616608c-config-data\") pod \"nova-api-0\" (UID: \"f1e6cbd3-8533-4cd3-8dca-47f0d616608c\") " pod="openstack/nova-api-0"
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.070302 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59bbbd97-5192-4abe-bbe4-2a532e02a4e3-config-data\") pod \"nova-cell0-cell-mapping-s2g6b\" (UID: \"59bbbd97-5192-4abe-bbe4-2a532e02a4e3\") " pod="openstack/nova-cell0-cell-mapping-s2g6b"
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.070402 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1e6cbd3-8533-4cd3-8dca-47f0d616608c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f1e6cbd3-8533-4cd3-8dca-47f0d616608c\") " pod="openstack/nova-api-0"
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.070439 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59bbbd97-5192-4abe-bbe4-2a532e02a4e3-scripts\") pod \"nova-cell0-cell-mapping-s2g6b\" (UID: \"59bbbd97-5192-4abe-bbe4-2a532e02a4e3\") " pod="openstack/nova-cell0-cell-mapping-s2g6b"
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.070483 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1e6cbd3-8533-4cd3-8dca-47f0d616608c-logs\") pod \"nova-api-0\" (UID: \"f1e6cbd3-8533-4cd3-8dca-47f0d616608c\") " pod="openstack/nova-api-0"
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.070516 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59bbbd97-5192-4abe-bbe4-2a532e02a4e3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-s2g6b\" (UID: \"59bbbd97-5192-4abe-bbe4-2a532e02a4e3\") " pod="openstack/nova-cell0-cell-mapping-s2g6b"
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.071082 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.079311 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59bbbd97-5192-4abe-bbe4-2a532e02a4e3-scripts\") pod \"nova-cell0-cell-mapping-s2g6b\" (UID: \"59bbbd97-5192-4abe-bbe4-2a532e02a4e3\") " pod="openstack/nova-cell0-cell-mapping-s2g6b"
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.079664 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.081814 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59bbbd97-5192-4abe-bbe4-2a532e02a4e3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-s2g6b\" (UID: \"59bbbd97-5192-4abe-bbe4-2a532e02a4e3\") " pod="openstack/nova-cell0-cell-mapping-s2g6b"
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.096764 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.098566 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.099354 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59bbbd97-5192-4abe-bbe4-2a532e02a4e3-config-data\") pod \"nova-cell0-cell-mapping-s2g6b\" (UID: \"59bbbd97-5192-4abe-bbe4-2a532e02a4e3\") " pod="openstack/nova-cell0-cell-mapping-s2g6b"
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.099377 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwsjk\" (UniqueName: \"kubernetes.io/projected/59bbbd97-5192-4abe-bbe4-2a532e02a4e3-kube-api-access-wwsjk\") pod \"nova-cell0-cell-mapping-s2g6b\" (UID: \"59bbbd97-5192-4abe-bbe4-2a532e02a4e3\") " pod="openstack/nova-cell0-cell-mapping-s2g6b"
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.102725 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.103151 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.133580 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.168169 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-s2g6b"
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.172174 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1e6cbd3-8533-4cd3-8dca-47f0d616608c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f1e6cbd3-8533-4cd3-8dca-47f0d616608c\") " pod="openstack/nova-api-0"
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.172276 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1e6cbd3-8533-4cd3-8dca-47f0d616608c-logs\") pod \"nova-api-0\" (UID: \"f1e6cbd3-8533-4cd3-8dca-47f0d616608c\") " pod="openstack/nova-api-0"
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.172337 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh68r\" (UniqueName: \"kubernetes.io/projected/f1e6cbd3-8533-4cd3-8dca-47f0d616608c-kube-api-access-hh68r\") pod \"nova-api-0\" (UID: \"f1e6cbd3-8533-4cd3-8dca-47f0d616608c\") " pod="openstack/nova-api-0"
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.172368 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1e6cbd3-8533-4cd3-8dca-47f0d616608c-config-data\") pod \"nova-api-0\" (UID: \"f1e6cbd3-8533-4cd3-8dca-47f0d616608c\") " pod="openstack/nova-api-0"
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.178215 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1e6cbd3-8533-4cd3-8dca-47f0d616608c-logs\") pod \"nova-api-0\" (UID: \"f1e6cbd3-8533-4cd3-8dca-47f0d616608c\") " pod="openstack/nova-api-0"
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.182155 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1e6cbd3-8533-4cd3-8dca-47f0d616608c-config-data\") pod \"nova-api-0\" (UID: \"f1e6cbd3-8533-4cd3-8dca-47f0d616608c\") " pod="openstack/nova-api-0"
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.188846 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1e6cbd3-8533-4cd3-8dca-47f0d616608c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f1e6cbd3-8533-4cd3-8dca-47f0d616608c\") " pod="openstack/nova-api-0"
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.213912 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh68r\" (UniqueName: \"kubernetes.io/projected/f1e6cbd3-8533-4cd3-8dca-47f0d616608c-kube-api-access-hh68r\") pod \"nova-api-0\" (UID: \"f1e6cbd3-8533-4cd3-8dca-47f0d616608c\") " pod="openstack/nova-api-0"
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.217342 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.220411 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.233813 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.249337 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.267972 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54974c8ff5-6tx6j"] Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.280598 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54974c8ff5-6tx6j" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.288907 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4970a407-4c60-4a51-9441-ae0f83326dc8-config-data\") pod \"nova-metadata-0\" (UID: \"4970a407-4c60-4a51-9441-ae0f83326dc8\") " pod="openstack/nova-metadata-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.288972 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4970a407-4c60-4a51-9441-ae0f83326dc8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4970a407-4c60-4a51-9441-ae0f83326dc8\") " pod="openstack/nova-metadata-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.289081 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84lwc\" (UniqueName: \"kubernetes.io/projected/d9b52b80-9f41-400b-a0fa-9f8699c1a4e9-kube-api-access-84lwc\") pod \"nova-scheduler-0\" (UID: \"d9b52b80-9f41-400b-a0fa-9f8699c1a4e9\") " pod="openstack/nova-scheduler-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.289143 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4970a407-4c60-4a51-9441-ae0f83326dc8-logs\") pod \"nova-metadata-0\" (UID: \"4970a407-4c60-4a51-9441-ae0f83326dc8\") " pod="openstack/nova-metadata-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.289174 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b52b80-9f41-400b-a0fa-9f8699c1a4e9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d9b52b80-9f41-400b-a0fa-9f8699c1a4e9\") " pod="openstack/nova-scheduler-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.289308 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvr5m\" (UniqueName: \"kubernetes.io/projected/4970a407-4c60-4a51-9441-ae0f83326dc8-kube-api-access-zvr5m\") pod \"nova-metadata-0\" (UID: \"4970a407-4c60-4a51-9441-ae0f83326dc8\") " pod="openstack/nova-metadata-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.289454 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9b52b80-9f41-400b-a0fa-9f8699c1a4e9-config-data\") pod \"nova-scheduler-0\" (UID: \"d9b52b80-9f41-400b-a0fa-9f8699c1a4e9\") " pod="openstack/nova-scheduler-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.294924 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.309814 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54974c8ff5-6tx6j"] Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.392144 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4970a407-4c60-4a51-9441-ae0f83326dc8-config-data\") pod \"nova-metadata-0\" (UID: \"4970a407-4c60-4a51-9441-ae0f83326dc8\") " pod="openstack/nova-metadata-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.392244 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4970a407-4c60-4a51-9441-ae0f83326dc8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4970a407-4c60-4a51-9441-ae0f83326dc8\") " pod="openstack/nova-metadata-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.392302 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84lwc\" (UniqueName: \"kubernetes.io/projected/d9b52b80-9f41-400b-a0fa-9f8699c1a4e9-kube-api-access-84lwc\") pod \"nova-scheduler-0\" (UID: \"d9b52b80-9f41-400b-a0fa-9f8699c1a4e9\") " pod="openstack/nova-scheduler-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.392340 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4970a407-4c60-4a51-9441-ae0f83326dc8-logs\") pod \"nova-metadata-0\" (UID: \"4970a407-4c60-4a51-9441-ae0f83326dc8\") " pod="openstack/nova-metadata-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.392371 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a164b940-476a-412d-aca9-4bf6b718d6c8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a164b940-476a-412d-aca9-4bf6b718d6c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.392397 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b52b80-9f41-400b-a0fa-9f8699c1a4e9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d9b52b80-9f41-400b-a0fa-9f8699c1a4e9\") " pod="openstack/nova-scheduler-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.392428 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91e60a92-017b-4e6f-99c2-4afce0c72bbc-dns-svc\") pod \"dnsmasq-dns-54974c8ff5-6tx6j\" (UID: \"91e60a92-017b-4e6f-99c2-4afce0c72bbc\") " pod="openstack/dnsmasq-dns-54974c8ff5-6tx6j" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.392467 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91e60a92-017b-4e6f-99c2-4afce0c72bbc-ovsdbserver-sb\") pod \"dnsmasq-dns-54974c8ff5-6tx6j\" (UID: \"91e60a92-017b-4e6f-99c2-4afce0c72bbc\") " pod="openstack/dnsmasq-dns-54974c8ff5-6tx6j" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.392486 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvr5m\" (UniqueName: \"kubernetes.io/projected/4970a407-4c60-4a51-9441-ae0f83326dc8-kube-api-access-zvr5m\") pod \"nova-metadata-0\" (UID: 
\"4970a407-4c60-4a51-9441-ae0f83326dc8\") " pod="openstack/nova-metadata-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.392503 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91e60a92-017b-4e6f-99c2-4afce0c72bbc-config\") pod \"dnsmasq-dns-54974c8ff5-6tx6j\" (UID: \"91e60a92-017b-4e6f-99c2-4afce0c72bbc\") " pod="openstack/dnsmasq-dns-54974c8ff5-6tx6j" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.392529 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5kcq\" (UniqueName: \"kubernetes.io/projected/91e60a92-017b-4e6f-99c2-4afce0c72bbc-kube-api-access-q5kcq\") pod \"dnsmasq-dns-54974c8ff5-6tx6j\" (UID: \"91e60a92-017b-4e6f-99c2-4afce0c72bbc\") " pod="openstack/dnsmasq-dns-54974c8ff5-6tx6j" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.392553 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91e60a92-017b-4e6f-99c2-4afce0c72bbc-ovsdbserver-nb\") pod \"dnsmasq-dns-54974c8ff5-6tx6j\" (UID: \"91e60a92-017b-4e6f-99c2-4afce0c72bbc\") " pod="openstack/dnsmasq-dns-54974c8ff5-6tx6j" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.392583 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq7mt\" (UniqueName: \"kubernetes.io/projected/a164b940-476a-412d-aca9-4bf6b718d6c8-kube-api-access-tq7mt\") pod \"nova-cell1-novncproxy-0\" (UID: \"a164b940-476a-412d-aca9-4bf6b718d6c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.392599 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a164b940-476a-412d-aca9-4bf6b718d6c8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a164b940-476a-412d-aca9-4bf6b718d6c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.392615 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9b52b80-9f41-400b-a0fa-9f8699c1a4e9-config-data\") pod \"nova-scheduler-0\" (UID: \"d9b52b80-9f41-400b-a0fa-9f8699c1a4e9\") " pod="openstack/nova-scheduler-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.396637 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4970a407-4c60-4a51-9441-ae0f83326dc8-logs\") pod \"nova-metadata-0\" (UID: \"4970a407-4c60-4a51-9441-ae0f83326dc8\") " pod="openstack/nova-metadata-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.398839 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4970a407-4c60-4a51-9441-ae0f83326dc8-config-data\") pod \"nova-metadata-0\" (UID: \"4970a407-4c60-4a51-9441-ae0f83326dc8\") " pod="openstack/nova-metadata-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.399504 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9b52b80-9f41-400b-a0fa-9f8699c1a4e9-config-data\") pod \"nova-scheduler-0\" (UID: \"d9b52b80-9f41-400b-a0fa-9f8699c1a4e9\") " pod="openstack/nova-scheduler-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.399794 5016 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b52b80-9f41-400b-a0fa-9f8699c1a4e9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d9b52b80-9f41-400b-a0fa-9f8699c1a4e9\") " pod="openstack/nova-scheduler-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.421551 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4970a407-4c60-4a51-9441-ae0f83326dc8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4970a407-4c60-4a51-9441-ae0f83326dc8\") " pod="openstack/nova-metadata-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.439645 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvr5m\" (UniqueName: \"kubernetes.io/projected/4970a407-4c60-4a51-9441-ae0f83326dc8-kube-api-access-zvr5m\") pod \"nova-metadata-0\" (UID: \"4970a407-4c60-4a51-9441-ae0f83326dc8\") " pod="openstack/nova-metadata-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.440068 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84lwc\" (UniqueName: \"kubernetes.io/projected/d9b52b80-9f41-400b-a0fa-9f8699c1a4e9-kube-api-access-84lwc\") pod \"nova-scheduler-0\" (UID: \"d9b52b80-9f41-400b-a0fa-9f8699c1a4e9\") " pod="openstack/nova-scheduler-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.470099 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.501322 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a164b940-476a-412d-aca9-4bf6b718d6c8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a164b940-476a-412d-aca9-4bf6b718d6c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.501375 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91e60a92-017b-4e6f-99c2-4afce0c72bbc-dns-svc\") pod \"dnsmasq-dns-54974c8ff5-6tx6j\" (UID: \"91e60a92-017b-4e6f-99c2-4afce0c72bbc\") " pod="openstack/dnsmasq-dns-54974c8ff5-6tx6j" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.501415 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91e60a92-017b-4e6f-99c2-4afce0c72bbc-ovsdbserver-sb\") pod \"dnsmasq-dns-54974c8ff5-6tx6j\" (UID: \"91e60a92-017b-4e6f-99c2-4afce0c72bbc\") " pod="openstack/dnsmasq-dns-54974c8ff5-6tx6j" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.501431 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91e60a92-017b-4e6f-99c2-4afce0c72bbc-config\") pod \"dnsmasq-dns-54974c8ff5-6tx6j\" (UID: \"91e60a92-017b-4e6f-99c2-4afce0c72bbc\") " pod="openstack/dnsmasq-dns-54974c8ff5-6tx6j" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.501466 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5kcq\" (UniqueName: \"kubernetes.io/projected/91e60a92-017b-4e6f-99c2-4afce0c72bbc-kube-api-access-q5kcq\") pod \"dnsmasq-dns-54974c8ff5-6tx6j\" (UID: \"91e60a92-017b-4e6f-99c2-4afce0c72bbc\") " pod="openstack/dnsmasq-dns-54974c8ff5-6tx6j" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.501490 
5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91e60a92-017b-4e6f-99c2-4afce0c72bbc-ovsdbserver-nb\") pod \"dnsmasq-dns-54974c8ff5-6tx6j\" (UID: \"91e60a92-017b-4e6f-99c2-4afce0c72bbc\") " pod="openstack/dnsmasq-dns-54974c8ff5-6tx6j" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.501520 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq7mt\" (UniqueName: \"kubernetes.io/projected/a164b940-476a-412d-aca9-4bf6b718d6c8-kube-api-access-tq7mt\") pod \"nova-cell1-novncproxy-0\" (UID: \"a164b940-476a-412d-aca9-4bf6b718d6c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.501535 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a164b940-476a-412d-aca9-4bf6b718d6c8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a164b940-476a-412d-aca9-4bf6b718d6c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.502856 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91e60a92-017b-4e6f-99c2-4afce0c72bbc-ovsdbserver-nb\") pod \"dnsmasq-dns-54974c8ff5-6tx6j\" (UID: \"91e60a92-017b-4e6f-99c2-4afce0c72bbc\") " pod="openstack/dnsmasq-dns-54974c8ff5-6tx6j" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.503550 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91e60a92-017b-4e6f-99c2-4afce0c72bbc-ovsdbserver-sb\") pod \"dnsmasq-dns-54974c8ff5-6tx6j\" (UID: \"91e60a92-017b-4e6f-99c2-4afce0c72bbc\") " pod="openstack/dnsmasq-dns-54974c8ff5-6tx6j" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.502637 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91e60a92-017b-4e6f-99c2-4afce0c72bbc-config\") pod \"dnsmasq-dns-54974c8ff5-6tx6j\" (UID: \"91e60a92-017b-4e6f-99c2-4afce0c72bbc\") " pod="openstack/dnsmasq-dns-54974c8ff5-6tx6j" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.504079 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91e60a92-017b-4e6f-99c2-4afce0c72bbc-dns-svc\") pod \"dnsmasq-dns-54974c8ff5-6tx6j\" (UID: \"91e60a92-017b-4e6f-99c2-4afce0c72bbc\") " pod="openstack/dnsmasq-dns-54974c8ff5-6tx6j" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.507413 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a164b940-476a-412d-aca9-4bf6b718d6c8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a164b940-476a-412d-aca9-4bf6b718d6c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.513260 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a164b940-476a-412d-aca9-4bf6b718d6c8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a164b940-476a-412d-aca9-4bf6b718d6c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.542545 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5kcq\" (UniqueName: 
\"kubernetes.io/projected/91e60a92-017b-4e6f-99c2-4afce0c72bbc-kube-api-access-q5kcq\") pod \"dnsmasq-dns-54974c8ff5-6tx6j\" (UID: \"91e60a92-017b-4e6f-99c2-4afce0c72bbc\") " pod="openstack/dnsmasq-dns-54974c8ff5-6tx6j" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.559366 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq7mt\" (UniqueName: \"kubernetes.io/projected/a164b940-476a-412d-aca9-4bf6b718d6c8-kube-api-access-tq7mt\") pod \"nova-cell1-novncproxy-0\" (UID: \"a164b940-476a-412d-aca9-4bf6b718d6c8\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.612148 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.624139 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.633121 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54974c8ff5-6tx6j" Oct 11 07:58:12 crc kubenswrapper[5016]: I1011 07:58:12.743696 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-s2g6b"] Oct 11 07:58:13 crc kubenswrapper[5016]: I1011 07:58:13.036642 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 11 07:58:13 crc kubenswrapper[5016]: W1011 07:58:13.048644 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1e6cbd3_8533_4cd3_8dca_47f0d616608c.slice/crio-5556b983f74427dd41f2de13dc8494834a18cd1eeae3266e17efa371a0f755c5 WatchSource:0}: Error finding container 5556b983f74427dd41f2de13dc8494834a18cd1eeae3266e17efa371a0f755c5: Status 404 returned error can't find the container with id 5556b983f74427dd41f2de13dc8494834a18cd1eeae3266e17efa371a0f755c5 Oct 11 07:58:13 crc kubenswrapper[5016]: I1011 07:58:13.048783 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-s2g6b" event={"ID":"59bbbd97-5192-4abe-bbe4-2a532e02a4e3","Type":"ContainerStarted","Data":"d93d92c8f65b951d505f6a2f912d669a2b5a3aad2c9f730def399628600aac63"} Oct 11 07:58:13 crc kubenswrapper[5016]: I1011 07:58:13.209701 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9jnrs"] Oct 11 07:58:13 crc kubenswrapper[5016]: I1011 07:58:13.212004 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9jnrs" Oct 11 07:58:13 crc kubenswrapper[5016]: I1011 07:58:13.214938 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Oct 11 07:58:13 crc kubenswrapper[5016]: I1011 07:58:13.214998 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 11 07:58:13 crc kubenswrapper[5016]: I1011 07:58:13.220874 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 07:58:13 crc kubenswrapper[5016]: I1011 07:58:13.221951 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23b43c3a-0890-454c-b2dc-79c2c29d1c3e-scripts\") pod \"nova-cell1-conductor-db-sync-9jnrs\" (UID: \"23b43c3a-0890-454c-b2dc-79c2c29d1c3e\") " pod="openstack/nova-cell1-conductor-db-sync-9jnrs" Oct 11 07:58:13 crc kubenswrapper[5016]: I1011 07:58:13.222184 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23b43c3a-0890-454c-b2dc-79c2c29d1c3e-config-data\") pod \"nova-cell1-conductor-db-sync-9jnrs\" (UID: \"23b43c3a-0890-454c-b2dc-79c2c29d1c3e\") " pod="openstack/nova-cell1-conductor-db-sync-9jnrs" Oct 11 07:58:13 crc kubenswrapper[5016]: I1011 07:58:13.222328 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23b43c3a-0890-454c-b2dc-79c2c29d1c3e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9jnrs\" (UID: \"23b43c3a-0890-454c-b2dc-79c2c29d1c3e\") " pod="openstack/nova-cell1-conductor-db-sync-9jnrs" Oct 11 07:58:13 crc kubenswrapper[5016]: I1011 07:58:13.222525 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kqtg\" (UniqueName: \"kubernetes.io/projected/23b43c3a-0890-454c-b2dc-79c2c29d1c3e-kube-api-access-5kqtg\") pod \"nova-cell1-conductor-db-sync-9jnrs\" (UID: \"23b43c3a-0890-454c-b2dc-79c2c29d1c3e\") " pod="openstack/nova-cell1-conductor-db-sync-9jnrs" Oct 11 07:58:13 crc kubenswrapper[5016]: W1011 07:58:13.225438 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9b52b80_9f41_400b_a0fa_9f8699c1a4e9.slice/crio-f66ace2440adf9e0bb470314f5869003463e8530cb06c91fe206800468c11e48 WatchSource:0}: Error finding container f66ace2440adf9e0bb470314f5869003463e8530cb06c91fe206800468c11e48: Status 404 returned error can't find the container with id f66ace2440adf9e0bb470314f5869003463e8530cb06c91fe206800468c11e48 Oct 11 07:58:13 crc kubenswrapper[5016]: I1011 07:58:13.233016 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9jnrs"] Oct 11 07:58:13 crc kubenswrapper[5016]: I1011 07:58:13.240829 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 11 07:58:13 crc kubenswrapper[5016]: I1011 07:58:13.323606 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kqtg\" (UniqueName: \"kubernetes.io/projected/23b43c3a-0890-454c-b2dc-79c2c29d1c3e-kube-api-access-5kqtg\") pod \"nova-cell1-conductor-db-sync-9jnrs\" (UID: \"23b43c3a-0890-454c-b2dc-79c2c29d1c3e\") " pod="openstack/nova-cell1-conductor-db-sync-9jnrs" Oct 11 07:58:13 crc 
kubenswrapper[5016]: I1011 07:58:13.323713 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23b43c3a-0890-454c-b2dc-79c2c29d1c3e-scripts\") pod \"nova-cell1-conductor-db-sync-9jnrs\" (UID: \"23b43c3a-0890-454c-b2dc-79c2c29d1c3e\") " pod="openstack/nova-cell1-conductor-db-sync-9jnrs" Oct 11 07:58:13 crc kubenswrapper[5016]: I1011 07:58:13.323765 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23b43c3a-0890-454c-b2dc-79c2c29d1c3e-config-data\") pod \"nova-cell1-conductor-db-sync-9jnrs\" (UID: \"23b43c3a-0890-454c-b2dc-79c2c29d1c3e\") " pod="openstack/nova-cell1-conductor-db-sync-9jnrs" Oct 11 07:58:13 crc kubenswrapper[5016]: I1011 07:58:13.323789 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23b43c3a-0890-454c-b2dc-79c2c29d1c3e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9jnrs\" (UID: \"23b43c3a-0890-454c-b2dc-79c2c29d1c3e\") " pod="openstack/nova-cell1-conductor-db-sync-9jnrs" Oct 11 07:58:13 crc kubenswrapper[5016]: I1011 07:58:13.324315 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 11 07:58:13 crc kubenswrapper[5016]: I1011 07:58:13.330036 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23b43c3a-0890-454c-b2dc-79c2c29d1c3e-config-data\") pod \"nova-cell1-conductor-db-sync-9jnrs\" (UID: \"23b43c3a-0890-454c-b2dc-79c2c29d1c3e\") " pod="openstack/nova-cell1-conductor-db-sync-9jnrs" Oct 11 07:58:13 crc kubenswrapper[5016]: I1011 07:58:13.331392 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23b43c3a-0890-454c-b2dc-79c2c29d1c3e-scripts\") pod \"nova-cell1-conductor-db-sync-9jnrs\" (UID: \"23b43c3a-0890-454c-b2dc-79c2c29d1c3e\") " pod="openstack/nova-cell1-conductor-db-sync-9jnrs" Oct 11 07:58:13 crc kubenswrapper[5016]: I1011 07:58:13.338099 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23b43c3a-0890-454c-b2dc-79c2c29d1c3e-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9jnrs\" (UID: \"23b43c3a-0890-454c-b2dc-79c2c29d1c3e\") " pod="openstack/nova-cell1-conductor-db-sync-9jnrs" Oct 11 07:58:13 crc kubenswrapper[5016]: I1011 07:58:13.341542 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kqtg\" (UniqueName: \"kubernetes.io/projected/23b43c3a-0890-454c-b2dc-79c2c29d1c3e-kube-api-access-5kqtg\") pod \"nova-cell1-conductor-db-sync-9jnrs\" (UID: \"23b43c3a-0890-454c-b2dc-79c2c29d1c3e\") " pod="openstack/nova-cell1-conductor-db-sync-9jnrs" Oct 11 07:58:13 crc kubenswrapper[5016]: I1011 07:58:13.407114 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54974c8ff5-6tx6j"] Oct 11 07:58:13 crc kubenswrapper[5016]: I1011 07:58:13.611402 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9jnrs" Oct 11 07:58:14 crc kubenswrapper[5016]: I1011 07:58:14.061680 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-s2g6b" event={"ID":"59bbbd97-5192-4abe-bbe4-2a532e02a4e3","Type":"ContainerStarted","Data":"d0a157291a9bc74ce499300a2180049d7206032cee0349251e70730565d18892"} Oct 11 07:58:14 crc kubenswrapper[5016]: I1011 07:58:14.063732 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a164b940-476a-412d-aca9-4bf6b718d6c8","Type":"ContainerStarted","Data":"938510cf3eae542aa8eb42f0e612bfa70bd8882c4b382931ea5b6d5e124919ee"} Oct 11 07:58:14 crc kubenswrapper[5016]: I1011 07:58:14.074394 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4970a407-4c60-4a51-9441-ae0f83326dc8","Type":"ContainerStarted","Data":"92e16e90a2947b7714713215e3bc69233aa6002003417a6dcb2aba73b0ee0b4f"} Oct 11 07:58:14 crc kubenswrapper[5016]: I1011 07:58:14.076927 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d9b52b80-9f41-400b-a0fa-9f8699c1a4e9","Type":"ContainerStarted","Data":"f66ace2440adf9e0bb470314f5869003463e8530cb06c91fe206800468c11e48"} Oct 11 07:58:14 crc kubenswrapper[5016]: I1011 07:58:14.084776 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f1e6cbd3-8533-4cd3-8dca-47f0d616608c","Type":"ContainerStarted","Data":"5556b983f74427dd41f2de13dc8494834a18cd1eeae3266e17efa371a0f755c5"} Oct 11 07:58:14 crc kubenswrapper[5016]: I1011 07:58:14.088593 5016 generic.go:334] "Generic (PLEG): container finished" podID="91e60a92-017b-4e6f-99c2-4afce0c72bbc" containerID="35f9d420a3fcb087863ca13dd927f6aaa7f50649dd5705bba5cc0950d13be888" exitCode=0 Oct 11 07:58:14 crc kubenswrapper[5016]: I1011 07:58:14.088629 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54974c8ff5-6tx6j" event={"ID":"91e60a92-017b-4e6f-99c2-4afce0c72bbc","Type":"ContainerDied","Data":"35f9d420a3fcb087863ca13dd927f6aaa7f50649dd5705bba5cc0950d13be888"} Oct 11 07:58:14 crc kubenswrapper[5016]: I1011 07:58:14.088726 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54974c8ff5-6tx6j" event={"ID":"91e60a92-017b-4e6f-99c2-4afce0c72bbc","Type":"ContainerStarted","Data":"c7a49ad00b30420bf1510374fb79f42d05b3fea8e350fcf403748267dadd0103"} Oct 11 07:58:14 crc kubenswrapper[5016]: I1011 07:58:14.090498 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-s2g6b" podStartSLOduration=3.090475661 podStartE2EDuration="3.090475661s" podCreationTimestamp="2025-10-11 07:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:58:14.084037825 +0000 UTC m=+1081.984493771" watchObservedRunningTime="2025-10-11 07:58:14.090475661 +0000 UTC m=+1081.990931607" Oct 11 07:58:14 crc kubenswrapper[5016]: I1011 07:58:14.148610 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9jnrs"] Oct 11 07:58:15 crc kubenswrapper[5016]: I1011 07:58:15.112344 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54974c8ff5-6tx6j" event={"ID":"91e60a92-017b-4e6f-99c2-4afce0c72bbc","Type":"ContainerStarted","Data":"95355a2b36e1160c440cd9d2867a20c4de6a5e805254b242dc824410ec2cb4f8"} Oct 11 
07:58:15 crc kubenswrapper[5016]: I1011 07:58:15.113373 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54974c8ff5-6tx6j" Oct 11 07:58:15 crc kubenswrapper[5016]: I1011 07:58:15.116834 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9jnrs" event={"ID":"23b43c3a-0890-454c-b2dc-79c2c29d1c3e","Type":"ContainerStarted","Data":"1a05dfa25d5a8b3052b0fbb1cccfecc502978d2201d3218abc365b91d53267ae"} Oct 11 07:58:15 crc kubenswrapper[5016]: I1011 07:58:15.132771 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54974c8ff5-6tx6j" podStartSLOduration=3.1327521799999998 podStartE2EDuration="3.13275218s" podCreationTimestamp="2025-10-11 07:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:58:15.130172994 +0000 UTC m=+1083.030628940" watchObservedRunningTime="2025-10-11 07:58:15.13275218 +0000 UTC m=+1083.033208126" Oct 11 07:58:15 crc kubenswrapper[5016]: I1011 07:58:15.853617 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 11 07:58:15 crc kubenswrapper[5016]: I1011 07:58:15.870347 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 11 07:58:16 crc kubenswrapper[5016]: I1011 07:58:16.128338 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a164b940-476a-412d-aca9-4bf6b718d6c8","Type":"ContainerStarted","Data":"c00af0dadf80771a39f4f37ea4041dc4bdb8131ca6f8cd7e7c56134e849167c8"} Oct 11 07:58:16 crc kubenswrapper[5016]: I1011 07:58:16.128518 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="a164b940-476a-412d-aca9-4bf6b718d6c8" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://c00af0dadf80771a39f4f37ea4041dc4bdb8131ca6f8cd7e7c56134e849167c8" gracePeriod=30 Oct 11 07:58:16 crc kubenswrapper[5016]: I1011 07:58:16.139901 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4970a407-4c60-4a51-9441-ae0f83326dc8","Type":"ContainerStarted","Data":"e07781268938984b0f72b5af143f0e1b78cacbd9817bda3b232920fdaed7db21"} Oct 11 07:58:16 crc kubenswrapper[5016]: I1011 07:58:16.155955 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d9b52b80-9f41-400b-a0fa-9f8699c1a4e9","Type":"ContainerStarted","Data":"685757c6a34cd7cd3336093fd18eb96384df826e81debc238fecd2d42f1a5fcb"} Oct 11 07:58:16 crc kubenswrapper[5016]: I1011 07:58:16.172838 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f1e6cbd3-8533-4cd3-8dca-47f0d616608c","Type":"ContainerStarted","Data":"7474524bf9085ab2077a7c937f6785b1af41d09f4b0f089baeaea009169d0795"} Oct 11 07:58:16 crc kubenswrapper[5016]: I1011 07:58:16.175720 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9jnrs" event={"ID":"23b43c3a-0890-454c-b2dc-79c2c29d1c3e","Type":"ContainerStarted","Data":"6a21814cd7da11ca07cd8725db7a8cf3724300c3e85dc780ee8b38a645d4acce"} Oct 11 07:58:16 crc kubenswrapper[5016]: I1011 07:58:16.177282 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.821811812 podStartE2EDuration="4.177269597s" 
podCreationTimestamp="2025-10-11 07:58:12 +0000 UTC" firstStartedPulling="2025-10-11 07:58:13.32887896 +0000 UTC m=+1081.229334936" lastFinishedPulling="2025-10-11 07:58:15.684336775 +0000 UTC m=+1083.584792721" observedRunningTime="2025-10-11 07:58:16.148105405 +0000 UTC m=+1084.048561351" watchObservedRunningTime="2025-10-11 07:58:16.177269597 +0000 UTC m=+1084.077725543" Oct 11 07:58:16 crc kubenswrapper[5016]: I1011 07:58:16.186819 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.721867963 podStartE2EDuration="4.186797422s" podCreationTimestamp="2025-10-11 07:58:12 +0000 UTC" firstStartedPulling="2025-10-11 07:58:13.226686834 +0000 UTC m=+1081.127142780" lastFinishedPulling="2025-10-11 07:58:15.691616293 +0000 UTC m=+1083.592072239" observedRunningTime="2025-10-11 07:58:16.173019318 +0000 UTC m=+1084.073475264" watchObservedRunningTime="2025-10-11 07:58:16.186797422 +0000 UTC m=+1084.087253368" Oct 11 07:58:16 crc kubenswrapper[5016]: I1011 07:58:16.196578 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-9jnrs" podStartSLOduration=3.196562404 podStartE2EDuration="3.196562404s" podCreationTimestamp="2025-10-11 07:58:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:58:16.192452029 +0000 UTC m=+1084.092907975" watchObservedRunningTime="2025-10-11 07:58:16.196562404 +0000 UTC m=+1084.097018350" Oct 11 07:58:17 crc kubenswrapper[5016]: I1011 07:58:17.187179 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4970a407-4c60-4a51-9441-ae0f83326dc8","Type":"ContainerStarted","Data":"764f088d1ab6601809584b166cffa482018c5bc41e2994133ae0ab14e4f51a29"} Oct 11 07:58:17 crc kubenswrapper[5016]: I1011 07:58:17.187708 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4970a407-4c60-4a51-9441-ae0f83326dc8" containerName="nova-metadata-log" containerID="cri-o://e07781268938984b0f72b5af143f0e1b78cacbd9817bda3b232920fdaed7db21" gracePeriod=30 Oct 11 07:58:17 crc kubenswrapper[5016]: I1011 07:58:17.188427 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4970a407-4c60-4a51-9441-ae0f83326dc8" containerName="nova-metadata-metadata" containerID="cri-o://764f088d1ab6601809584b166cffa482018c5bc41e2994133ae0ab14e4f51a29" gracePeriod=30 Oct 11 07:58:17 crc kubenswrapper[5016]: I1011 07:58:17.196187 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f1e6cbd3-8533-4cd3-8dca-47f0d616608c","Type":"ContainerStarted","Data":"cd2283546acdefe214fd3a94da11b1bd3c65710478adcdb13e2e78b9043496a9"} Oct 11 07:58:17 crc kubenswrapper[5016]: I1011 07:58:17.239612 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.607527024 podStartE2EDuration="6.239596373s" podCreationTimestamp="2025-10-11 07:58:11 +0000 UTC" firstStartedPulling="2025-10-11 07:58:13.052325228 +0000 UTC m=+1080.952781174" lastFinishedPulling="2025-10-11 07:58:15.684394567 +0000 UTC m=+1083.584850523" observedRunningTime="2025-10-11 07:58:17.237630443 +0000 UTC m=+1085.138086389" watchObservedRunningTime="2025-10-11 07:58:17.239596373 +0000 UTC m=+1085.140052319" Oct 11 07:58:17 crc kubenswrapper[5016]: I1011 07:58:17.245807 5016 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.750644106 podStartE2EDuration="5.245789223s" podCreationTimestamp="2025-10-11 07:58:12 +0000 UTC" firstStartedPulling="2025-10-11 07:58:13.224635042 +0000 UTC m=+1081.125090988" lastFinishedPulling="2025-10-11 07:58:15.719780159 +0000 UTC m=+1083.620236105" observedRunningTime="2025-10-11 07:58:17.221801215 +0000 UTC m=+1085.122257161" watchObservedRunningTime="2025-10-11 07:58:17.245789223 +0000 UTC m=+1085.146245169" Oct 11 07:58:17 crc kubenswrapper[5016]: E1011 07:58:17.465165 5016 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4970a407_4c60_4a51_9441_ae0f83326dc8.slice/crio-764f088d1ab6601809584b166cffa482018c5bc41e2994133ae0ab14e4f51a29.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4970a407_4c60_4a51_9441_ae0f83326dc8.slice/crio-conmon-764f088d1ab6601809584b166cffa482018c5bc41e2994133ae0ab14e4f51a29.scope\": RecentStats: unable to find data in memory cache]" Oct 11 07:58:17 crc kubenswrapper[5016]: I1011 07:58:17.471043 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 11 07:58:17 crc kubenswrapper[5016]: I1011 07:58:17.613642 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 11 07:58:17 crc kubenswrapper[5016]: I1011 07:58:17.613714 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 11 07:58:17 crc kubenswrapper[5016]: I1011 07:58:17.626949 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:17 crc kubenswrapper[5016]: I1011 07:58:17.767484 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 11 07:58:17 crc kubenswrapper[5016]: I1011 07:58:17.915854 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4970a407-4c60-4a51-9441-ae0f83326dc8-combined-ca-bundle\") pod \"4970a407-4c60-4a51-9441-ae0f83326dc8\" (UID: \"4970a407-4c60-4a51-9441-ae0f83326dc8\") " Oct 11 07:58:17 crc kubenswrapper[5016]: I1011 07:58:17.916316 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvr5m\" (UniqueName: \"kubernetes.io/projected/4970a407-4c60-4a51-9441-ae0f83326dc8-kube-api-access-zvr5m\") pod \"4970a407-4c60-4a51-9441-ae0f83326dc8\" (UID: \"4970a407-4c60-4a51-9441-ae0f83326dc8\") " Oct 11 07:58:17 crc kubenswrapper[5016]: I1011 07:58:17.916379 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4970a407-4c60-4a51-9441-ae0f83326dc8-logs\") pod \"4970a407-4c60-4a51-9441-ae0f83326dc8\" (UID: \"4970a407-4c60-4a51-9441-ae0f83326dc8\") " Oct 11 07:58:17 crc kubenswrapper[5016]: I1011 07:58:17.916404 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4970a407-4c60-4a51-9441-ae0f83326dc8-config-data\") pod \"4970a407-4c60-4a51-9441-ae0f83326dc8\" (UID: \"4970a407-4c60-4a51-9441-ae0f83326dc8\") " Oct 11 07:58:17 crc kubenswrapper[5016]: I1011 07:58:17.916824 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4970a407-4c60-4a51-9441-ae0f83326dc8-logs" (OuterVolumeSpecName: "logs") pod "4970a407-4c60-4a51-9441-ae0f83326dc8" (UID: "4970a407-4c60-4a51-9441-ae0f83326dc8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:58:17 crc kubenswrapper[5016]: I1011 07:58:17.934085 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4970a407-4c60-4a51-9441-ae0f83326dc8-kube-api-access-zvr5m" (OuterVolumeSpecName: "kube-api-access-zvr5m") pod "4970a407-4c60-4a51-9441-ae0f83326dc8" (UID: "4970a407-4c60-4a51-9441-ae0f83326dc8"). InnerVolumeSpecName "kube-api-access-zvr5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:58:17 crc kubenswrapper[5016]: I1011 07:58:17.941526 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4970a407-4c60-4a51-9441-ae0f83326dc8-config-data" (OuterVolumeSpecName: "config-data") pod "4970a407-4c60-4a51-9441-ae0f83326dc8" (UID: "4970a407-4c60-4a51-9441-ae0f83326dc8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:17 crc kubenswrapper[5016]: I1011 07:58:17.949043 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4970a407-4c60-4a51-9441-ae0f83326dc8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4970a407-4c60-4a51-9441-ae0f83326dc8" (UID: "4970a407-4c60-4a51-9441-ae0f83326dc8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.018418 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4970a407-4c60-4a51-9441-ae0f83326dc8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.018469 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvr5m\" (UniqueName: \"kubernetes.io/projected/4970a407-4c60-4a51-9441-ae0f83326dc8-kube-api-access-zvr5m\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.018489 5016 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4970a407-4c60-4a51-9441-ae0f83326dc8-logs\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.018507 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4970a407-4c60-4a51-9441-ae0f83326dc8-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.205559 5016 generic.go:334] "Generic (PLEG): container finished" podID="4970a407-4c60-4a51-9441-ae0f83326dc8" containerID="764f088d1ab6601809584b166cffa482018c5bc41e2994133ae0ab14e4f51a29" exitCode=0 Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.205600 5016 generic.go:334] "Generic (PLEG): container finished" podID="4970a407-4c60-4a51-9441-ae0f83326dc8" containerID="e07781268938984b0f72b5af143f0e1b78cacbd9817bda3b232920fdaed7db21" exitCode=143 Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.205643 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4970a407-4c60-4a51-9441-ae0f83326dc8","Type":"ContainerDied","Data":"764f088d1ab6601809584b166cffa482018c5bc41e2994133ae0ab14e4f51a29"} Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.205708 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4970a407-4c60-4a51-9441-ae0f83326dc8","Type":"ContainerDied","Data":"e07781268938984b0f72b5af143f0e1b78cacbd9817bda3b232920fdaed7db21"} Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.205724 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4970a407-4c60-4a51-9441-ae0f83326dc8","Type":"ContainerDied","Data":"92e16e90a2947b7714713215e3bc69233aa6002003417a6dcb2aba73b0ee0b4f"} Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.205728 5016 scope.go:117] "RemoveContainer" containerID="764f088d1ab6601809584b166cffa482018c5bc41e2994133ae0ab14e4f51a29" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.205625 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.256942 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.260483 5016 scope.go:117] "RemoveContainer" containerID="e07781268938984b0f72b5af143f0e1b78cacbd9817bda3b232920fdaed7db21" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.262001 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.287609 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 11 07:58:18 crc kubenswrapper[5016]: E1011 07:58:18.288173 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4970a407-4c60-4a51-9441-ae0f83326dc8" containerName="nova-metadata-log" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.288199 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="4970a407-4c60-4a51-9441-ae0f83326dc8" containerName="nova-metadata-log" Oct 11 07:58:18 crc kubenswrapper[5016]: E1011 07:58:18.288215 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4970a407-4c60-4a51-9441-ae0f83326dc8" containerName="nova-metadata-metadata" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.288224 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="4970a407-4c60-4a51-9441-ae0f83326dc8" containerName="nova-metadata-metadata" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.289898 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="4970a407-4c60-4a51-9441-ae0f83326dc8" containerName="nova-metadata-metadata" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.289926 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="4970a407-4c60-4a51-9441-ae0f83326dc8" containerName="nova-metadata-log" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.291907 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.303826 5016 scope.go:117] "RemoveContainer" containerID="764f088d1ab6601809584b166cffa482018c5bc41e2994133ae0ab14e4f51a29" Oct 11 07:58:18 crc kubenswrapper[5016]: E1011 07:58:18.304318 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"764f088d1ab6601809584b166cffa482018c5bc41e2994133ae0ab14e4f51a29\": container with ID starting with 764f088d1ab6601809584b166cffa482018c5bc41e2994133ae0ab14e4f51a29 not found: ID does not exist" containerID="764f088d1ab6601809584b166cffa482018c5bc41e2994133ae0ab14e4f51a29" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.304394 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"764f088d1ab6601809584b166cffa482018c5bc41e2994133ae0ab14e4f51a29"} err="failed to get container status \"764f088d1ab6601809584b166cffa482018c5bc41e2994133ae0ab14e4f51a29\": rpc error: code = NotFound desc = could not find container \"764f088d1ab6601809584b166cffa482018c5bc41e2994133ae0ab14e4f51a29\": container with ID starting with 764f088d1ab6601809584b166cffa482018c5bc41e2994133ae0ab14e4f51a29 not found: ID does not exist" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.304428 5016 scope.go:117] "RemoveContainer" containerID="e07781268938984b0f72b5af143f0e1b78cacbd9817bda3b232920fdaed7db21" Oct 11 07:58:18 crc kubenswrapper[5016]: E1011 07:58:18.304693 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e07781268938984b0f72b5af143f0e1b78cacbd9817bda3b232920fdaed7db21\": container with ID starting with e07781268938984b0f72b5af143f0e1b78cacbd9817bda3b232920fdaed7db21 not found: ID does not exist" containerID="e07781268938984b0f72b5af143f0e1b78cacbd9817bda3b232920fdaed7db21" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.304721 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e07781268938984b0f72b5af143f0e1b78cacbd9817bda3b232920fdaed7db21"} err="failed to get container status \"e07781268938984b0f72b5af143f0e1b78cacbd9817bda3b232920fdaed7db21\": rpc error: code = NotFound desc = could not find container \"e07781268938984b0f72b5af143f0e1b78cacbd9817bda3b232920fdaed7db21\": container with ID starting with e07781268938984b0f72b5af143f0e1b78cacbd9817bda3b232920fdaed7db21 not found: ID does not exist" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.304737 5016 scope.go:117] "RemoveContainer" containerID="764f088d1ab6601809584b166cffa482018c5bc41e2994133ae0ab14e4f51a29" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.304931 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"764f088d1ab6601809584b166cffa482018c5bc41e2994133ae0ab14e4f51a29"} err="failed to get container status \"764f088d1ab6601809584b166cffa482018c5bc41e2994133ae0ab14e4f51a29\": rpc error: code = NotFound desc = could not find container \"764f088d1ab6601809584b166cffa482018c5bc41e2994133ae0ab14e4f51a29\": container with ID starting with 764f088d1ab6601809584b166cffa482018c5bc41e2994133ae0ab14e4f51a29 not found: ID does not exist" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.304956 5016 scope.go:117] "RemoveContainer" containerID="e07781268938984b0f72b5af143f0e1b78cacbd9817bda3b232920fdaed7db21" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.305317 5016 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e07781268938984b0f72b5af143f0e1b78cacbd9817bda3b232920fdaed7db21"} err="failed to get container status \"e07781268938984b0f72b5af143f0e1b78cacbd9817bda3b232920fdaed7db21\": rpc error: code = NotFound desc = could not find container \"e07781268938984b0f72b5af143f0e1b78cacbd9817bda3b232920fdaed7db21\": container with ID starting with e07781268938984b0f72b5af143f0e1b78cacbd9817bda3b232920fdaed7db21 not found: ID does not exist" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.307902 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.308166 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.311418 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.425892 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d12d9caa-a2b3-4797-8412-7f72724c86c9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d12d9caa-a2b3-4797-8412-7f72724c86c9\") " pod="openstack/nova-metadata-0" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.425955 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgg27\" (UniqueName: \"kubernetes.io/projected/d12d9caa-a2b3-4797-8412-7f72724c86c9-kube-api-access-fgg27\") pod \"nova-metadata-0\" (UID: \"d12d9caa-a2b3-4797-8412-7f72724c86c9\") " pod="openstack/nova-metadata-0" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.426153 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d12d9caa-a2b3-4797-8412-7f72724c86c9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d12d9caa-a2b3-4797-8412-7f72724c86c9\") " pod="openstack/nova-metadata-0" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.426399 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d12d9caa-a2b3-4797-8412-7f72724c86c9-logs\") pod \"nova-metadata-0\" (UID: \"d12d9caa-a2b3-4797-8412-7f72724c86c9\") " pod="openstack/nova-metadata-0" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.426465 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d12d9caa-a2b3-4797-8412-7f72724c86c9-config-data\") pod \"nova-metadata-0\" (UID: \"d12d9caa-a2b3-4797-8412-7f72724c86c9\") " pod="openstack/nova-metadata-0" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.528197 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d12d9caa-a2b3-4797-8412-7f72724c86c9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d12d9caa-a2b3-4797-8412-7f72724c86c9\") " pod="openstack/nova-metadata-0" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.528271 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgg27\" (UniqueName: 
\"kubernetes.io/projected/d12d9caa-a2b3-4797-8412-7f72724c86c9-kube-api-access-fgg27\") pod \"nova-metadata-0\" (UID: \"d12d9caa-a2b3-4797-8412-7f72724c86c9\") " pod="openstack/nova-metadata-0" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.528622 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d12d9caa-a2b3-4797-8412-7f72724c86c9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d12d9caa-a2b3-4797-8412-7f72724c86c9\") " pod="openstack/nova-metadata-0" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.528716 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d12d9caa-a2b3-4797-8412-7f72724c86c9-logs\") pod \"nova-metadata-0\" (UID: \"d12d9caa-a2b3-4797-8412-7f72724c86c9\") " pod="openstack/nova-metadata-0" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.528745 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d12d9caa-a2b3-4797-8412-7f72724c86c9-config-data\") pod \"nova-metadata-0\" (UID: \"d12d9caa-a2b3-4797-8412-7f72724c86c9\") " pod="openstack/nova-metadata-0" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.529051 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d12d9caa-a2b3-4797-8412-7f72724c86c9-logs\") pod \"nova-metadata-0\" (UID: \"d12d9caa-a2b3-4797-8412-7f72724c86c9\") " pod="openstack/nova-metadata-0" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.533951 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d12d9caa-a2b3-4797-8412-7f72724c86c9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d12d9caa-a2b3-4797-8412-7f72724c86c9\") " pod="openstack/nova-metadata-0" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.540384 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d12d9caa-a2b3-4797-8412-7f72724c86c9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d12d9caa-a2b3-4797-8412-7f72724c86c9\") " pod="openstack/nova-metadata-0" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.541052 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d12d9caa-a2b3-4797-8412-7f72724c86c9-config-data\") pod \"nova-metadata-0\" (UID: \"d12d9caa-a2b3-4797-8412-7f72724c86c9\") " pod="openstack/nova-metadata-0" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.543909 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgg27\" (UniqueName: \"kubernetes.io/projected/d12d9caa-a2b3-4797-8412-7f72724c86c9-kube-api-access-fgg27\") pod \"nova-metadata-0\" (UID: \"d12d9caa-a2b3-4797-8412-7f72724c86c9\") " pod="openstack/nova-metadata-0" Oct 11 07:58:18 crc kubenswrapper[5016]: I1011 07:58:18.627239 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 11 07:58:19 crc kubenswrapper[5016]: I1011 07:58:19.080327 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 11 07:58:19 crc kubenswrapper[5016]: I1011 07:58:19.142172 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4970a407-4c60-4a51-9441-ae0f83326dc8" path="/var/lib/kubelet/pods/4970a407-4c60-4a51-9441-ae0f83326dc8/volumes" Oct 11 07:58:19 crc kubenswrapper[5016]: I1011 07:58:19.215350 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d12d9caa-a2b3-4797-8412-7f72724c86c9","Type":"ContainerStarted","Data":"35fc73c92a5edcd2ca9b195c41503619cff6db5d8c08bdd16d8c6322757130ea"} Oct 11 07:58:20 crc kubenswrapper[5016]: I1011 07:58:20.227885 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d12d9caa-a2b3-4797-8412-7f72724c86c9","Type":"ContainerStarted","Data":"77da394c8bd9286b284f673aeada348d71979200e25a6486fb6a36f99cde52f7"} Oct 11 07:58:20 crc kubenswrapper[5016]: I1011 07:58:20.228295 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d12d9caa-a2b3-4797-8412-7f72724c86c9","Type":"ContainerStarted","Data":"dbb3bcad9a2a9a31900e4c57e8ff3e65f13ac5744e129d74f12390dd2ac38a86"} Oct 11 07:58:20 crc kubenswrapper[5016]: I1011 07:58:20.254485 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.254463124 podStartE2EDuration="2.254463124s" podCreationTimestamp="2025-10-11 07:58:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:58:20.247905435 +0000 UTC m=+1088.148361401" watchObservedRunningTime="2025-10-11 07:58:20.254463124 +0000 UTC m=+1088.154919070" Oct 11 07:58:21 crc kubenswrapper[5016]: I1011 07:58:21.236812 5016 generic.go:334] "Generic (PLEG): container finished" podID="59bbbd97-5192-4abe-bbe4-2a532e02a4e3" containerID="d0a157291a9bc74ce499300a2180049d7206032cee0349251e70730565d18892" exitCode=0 Oct 11 07:58:21 crc kubenswrapper[5016]: I1011 07:58:21.237576 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-s2g6b" event={"ID":"59bbbd97-5192-4abe-bbe4-2a532e02a4e3","Type":"ContainerDied","Data":"d0a157291a9bc74ce499300a2180049d7206032cee0349251e70730565d18892"} Oct 11 07:58:22 crc kubenswrapper[5016]: I1011 07:58:22.296154 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 11 07:58:22 crc kubenswrapper[5016]: I1011 07:58:22.296434 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 11 07:58:22 crc kubenswrapper[5016]: I1011 07:58:22.472337 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 11 07:58:22 crc kubenswrapper[5016]: I1011 07:58:22.505415 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 11 07:58:22 crc kubenswrapper[5016]: I1011 07:58:22.635586 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54974c8ff5-6tx6j" Oct 11 07:58:22 crc kubenswrapper[5016]: I1011 07:58:22.699263 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-s2g6b" Oct 11 07:58:22 crc kubenswrapper[5016]: I1011 07:58:22.706608 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85494b87f-4xhlv"] Oct 11 07:58:22 crc kubenswrapper[5016]: I1011 07:58:22.706886 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85494b87f-4xhlv" podUID="3384fa61-3001-4106-ac87-67d3e3ca0513" containerName="dnsmasq-dns" containerID="cri-o://86fe37e0390a12f3480d1458d2577e102d98a2835f42f5673bd57934cd009a13" gracePeriod=10 Oct 11 07:58:22 crc kubenswrapper[5016]: I1011 07:58:22.810930 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59bbbd97-5192-4abe-bbe4-2a532e02a4e3-combined-ca-bundle\") pod \"59bbbd97-5192-4abe-bbe4-2a532e02a4e3\" (UID: \"59bbbd97-5192-4abe-bbe4-2a532e02a4e3\") " Oct 11 07:58:22 crc kubenswrapper[5016]: I1011 07:58:22.811051 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59bbbd97-5192-4abe-bbe4-2a532e02a4e3-scripts\") pod \"59bbbd97-5192-4abe-bbe4-2a532e02a4e3\" (UID: \"59bbbd97-5192-4abe-bbe4-2a532e02a4e3\") " Oct 11 07:58:22 crc kubenswrapper[5016]: I1011 07:58:22.811102 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59bbbd97-5192-4abe-bbe4-2a532e02a4e3-config-data\") pod \"59bbbd97-5192-4abe-bbe4-2a532e02a4e3\" (UID: \"59bbbd97-5192-4abe-bbe4-2a532e02a4e3\") " Oct 11 07:58:22 crc kubenswrapper[5016]: I1011 07:58:22.811183 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwsjk\" (UniqueName: \"kubernetes.io/projected/59bbbd97-5192-4abe-bbe4-2a532e02a4e3-kube-api-access-wwsjk\") pod \"59bbbd97-5192-4abe-bbe4-2a532e02a4e3\" (UID: \"59bbbd97-5192-4abe-bbe4-2a532e02a4e3\") " Oct 11 07:58:22 crc kubenswrapper[5016]: I1011 07:58:22.840567 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59bbbd97-5192-4abe-bbe4-2a532e02a4e3-kube-api-access-wwsjk" (OuterVolumeSpecName: "kube-api-access-wwsjk") pod "59bbbd97-5192-4abe-bbe4-2a532e02a4e3" (UID: "59bbbd97-5192-4abe-bbe4-2a532e02a4e3"). InnerVolumeSpecName "kube-api-access-wwsjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:58:22 crc kubenswrapper[5016]: I1011 07:58:22.844053 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59bbbd97-5192-4abe-bbe4-2a532e02a4e3-scripts" (OuterVolumeSpecName: "scripts") pod "59bbbd97-5192-4abe-bbe4-2a532e02a4e3" (UID: "59bbbd97-5192-4abe-bbe4-2a532e02a4e3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:22 crc kubenswrapper[5016]: I1011 07:58:22.845463 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59bbbd97-5192-4abe-bbe4-2a532e02a4e3-config-data" (OuterVolumeSpecName: "config-data") pod "59bbbd97-5192-4abe-bbe4-2a532e02a4e3" (UID: "59bbbd97-5192-4abe-bbe4-2a532e02a4e3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:22 crc kubenswrapper[5016]: I1011 07:58:22.852328 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59bbbd97-5192-4abe-bbe4-2a532e02a4e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59bbbd97-5192-4abe-bbe4-2a532e02a4e3" (UID: "59bbbd97-5192-4abe-bbe4-2a532e02a4e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:22 crc kubenswrapper[5016]: I1011 07:58:22.914551 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwsjk\" (UniqueName: \"kubernetes.io/projected/59bbbd97-5192-4abe-bbe4-2a532e02a4e3-kube-api-access-wwsjk\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:22 crc kubenswrapper[5016]: I1011 07:58:22.914584 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59bbbd97-5192-4abe-bbe4-2a532e02a4e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:22 crc kubenswrapper[5016]: I1011 07:58:22.914594 5016 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59bbbd97-5192-4abe-bbe4-2a532e02a4e3-scripts\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:22 crc kubenswrapper[5016]: I1011 07:58:22.914602 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59bbbd97-5192-4abe-bbe4-2a532e02a4e3-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.193587 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85494b87f-4xhlv" Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.255554 5016 generic.go:334] "Generic (PLEG): container finished" podID="23b43c3a-0890-454c-b2dc-79c2c29d1c3e" containerID="6a21814cd7da11ca07cd8725db7a8cf3724300c3e85dc780ee8b38a645d4acce" exitCode=0 Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.255641 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9jnrs" event={"ID":"23b43c3a-0890-454c-b2dc-79c2c29d1c3e","Type":"ContainerDied","Data":"6a21814cd7da11ca07cd8725db7a8cf3724300c3e85dc780ee8b38a645d4acce"} Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.257765 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-s2g6b" event={"ID":"59bbbd97-5192-4abe-bbe4-2a532e02a4e3","Type":"ContainerDied","Data":"d93d92c8f65b951d505f6a2f912d669a2b5a3aad2c9f730def399628600aac63"} Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.257814 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d93d92c8f65b951d505f6a2f912d669a2b5a3aad2c9f730def399628600aac63" Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.257869 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-s2g6b" Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.260729 5016 generic.go:334] "Generic (PLEG): container finished" podID="3384fa61-3001-4106-ac87-67d3e3ca0513" containerID="86fe37e0390a12f3480d1458d2577e102d98a2835f42f5673bd57934cd009a13" exitCode=0 Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.261393 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85494b87f-4xhlv" Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.261579 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85494b87f-4xhlv" event={"ID":"3384fa61-3001-4106-ac87-67d3e3ca0513","Type":"ContainerDied","Data":"86fe37e0390a12f3480d1458d2577e102d98a2835f42f5673bd57934cd009a13"} Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.261604 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85494b87f-4xhlv" event={"ID":"3384fa61-3001-4106-ac87-67d3e3ca0513","Type":"ContainerDied","Data":"845bc38304680e1559e83f75095c3ea81fa01aa2df8dd4cb9b9ccdfebbee27d2"} Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.261707 5016 scope.go:117] "RemoveContainer" containerID="86fe37e0390a12f3480d1458d2577e102d98a2835f42f5673bd57934cd009a13" Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.289800 5016 scope.go:117] "RemoveContainer" containerID="b5dbb870288f2d0063f31353d7ee2c9bbbd27f6ce6c5464dae4a36d7151fcd0f" Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.295692 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.310218 5016 scope.go:117] "RemoveContainer" containerID="86fe37e0390a12f3480d1458d2577e102d98a2835f42f5673bd57934cd009a13" Oct 11 07:58:23 crc kubenswrapper[5016]: E1011 07:58:23.310737 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86fe37e0390a12f3480d1458d2577e102d98a2835f42f5673bd57934cd009a13\": container with ID starting with 86fe37e0390a12f3480d1458d2577e102d98a2835f42f5673bd57934cd009a13 not found: ID does not exist" containerID="86fe37e0390a12f3480d1458d2577e102d98a2835f42f5673bd57934cd009a13" Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.310765 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86fe37e0390a12f3480d1458d2577e102d98a2835f42f5673bd57934cd009a13"} err="failed to get container status \"86fe37e0390a12f3480d1458d2577e102d98a2835f42f5673bd57934cd009a13\": rpc error: code = NotFound desc = could not find container \"86fe37e0390a12f3480d1458d2577e102d98a2835f42f5673bd57934cd009a13\": container with ID starting with 86fe37e0390a12f3480d1458d2577e102d98a2835f42f5673bd57934cd009a13 not found: ID does not exist" Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.310785 5016 scope.go:117] "RemoveContainer" containerID="b5dbb870288f2d0063f31353d7ee2c9bbbd27f6ce6c5464dae4a36d7151fcd0f" Oct 11 07:58:23 crc kubenswrapper[5016]: E1011 07:58:23.311019 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5dbb870288f2d0063f31353d7ee2c9bbbd27f6ce6c5464dae4a36d7151fcd0f\": container with ID starting with b5dbb870288f2d0063f31353d7ee2c9bbbd27f6ce6c5464dae4a36d7151fcd0f not found: ID does not exist" containerID="b5dbb870288f2d0063f31353d7ee2c9bbbd27f6ce6c5464dae4a36d7151fcd0f" Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.311036 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5dbb870288f2d0063f31353d7ee2c9bbbd27f6ce6c5464dae4a36d7151fcd0f"} err="failed to get container status \"b5dbb870288f2d0063f31353d7ee2c9bbbd27f6ce6c5464dae4a36d7151fcd0f\": rpc error: code = NotFound desc = could not find container 
\"b5dbb870288f2d0063f31353d7ee2c9bbbd27f6ce6c5464dae4a36d7151fcd0f\": container with ID starting with b5dbb870288f2d0063f31353d7ee2c9bbbd27f6ce6c5464dae4a36d7151fcd0f not found: ID does not exist" Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.320758 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3384fa61-3001-4106-ac87-67d3e3ca0513-ovsdbserver-nb\") pod \"3384fa61-3001-4106-ac87-67d3e3ca0513\" (UID: \"3384fa61-3001-4106-ac87-67d3e3ca0513\") " Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.320886 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x778h\" (UniqueName: \"kubernetes.io/projected/3384fa61-3001-4106-ac87-67d3e3ca0513-kube-api-access-x778h\") pod \"3384fa61-3001-4106-ac87-67d3e3ca0513\" (UID: \"3384fa61-3001-4106-ac87-67d3e3ca0513\") " Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.320965 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3384fa61-3001-4106-ac87-67d3e3ca0513-config\") pod \"3384fa61-3001-4106-ac87-67d3e3ca0513\" (UID: \"3384fa61-3001-4106-ac87-67d3e3ca0513\") " Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.321012 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3384fa61-3001-4106-ac87-67d3e3ca0513-ovsdbserver-sb\") pod \"3384fa61-3001-4106-ac87-67d3e3ca0513\" (UID: \"3384fa61-3001-4106-ac87-67d3e3ca0513\") " Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.321059 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3384fa61-3001-4106-ac87-67d3e3ca0513-dns-svc\") pod \"3384fa61-3001-4106-ac87-67d3e3ca0513\" (UID: \"3384fa61-3001-4106-ac87-67d3e3ca0513\") " Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.326460 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3384fa61-3001-4106-ac87-67d3e3ca0513-kube-api-access-x778h" (OuterVolumeSpecName: "kube-api-access-x778h") pod "3384fa61-3001-4106-ac87-67d3e3ca0513" (UID: "3384fa61-3001-4106-ac87-67d3e3ca0513"). InnerVolumeSpecName "kube-api-access-x778h". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.362746 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3384fa61-3001-4106-ac87-67d3e3ca0513-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3384fa61-3001-4106-ac87-67d3e3ca0513" (UID: "3384fa61-3001-4106-ac87-67d3e3ca0513"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.363265 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3384fa61-3001-4106-ac87-67d3e3ca0513-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3384fa61-3001-4106-ac87-67d3e3ca0513" (UID: "3384fa61-3001-4106-ac87-67d3e3ca0513"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.369687 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3384fa61-3001-4106-ac87-67d3e3ca0513-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3384fa61-3001-4106-ac87-67d3e3ca0513" (UID: "3384fa61-3001-4106-ac87-67d3e3ca0513"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.372594 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3384fa61-3001-4106-ac87-67d3e3ca0513-config" (OuterVolumeSpecName: "config") pod "3384fa61-3001-4106-ac87-67d3e3ca0513" (UID: "3384fa61-3001-4106-ac87-67d3e3ca0513"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.378871 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f1e6cbd3-8533-4cd3-8dca-47f0d616608c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.175:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.379162 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f1e6cbd3-8533-4cd3-8dca-47f0d616608c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.175:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.423854 5016 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3384fa61-3001-4106-ac87-67d3e3ca0513-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.423884 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x778h\" (UniqueName: \"kubernetes.io/projected/3384fa61-3001-4106-ac87-67d3e3ca0513-kube-api-access-x778h\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.423896 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3384fa61-3001-4106-ac87-67d3e3ca0513-config\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.423904 5016 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3384fa61-3001-4106-ac87-67d3e3ca0513-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.423912 5016 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3384fa61-3001-4106-ac87-67d3e3ca0513-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.430868 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.431274 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f1e6cbd3-8533-4cd3-8dca-47f0d616608c" containerName="nova-api-log" containerID="cri-o://7474524bf9085ab2077a7c937f6785b1af41d09f4b0f089baeaea009169d0795" gracePeriod=30 Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.431394 5016 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/nova-api-0" podUID="f1e6cbd3-8533-4cd3-8dca-47f0d616608c" containerName="nova-api-api" containerID="cri-o://cd2283546acdefe214fd3a94da11b1bd3c65710478adcdb13e2e78b9043496a9" gracePeriod=30 Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.472486 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.472717 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d12d9caa-a2b3-4797-8412-7f72724c86c9" containerName="nova-metadata-log" containerID="cri-o://dbb3bcad9a2a9a31900e4c57e8ff3e65f13ac5744e129d74f12390dd2ac38a86" gracePeriod=30 Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.472844 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d12d9caa-a2b3-4797-8412-7f72724c86c9" containerName="nova-metadata-metadata" containerID="cri-o://77da394c8bd9286b284f673aeada348d71979200e25a6486fb6a36f99cde52f7" gracePeriod=30 Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.606011 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85494b87f-4xhlv"] Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.618476 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85494b87f-4xhlv"] Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.628625 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.628702 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 11 07:58:23 crc kubenswrapper[5016]: I1011 07:58:23.722206 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.120751 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.263068 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d12d9caa-a2b3-4797-8412-7f72724c86c9-config-data\") pod \"d12d9caa-a2b3-4797-8412-7f72724c86c9\" (UID: \"d12d9caa-a2b3-4797-8412-7f72724c86c9\") " Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.263149 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d12d9caa-a2b3-4797-8412-7f72724c86c9-nova-metadata-tls-certs\") pod \"d12d9caa-a2b3-4797-8412-7f72724c86c9\" (UID: \"d12d9caa-a2b3-4797-8412-7f72724c86c9\") " Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.263272 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d12d9caa-a2b3-4797-8412-7f72724c86c9-combined-ca-bundle\") pod \"d12d9caa-a2b3-4797-8412-7f72724c86c9\" (UID: \"d12d9caa-a2b3-4797-8412-7f72724c86c9\") " Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.263378 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d12d9caa-a2b3-4797-8412-7f72724c86c9-logs\") pod \"d12d9caa-a2b3-4797-8412-7f72724c86c9\" (UID: \"d12d9caa-a2b3-4797-8412-7f72724c86c9\") " Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.263436 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgg27\" (UniqueName: \"kubernetes.io/projected/d12d9caa-a2b3-4797-8412-7f72724c86c9-kube-api-access-fgg27\") pod \"d12d9caa-a2b3-4797-8412-7f72724c86c9\" (UID: \"d12d9caa-a2b3-4797-8412-7f72724c86c9\") " Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.265274 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d12d9caa-a2b3-4797-8412-7f72724c86c9-logs" (OuterVolumeSpecName: "logs") pod "d12d9caa-a2b3-4797-8412-7f72724c86c9" (UID: "d12d9caa-a2b3-4797-8412-7f72724c86c9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.289447 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d12d9caa-a2b3-4797-8412-7f72724c86c9-kube-api-access-fgg27" (OuterVolumeSpecName: "kube-api-access-fgg27") pod "d12d9caa-a2b3-4797-8412-7f72724c86c9" (UID: "d12d9caa-a2b3-4797-8412-7f72724c86c9"). InnerVolumeSpecName "kube-api-access-fgg27". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.289728 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.289760 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d12d9caa-a2b3-4797-8412-7f72724c86c9","Type":"ContainerDied","Data":"77da394c8bd9286b284f673aeada348d71979200e25a6486fb6a36f99cde52f7"} Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.289875 5016 scope.go:117] "RemoveContainer" containerID="77da394c8bd9286b284f673aeada348d71979200e25a6486fb6a36f99cde52f7" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.289704 5016 generic.go:334] "Generic (PLEG): container finished" podID="d12d9caa-a2b3-4797-8412-7f72724c86c9" containerID="77da394c8bd9286b284f673aeada348d71979200e25a6486fb6a36f99cde52f7" exitCode=0 Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.290569 5016 generic.go:334] "Generic (PLEG): container finished" podID="d12d9caa-a2b3-4797-8412-7f72724c86c9" containerID="dbb3bcad9a2a9a31900e4c57e8ff3e65f13ac5744e129d74f12390dd2ac38a86" exitCode=143 Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.290841 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d12d9caa-a2b3-4797-8412-7f72724c86c9","Type":"ContainerDied","Data":"dbb3bcad9a2a9a31900e4c57e8ff3e65f13ac5744e129d74f12390dd2ac38a86"} Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.290905 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d12d9caa-a2b3-4797-8412-7f72724c86c9","Type":"ContainerDied","Data":"35fc73c92a5edcd2ca9b195c41503619cff6db5d8c08bdd16d8c6322757130ea"} Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.304895 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d12d9caa-a2b3-4797-8412-7f72724c86c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d12d9caa-a2b3-4797-8412-7f72724c86c9" (UID: "d12d9caa-a2b3-4797-8412-7f72724c86c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.306740 5016 generic.go:334] "Generic (PLEG): container finished" podID="f1e6cbd3-8533-4cd3-8dca-47f0d616608c" containerID="7474524bf9085ab2077a7c937f6785b1af41d09f4b0f089baeaea009169d0795" exitCode=143 Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.306920 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f1e6cbd3-8533-4cd3-8dca-47f0d616608c","Type":"ContainerDied","Data":"7474524bf9085ab2077a7c937f6785b1af41d09f4b0f089baeaea009169d0795"} Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.329712 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d12d9caa-a2b3-4797-8412-7f72724c86c9-config-data" (OuterVolumeSpecName: "config-data") pod "d12d9caa-a2b3-4797-8412-7f72724c86c9" (UID: "d12d9caa-a2b3-4797-8412-7f72724c86c9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.340519 5016 scope.go:117] "RemoveContainer" containerID="dbb3bcad9a2a9a31900e4c57e8ff3e65f13ac5744e129d74f12390dd2ac38a86" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.344149 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d12d9caa-a2b3-4797-8412-7f72724c86c9-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d12d9caa-a2b3-4797-8412-7f72724c86c9" (UID: "d12d9caa-a2b3-4797-8412-7f72724c86c9"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.358882 5016 scope.go:117] "RemoveContainer" containerID="77da394c8bd9286b284f673aeada348d71979200e25a6486fb6a36f99cde52f7" Oct 11 07:58:24 crc kubenswrapper[5016]: E1011 07:58:24.359234 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77da394c8bd9286b284f673aeada348d71979200e25a6486fb6a36f99cde52f7\": container with ID starting with 77da394c8bd9286b284f673aeada348d71979200e25a6486fb6a36f99cde52f7 not found: ID does not exist" containerID="77da394c8bd9286b284f673aeada348d71979200e25a6486fb6a36f99cde52f7" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.359282 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77da394c8bd9286b284f673aeada348d71979200e25a6486fb6a36f99cde52f7"} err="failed to get container status \"77da394c8bd9286b284f673aeada348d71979200e25a6486fb6a36f99cde52f7\": rpc error: code = NotFound desc = could not find container \"77da394c8bd9286b284f673aeada348d71979200e25a6486fb6a36f99cde52f7\": container with ID starting with 77da394c8bd9286b284f673aeada348d71979200e25a6486fb6a36f99cde52f7 not found: ID does not exist" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.359311 5016 scope.go:117] "RemoveContainer" containerID="dbb3bcad9a2a9a31900e4c57e8ff3e65f13ac5744e129d74f12390dd2ac38a86" Oct 11 07:58:24 crc kubenswrapper[5016]: E1011 07:58:24.359804 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbb3bcad9a2a9a31900e4c57e8ff3e65f13ac5744e129d74f12390dd2ac38a86\": container with ID starting with dbb3bcad9a2a9a31900e4c57e8ff3e65f13ac5744e129d74f12390dd2ac38a86 not found: ID does not exist" containerID="dbb3bcad9a2a9a31900e4c57e8ff3e65f13ac5744e129d74f12390dd2ac38a86" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.359829 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbb3bcad9a2a9a31900e4c57e8ff3e65f13ac5744e129d74f12390dd2ac38a86"} err="failed to get container status \"dbb3bcad9a2a9a31900e4c57e8ff3e65f13ac5744e129d74f12390dd2ac38a86\": rpc error: code = NotFound desc = could not find container \"dbb3bcad9a2a9a31900e4c57e8ff3e65f13ac5744e129d74f12390dd2ac38a86\": container with ID starting with dbb3bcad9a2a9a31900e4c57e8ff3e65f13ac5744e129d74f12390dd2ac38a86 not found: ID does not exist" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.359849 5016 scope.go:117] "RemoveContainer" containerID="77da394c8bd9286b284f673aeada348d71979200e25a6486fb6a36f99cde52f7" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.360099 5016 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"77da394c8bd9286b284f673aeada348d71979200e25a6486fb6a36f99cde52f7"} err="failed to get container status \"77da394c8bd9286b284f673aeada348d71979200e25a6486fb6a36f99cde52f7\": rpc error: code = NotFound desc = could not find container \"77da394c8bd9286b284f673aeada348d71979200e25a6486fb6a36f99cde52f7\": container with ID starting with 77da394c8bd9286b284f673aeada348d71979200e25a6486fb6a36f99cde52f7 not found: ID does not exist" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.360118 5016 scope.go:117] "RemoveContainer" containerID="dbb3bcad9a2a9a31900e4c57e8ff3e65f13ac5744e129d74f12390dd2ac38a86" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.360440 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbb3bcad9a2a9a31900e4c57e8ff3e65f13ac5744e129d74f12390dd2ac38a86"} err="failed to get container status \"dbb3bcad9a2a9a31900e4c57e8ff3e65f13ac5744e129d74f12390dd2ac38a86\": rpc error: code = NotFound desc = could not find container \"dbb3bcad9a2a9a31900e4c57e8ff3e65f13ac5744e129d74f12390dd2ac38a86\": container with ID starting with dbb3bcad9a2a9a31900e4c57e8ff3e65f13ac5744e129d74f12390dd2ac38a86 not found: ID does not exist" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.366110 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d12d9caa-a2b3-4797-8412-7f72724c86c9-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.366136 5016 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d12d9caa-a2b3-4797-8412-7f72724c86c9-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.366146 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d12d9caa-a2b3-4797-8412-7f72724c86c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.366156 5016 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d12d9caa-a2b3-4797-8412-7f72724c86c9-logs\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.366165 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgg27\" (UniqueName: \"kubernetes.io/projected/d12d9caa-a2b3-4797-8412-7f72724c86c9-kube-api-access-fgg27\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.587951 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9jnrs" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.671300 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23b43c3a-0890-454c-b2dc-79c2c29d1c3e-config-data\") pod \"23b43c3a-0890-454c-b2dc-79c2c29d1c3e\" (UID: \"23b43c3a-0890-454c-b2dc-79c2c29d1c3e\") " Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.671772 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23b43c3a-0890-454c-b2dc-79c2c29d1c3e-scripts\") pod \"23b43c3a-0890-454c-b2dc-79c2c29d1c3e\" (UID: \"23b43c3a-0890-454c-b2dc-79c2c29d1c3e\") " Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.671831 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23b43c3a-0890-454c-b2dc-79c2c29d1c3e-combined-ca-bundle\") pod \"23b43c3a-0890-454c-b2dc-79c2c29d1c3e\" (UID: \"23b43c3a-0890-454c-b2dc-79c2c29d1c3e\") " Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.671853 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kqtg\" (UniqueName: \"kubernetes.io/projected/23b43c3a-0890-454c-b2dc-79c2c29d1c3e-kube-api-access-5kqtg\") pod \"23b43c3a-0890-454c-b2dc-79c2c29d1c3e\" (UID: \"23b43c3a-0890-454c-b2dc-79c2c29d1c3e\") " Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.673560 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.685647 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23b43c3a-0890-454c-b2dc-79c2c29d1c3e-scripts" (OuterVolumeSpecName: "scripts") pod "23b43c3a-0890-454c-b2dc-79c2c29d1c3e" (UID: "23b43c3a-0890-454c-b2dc-79c2c29d1c3e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.686487 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23b43c3a-0890-454c-b2dc-79c2c29d1c3e-kube-api-access-5kqtg" (OuterVolumeSpecName: "kube-api-access-5kqtg") pod "23b43c3a-0890-454c-b2dc-79c2c29d1c3e" (UID: "23b43c3a-0890-454c-b2dc-79c2c29d1c3e"). InnerVolumeSpecName "kube-api-access-5kqtg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.716733 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.719419 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23b43c3a-0890-454c-b2dc-79c2c29d1c3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23b43c3a-0890-454c-b2dc-79c2c29d1c3e" (UID: "23b43c3a-0890-454c-b2dc-79c2c29d1c3e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.747665 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 11 07:58:24 crc kubenswrapper[5016]: E1011 07:58:24.748110 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d12d9caa-a2b3-4797-8412-7f72724c86c9" containerName="nova-metadata-log" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.748127 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="d12d9caa-a2b3-4797-8412-7f72724c86c9" containerName="nova-metadata-log" Oct 11 07:58:24 crc kubenswrapper[5016]: E1011 07:58:24.748143 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3384fa61-3001-4106-ac87-67d3e3ca0513" containerName="init" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.748150 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="3384fa61-3001-4106-ac87-67d3e3ca0513" containerName="init" Oct 11 07:58:24 crc kubenswrapper[5016]: E1011 07:58:24.748169 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3384fa61-3001-4106-ac87-67d3e3ca0513" containerName="dnsmasq-dns" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.748177 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="3384fa61-3001-4106-ac87-67d3e3ca0513" containerName="dnsmasq-dns" Oct 11 07:58:24 crc kubenswrapper[5016]: E1011 07:58:24.748194 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59bbbd97-5192-4abe-bbe4-2a532e02a4e3" containerName="nova-manage" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.748201 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="59bbbd97-5192-4abe-bbe4-2a532e02a4e3" containerName="nova-manage" Oct 11 07:58:24 crc kubenswrapper[5016]: E1011 07:58:24.748216 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d12d9caa-a2b3-4797-8412-7f72724c86c9" containerName="nova-metadata-metadata" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.748224 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="d12d9caa-a2b3-4797-8412-7f72724c86c9" containerName="nova-metadata-metadata" Oct 11 07:58:24 crc kubenswrapper[5016]: E1011 07:58:24.748245 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b43c3a-0890-454c-b2dc-79c2c29d1c3e" containerName="nova-cell1-conductor-db-sync" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.748252 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b43c3a-0890-454c-b2dc-79c2c29d1c3e" containerName="nova-cell1-conductor-db-sync" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.748458 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="d12d9caa-a2b3-4797-8412-7f72724c86c9" containerName="nova-metadata-metadata" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.748476 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="d12d9caa-a2b3-4797-8412-7f72724c86c9" containerName="nova-metadata-log" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.748488 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="3384fa61-3001-4106-ac87-67d3e3ca0513" containerName="dnsmasq-dns" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.748503 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="23b43c3a-0890-454c-b2dc-79c2c29d1c3e" containerName="nova-cell1-conductor-db-sync" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.748519 5016 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="59bbbd97-5192-4abe-bbe4-2a532e02a4e3" containerName="nova-manage" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.749616 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.753046 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.753097 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.774844 5016 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23b43c3a-0890-454c-b2dc-79c2c29d1c3e-scripts\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.774879 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23b43c3a-0890-454c-b2dc-79c2c29d1c3e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.774892 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kqtg\" (UniqueName: \"kubernetes.io/projected/23b43c3a-0890-454c-b2dc-79c2c29d1c3e-kube-api-access-5kqtg\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.775786 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23b43c3a-0890-454c-b2dc-79c2c29d1c3e-config-data" (OuterVolumeSpecName: "config-data") pod "23b43c3a-0890-454c-b2dc-79c2c29d1c3e" (UID: "23b43c3a-0890-454c-b2dc-79c2c29d1c3e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.791719 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.876773 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/862c465e-619c-4fed-adf2-fe7d93b46937-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"862c465e-619c-4fed-adf2-fe7d93b46937\") " pod="openstack/nova-metadata-0" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.876914 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862c465e-619c-4fed-adf2-fe7d93b46937-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"862c465e-619c-4fed-adf2-fe7d93b46937\") " pod="openstack/nova-metadata-0" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.876946 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/862c465e-619c-4fed-adf2-fe7d93b46937-logs\") pod \"nova-metadata-0\" (UID: \"862c465e-619c-4fed-adf2-fe7d93b46937\") " pod="openstack/nova-metadata-0" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.876970 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgfnv\" (UniqueName: \"kubernetes.io/projected/862c465e-619c-4fed-adf2-fe7d93b46937-kube-api-access-qgfnv\") pod \"nova-metadata-0\" (UID: \"862c465e-619c-4fed-adf2-fe7d93b46937\") " pod="openstack/nova-metadata-0" Oct 11 07:58:24 crc 
kubenswrapper[5016]: I1011 07:58:24.876990 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/862c465e-619c-4fed-adf2-fe7d93b46937-config-data\") pod \"nova-metadata-0\" (UID: \"862c465e-619c-4fed-adf2-fe7d93b46937\") " pod="openstack/nova-metadata-0" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.877043 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23b43c3a-0890-454c-b2dc-79c2c29d1c3e-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.978547 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862c465e-619c-4fed-adf2-fe7d93b46937-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"862c465e-619c-4fed-adf2-fe7d93b46937\") " pod="openstack/nova-metadata-0" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.978588 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/862c465e-619c-4fed-adf2-fe7d93b46937-logs\") pod \"nova-metadata-0\" (UID: \"862c465e-619c-4fed-adf2-fe7d93b46937\") " pod="openstack/nova-metadata-0" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.978617 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgfnv\" (UniqueName: \"kubernetes.io/projected/862c465e-619c-4fed-adf2-fe7d93b46937-kube-api-access-qgfnv\") pod \"nova-metadata-0\" (UID: \"862c465e-619c-4fed-adf2-fe7d93b46937\") " pod="openstack/nova-metadata-0" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.978650 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/862c465e-619c-4fed-adf2-fe7d93b46937-config-data\") pod \"nova-metadata-0\" (UID: \"862c465e-619c-4fed-adf2-fe7d93b46937\") " pod="openstack/nova-metadata-0" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.978716 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/862c465e-619c-4fed-adf2-fe7d93b46937-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"862c465e-619c-4fed-adf2-fe7d93b46937\") " pod="openstack/nova-metadata-0" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.979426 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/862c465e-619c-4fed-adf2-fe7d93b46937-logs\") pod \"nova-metadata-0\" (UID: \"862c465e-619c-4fed-adf2-fe7d93b46937\") " pod="openstack/nova-metadata-0" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.981864 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/862c465e-619c-4fed-adf2-fe7d93b46937-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"862c465e-619c-4fed-adf2-fe7d93b46937\") " pod="openstack/nova-metadata-0" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.982418 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862c465e-619c-4fed-adf2-fe7d93b46937-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"862c465e-619c-4fed-adf2-fe7d93b46937\") " pod="openstack/nova-metadata-0" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 
07:58:24.983279 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/862c465e-619c-4fed-adf2-fe7d93b46937-config-data\") pod \"nova-metadata-0\" (UID: \"862c465e-619c-4fed-adf2-fe7d93b46937\") " pod="openstack/nova-metadata-0" Oct 11 07:58:24 crc kubenswrapper[5016]: I1011 07:58:24.996219 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgfnv\" (UniqueName: \"kubernetes.io/projected/862c465e-619c-4fed-adf2-fe7d93b46937-kube-api-access-qgfnv\") pod \"nova-metadata-0\" (UID: \"862c465e-619c-4fed-adf2-fe7d93b46937\") " pod="openstack/nova-metadata-0" Oct 11 07:58:25 crc kubenswrapper[5016]: I1011 07:58:25.143375 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 11 07:58:25 crc kubenswrapper[5016]: I1011 07:58:25.161148 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3384fa61-3001-4106-ac87-67d3e3ca0513" path="/var/lib/kubelet/pods/3384fa61-3001-4106-ac87-67d3e3ca0513/volumes" Oct 11 07:58:25 crc kubenswrapper[5016]: I1011 07:58:25.162702 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d12d9caa-a2b3-4797-8412-7f72724c86c9" path="/var/lib/kubelet/pods/d12d9caa-a2b3-4797-8412-7f72724c86c9/volumes" Oct 11 07:58:25 crc kubenswrapper[5016]: I1011 07:58:25.321322 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9jnrs" event={"ID":"23b43c3a-0890-454c-b2dc-79c2c29d1c3e","Type":"ContainerDied","Data":"1a05dfa25d5a8b3052b0fbb1cccfecc502978d2201d3218abc365b91d53267ae"} Oct 11 07:58:25 crc kubenswrapper[5016]: I1011 07:58:25.321692 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a05dfa25d5a8b3052b0fbb1cccfecc502978d2201d3218abc365b91d53267ae" Oct 11 07:58:25 crc kubenswrapper[5016]: I1011 07:58:25.321767 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9jnrs" Oct 11 07:58:25 crc kubenswrapper[5016]: I1011 07:58:25.327411 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d9b52b80-9f41-400b-a0fa-9f8699c1a4e9" containerName="nova-scheduler-scheduler" containerID="cri-o://685757c6a34cd7cd3336093fd18eb96384df826e81debc238fecd2d42f1a5fcb" gracePeriod=30 Oct 11 07:58:25 crc kubenswrapper[5016]: I1011 07:58:25.356469 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 11 07:58:25 crc kubenswrapper[5016]: I1011 07:58:25.357722 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 11 07:58:25 crc kubenswrapper[5016]: I1011 07:58:25.360035 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 11 07:58:25 crc kubenswrapper[5016]: I1011 07:58:25.367159 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 11 07:58:25 crc kubenswrapper[5016]: I1011 07:58:25.500209 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltxfc\" (UniqueName: \"kubernetes.io/projected/83362aa1-9b92-4fb8-8ade-5ba3476c53d0-kube-api-access-ltxfc\") pod \"nova-cell1-conductor-0\" (UID: \"83362aa1-9b92-4fb8-8ade-5ba3476c53d0\") " pod="openstack/nova-cell1-conductor-0" Oct 11 07:58:25 crc kubenswrapper[5016]: I1011 07:58:25.500321 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83362aa1-9b92-4fb8-8ade-5ba3476c53d0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"83362aa1-9b92-4fb8-8ade-5ba3476c53d0\") " pod="openstack/nova-cell1-conductor-0" Oct 11 07:58:25 crc kubenswrapper[5016]: I1011 07:58:25.500382 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83362aa1-9b92-4fb8-8ade-5ba3476c53d0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"83362aa1-9b92-4fb8-8ade-5ba3476c53d0\") " pod="openstack/nova-cell1-conductor-0" Oct 11 07:58:25 crc kubenswrapper[5016]: I1011 07:58:25.604894 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83362aa1-9b92-4fb8-8ade-5ba3476c53d0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"83362aa1-9b92-4fb8-8ade-5ba3476c53d0\") " pod="openstack/nova-cell1-conductor-0" Oct 11 07:58:25 crc kubenswrapper[5016]: I1011 07:58:25.604995 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83362aa1-9b92-4fb8-8ade-5ba3476c53d0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"83362aa1-9b92-4fb8-8ade-5ba3476c53d0\") " pod="openstack/nova-cell1-conductor-0" Oct 11 07:58:25 crc kubenswrapper[5016]: I1011 07:58:25.605067 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltxfc\" (UniqueName: \"kubernetes.io/projected/83362aa1-9b92-4fb8-8ade-5ba3476c53d0-kube-api-access-ltxfc\") pod \"nova-cell1-conductor-0\" (UID: \"83362aa1-9b92-4fb8-8ade-5ba3476c53d0\") " pod="openstack/nova-cell1-conductor-0" Oct 11 07:58:25 crc kubenswrapper[5016]: I1011 07:58:25.612279 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83362aa1-9b92-4fb8-8ade-5ba3476c53d0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"83362aa1-9b92-4fb8-8ade-5ba3476c53d0\") " pod="openstack/nova-cell1-conductor-0" Oct 11 07:58:25 crc kubenswrapper[5016]: I1011 07:58:25.614391 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83362aa1-9b92-4fb8-8ade-5ba3476c53d0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"83362aa1-9b92-4fb8-8ade-5ba3476c53d0\") " pod="openstack/nova-cell1-conductor-0" Oct 11 07:58:25 crc kubenswrapper[5016]: I1011 07:58:25.629599 5016 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltxfc\" (UniqueName: \"kubernetes.io/projected/83362aa1-9b92-4fb8-8ade-5ba3476c53d0-kube-api-access-ltxfc\") pod \"nova-cell1-conductor-0\" (UID: \"83362aa1-9b92-4fb8-8ade-5ba3476c53d0\") " pod="openstack/nova-cell1-conductor-0" Oct 11 07:58:25 crc kubenswrapper[5016]: I1011 07:58:25.646326 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 11 07:58:25 crc kubenswrapper[5016]: I1011 07:58:25.682701 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 11 07:58:26 crc kubenswrapper[5016]: I1011 07:58:26.113727 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 11 07:58:26 crc kubenswrapper[5016]: W1011 07:58:26.124388 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83362aa1_9b92_4fb8_8ade_5ba3476c53d0.slice/crio-188c76f8f93985e86825a8be80a2a91691cd2b3ff157b799e330d0ff7469b6cc WatchSource:0}: Error finding container 188c76f8f93985e86825a8be80a2a91691cd2b3ff157b799e330d0ff7469b6cc: Status 404 returned error can't find the container with id 188c76f8f93985e86825a8be80a2a91691cd2b3ff157b799e330d0ff7469b6cc Oct 11 07:58:26 crc kubenswrapper[5016]: I1011 07:58:26.339814 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"83362aa1-9b92-4fb8-8ade-5ba3476c53d0","Type":"ContainerStarted","Data":"c15f5d4d2a129f6535cc4eda0468dcae4f38472ec75907ab1192f6d50917b741"} Oct 11 07:58:26 crc kubenswrapper[5016]: I1011 07:58:26.340291 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Oct 11 07:58:26 crc kubenswrapper[5016]: I1011 07:58:26.340306 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"83362aa1-9b92-4fb8-8ade-5ba3476c53d0","Type":"ContainerStarted","Data":"188c76f8f93985e86825a8be80a2a91691cd2b3ff157b799e330d0ff7469b6cc"} Oct 11 07:58:26 crc kubenswrapper[5016]: I1011 07:58:26.343722 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"862c465e-619c-4fed-adf2-fe7d93b46937","Type":"ContainerStarted","Data":"4c718ea73e842115ba89de24d48806b05945b3373ed56d28edbea49bf3e068be"} Oct 11 07:58:26 crc kubenswrapper[5016]: I1011 07:58:26.343749 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"862c465e-619c-4fed-adf2-fe7d93b46937","Type":"ContainerStarted","Data":"6c294810a2daf32d4ea8d5d2c1ea85b7380614122ea19e5a6147d70b9f55aa08"} Oct 11 07:58:26 crc kubenswrapper[5016]: I1011 07:58:26.343758 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"862c465e-619c-4fed-adf2-fe7d93b46937","Type":"ContainerStarted","Data":"f7217f8fcad0f3f4a6f5b136ae736ecc5815c31784ad12a0319db1c2becf22fc"} Oct 11 07:58:26 crc kubenswrapper[5016]: I1011 07:58:26.358399 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=1.3583819670000001 podStartE2EDuration="1.358381967s" podCreationTimestamp="2025-10-11 07:58:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:58:26.356348175 +0000 UTC m=+1094.256804151" 
watchObservedRunningTime="2025-10-11 07:58:26.358381967 +0000 UTC m=+1094.258837913" Oct 11 07:58:26 crc kubenswrapper[5016]: I1011 07:58:26.379726 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.3797078369999998 podStartE2EDuration="2.379707837s" podCreationTimestamp="2025-10-11 07:58:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:58:26.375539769 +0000 UTC m=+1094.275995735" watchObservedRunningTime="2025-10-11 07:58:26.379707837 +0000 UTC m=+1094.280163793" Oct 11 07:58:27 crc kubenswrapper[5016]: I1011 07:58:27.355995 5016 generic.go:334] "Generic (PLEG): container finished" podID="d9b52b80-9f41-400b-a0fa-9f8699c1a4e9" containerID="685757c6a34cd7cd3336093fd18eb96384df826e81debc238fecd2d42f1a5fcb" exitCode=0 Oct 11 07:58:27 crc kubenswrapper[5016]: I1011 07:58:27.356116 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d9b52b80-9f41-400b-a0fa-9f8699c1a4e9","Type":"ContainerDied","Data":"685757c6a34cd7cd3336093fd18eb96384df826e81debc238fecd2d42f1a5fcb"} Oct 11 07:58:27 crc kubenswrapper[5016]: E1011 07:58:27.494429 5016 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 685757c6a34cd7cd3336093fd18eb96384df826e81debc238fecd2d42f1a5fcb is running failed: container process not found" containerID="685757c6a34cd7cd3336093fd18eb96384df826e81debc238fecd2d42f1a5fcb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 11 07:58:27 crc kubenswrapper[5016]: E1011 07:58:27.494846 5016 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 685757c6a34cd7cd3336093fd18eb96384df826e81debc238fecd2d42f1a5fcb is running failed: container process not found" containerID="685757c6a34cd7cd3336093fd18eb96384df826e81debc238fecd2d42f1a5fcb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 11 07:58:27 crc kubenswrapper[5016]: E1011 07:58:27.495191 5016 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 685757c6a34cd7cd3336093fd18eb96384df826e81debc238fecd2d42f1a5fcb is running failed: container process not found" containerID="685757c6a34cd7cd3336093fd18eb96384df826e81debc238fecd2d42f1a5fcb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 11 07:58:27 crc kubenswrapper[5016]: E1011 07:58:27.495239 5016 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 685757c6a34cd7cd3336093fd18eb96384df826e81debc238fecd2d42f1a5fcb is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d9b52b80-9f41-400b-a0fa-9f8699c1a4e9" containerName="nova-scheduler-scheduler" Oct 11 07:58:27 crc kubenswrapper[5016]: I1011 07:58:27.756396 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 11 07:58:27 crc kubenswrapper[5016]: I1011 07:58:27.854934 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9b52b80-9f41-400b-a0fa-9f8699c1a4e9-config-data\") pod \"d9b52b80-9f41-400b-a0fa-9f8699c1a4e9\" (UID: \"d9b52b80-9f41-400b-a0fa-9f8699c1a4e9\") " Oct 11 07:58:27 crc kubenswrapper[5016]: I1011 07:58:27.855482 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b52b80-9f41-400b-a0fa-9f8699c1a4e9-combined-ca-bundle\") pod \"d9b52b80-9f41-400b-a0fa-9f8699c1a4e9\" (UID: \"d9b52b80-9f41-400b-a0fa-9f8699c1a4e9\") " Oct 11 07:58:27 crc kubenswrapper[5016]: I1011 07:58:27.855533 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84lwc\" (UniqueName: \"kubernetes.io/projected/d9b52b80-9f41-400b-a0fa-9f8699c1a4e9-kube-api-access-84lwc\") pod \"d9b52b80-9f41-400b-a0fa-9f8699c1a4e9\" (UID: \"d9b52b80-9f41-400b-a0fa-9f8699c1a4e9\") " Oct 11 07:58:27 crc kubenswrapper[5016]: I1011 07:58:27.860745 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9b52b80-9f41-400b-a0fa-9f8699c1a4e9-kube-api-access-84lwc" (OuterVolumeSpecName: "kube-api-access-84lwc") pod "d9b52b80-9f41-400b-a0fa-9f8699c1a4e9" (UID: "d9b52b80-9f41-400b-a0fa-9f8699c1a4e9"). InnerVolumeSpecName "kube-api-access-84lwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:58:27 crc kubenswrapper[5016]: I1011 07:58:27.883421 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9b52b80-9f41-400b-a0fa-9f8699c1a4e9-config-data" (OuterVolumeSpecName: "config-data") pod "d9b52b80-9f41-400b-a0fa-9f8699c1a4e9" (UID: "d9b52b80-9f41-400b-a0fa-9f8699c1a4e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:27 crc kubenswrapper[5016]: I1011 07:58:27.884413 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9b52b80-9f41-400b-a0fa-9f8699c1a4e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9b52b80-9f41-400b-a0fa-9f8699c1a4e9" (UID: "d9b52b80-9f41-400b-a0fa-9f8699c1a4e9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:27 crc kubenswrapper[5016]: I1011 07:58:27.957286 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9b52b80-9f41-400b-a0fa-9f8699c1a4e9-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:27 crc kubenswrapper[5016]: I1011 07:58:27.957514 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9b52b80-9f41-400b-a0fa-9f8699c1a4e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:27 crc kubenswrapper[5016]: I1011 07:58:27.957596 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84lwc\" (UniqueName: \"kubernetes.io/projected/d9b52b80-9f41-400b-a0fa-9f8699c1a4e9-kube-api-access-84lwc\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:28 crc kubenswrapper[5016]: I1011 07:58:28.365824 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d9b52b80-9f41-400b-a0fa-9f8699c1a4e9","Type":"ContainerDied","Data":"f66ace2440adf9e0bb470314f5869003463e8530cb06c91fe206800468c11e48"} Oct 11 07:58:28 crc kubenswrapper[5016]: I1011 07:58:28.365882 5016 scope.go:117] "RemoveContainer" containerID="685757c6a34cd7cd3336093fd18eb96384df826e81debc238fecd2d42f1a5fcb" Oct 11 07:58:28 crc kubenswrapper[5016]: I1011 07:58:28.365941 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 11 07:58:28 crc kubenswrapper[5016]: I1011 07:58:28.403384 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 07:58:28 crc kubenswrapper[5016]: I1011 07:58:28.410784 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 07:58:28 crc kubenswrapper[5016]: I1011 07:58:28.421555 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 07:58:28 crc kubenswrapper[5016]: E1011 07:58:28.428169 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9b52b80-9f41-400b-a0fa-9f8699c1a4e9" containerName="nova-scheduler-scheduler" Oct 11 07:58:28 crc kubenswrapper[5016]: I1011 07:58:28.428209 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9b52b80-9f41-400b-a0fa-9f8699c1a4e9" containerName="nova-scheduler-scheduler" Oct 11 07:58:28 crc kubenswrapper[5016]: I1011 07:58:28.428405 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9b52b80-9f41-400b-a0fa-9f8699c1a4e9" containerName="nova-scheduler-scheduler" Oct 11 07:58:28 crc kubenswrapper[5016]: I1011 07:58:28.429134 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 11 07:58:28 crc kubenswrapper[5016]: I1011 07:58:28.431259 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 11 07:58:28 crc kubenswrapper[5016]: I1011 07:58:28.435822 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 07:58:28 crc kubenswrapper[5016]: I1011 07:58:28.570195 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca94bc65-ebb1-4ccb-bff6-7645d39255fd-config-data\") pod \"nova-scheduler-0\" (UID: \"ca94bc65-ebb1-4ccb-bff6-7645d39255fd\") " pod="openstack/nova-scheduler-0" Oct 11 07:58:28 crc kubenswrapper[5016]: I1011 07:58:28.570281 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54tjp\" (UniqueName: \"kubernetes.io/projected/ca94bc65-ebb1-4ccb-bff6-7645d39255fd-kube-api-access-54tjp\") pod \"nova-scheduler-0\" (UID: \"ca94bc65-ebb1-4ccb-bff6-7645d39255fd\") " pod="openstack/nova-scheduler-0" Oct 11 07:58:28 crc kubenswrapper[5016]: I1011 07:58:28.570398 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca94bc65-ebb1-4ccb-bff6-7645d39255fd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ca94bc65-ebb1-4ccb-bff6-7645d39255fd\") " pod="openstack/nova-scheduler-0" Oct 11 07:58:28 crc kubenswrapper[5016]: I1011 07:58:28.671991 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca94bc65-ebb1-4ccb-bff6-7645d39255fd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ca94bc65-ebb1-4ccb-bff6-7645d39255fd\") " pod="openstack/nova-scheduler-0" Oct 11 07:58:28 crc kubenswrapper[5016]: I1011 07:58:28.672088 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca94bc65-ebb1-4ccb-bff6-7645d39255fd-config-data\") pod \"nova-scheduler-0\" (UID: \"ca94bc65-ebb1-4ccb-bff6-7645d39255fd\") " pod="openstack/nova-scheduler-0" Oct 11 07:58:28 crc kubenswrapper[5016]: I1011 07:58:28.672130 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54tjp\" (UniqueName: \"kubernetes.io/projected/ca94bc65-ebb1-4ccb-bff6-7645d39255fd-kube-api-access-54tjp\") pod \"nova-scheduler-0\" (UID: \"ca94bc65-ebb1-4ccb-bff6-7645d39255fd\") " pod="openstack/nova-scheduler-0" Oct 11 07:58:28 crc kubenswrapper[5016]: I1011 07:58:28.675867 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca94bc65-ebb1-4ccb-bff6-7645d39255fd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ca94bc65-ebb1-4ccb-bff6-7645d39255fd\") " pod="openstack/nova-scheduler-0" Oct 11 07:58:28 crc kubenswrapper[5016]: I1011 07:58:28.676967 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca94bc65-ebb1-4ccb-bff6-7645d39255fd-config-data\") pod \"nova-scheduler-0\" (UID: \"ca94bc65-ebb1-4ccb-bff6-7645d39255fd\") " pod="openstack/nova-scheduler-0" Oct 11 07:58:28 crc kubenswrapper[5016]: I1011 07:58:28.703356 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54tjp\" (UniqueName: 
\"kubernetes.io/projected/ca94bc65-ebb1-4ccb-bff6-7645d39255fd-kube-api-access-54tjp\") pod \"nova-scheduler-0\" (UID: \"ca94bc65-ebb1-4ccb-bff6-7645d39255fd\") " pod="openstack/nova-scheduler-0" Oct 11 07:58:28 crc kubenswrapper[5016]: I1011 07:58:28.747200 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 11 07:58:29 crc kubenswrapper[5016]: I1011 07:58:29.145320 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9b52b80-9f41-400b-a0fa-9f8699c1a4e9" path="/var/lib/kubelet/pods/d9b52b80-9f41-400b-a0fa-9f8699c1a4e9/volumes" Oct 11 07:58:29 crc kubenswrapper[5016]: I1011 07:58:29.218208 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 07:58:29 crc kubenswrapper[5016]: I1011 07:58:29.380692 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ca94bc65-ebb1-4ccb-bff6-7645d39255fd","Type":"ContainerStarted","Data":"9f0b8cd9899e31a4802c9f4d91143c47fda16cbae84e782b9168eed22fb46e01"} Oct 11 07:58:29 crc kubenswrapper[5016]: I1011 07:58:29.384412 5016 generic.go:334] "Generic (PLEG): container finished" podID="f1e6cbd3-8533-4cd3-8dca-47f0d616608c" containerID="cd2283546acdefe214fd3a94da11b1bd3c65710478adcdb13e2e78b9043496a9" exitCode=0 Oct 11 07:58:29 crc kubenswrapper[5016]: I1011 07:58:29.384479 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f1e6cbd3-8533-4cd3-8dca-47f0d616608c","Type":"ContainerDied","Data":"cd2283546acdefe214fd3a94da11b1bd3c65710478adcdb13e2e78b9043496a9"} Oct 11 07:58:29 crc kubenswrapper[5016]: I1011 07:58:29.384515 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f1e6cbd3-8533-4cd3-8dca-47f0d616608c","Type":"ContainerDied","Data":"5556b983f74427dd41f2de13dc8494834a18cd1eeae3266e17efa371a0f755c5"} Oct 11 07:58:29 crc kubenswrapper[5016]: I1011 07:58:29.384533 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5556b983f74427dd41f2de13dc8494834a18cd1eeae3266e17efa371a0f755c5" Oct 11 07:58:29 crc kubenswrapper[5016]: I1011 07:58:29.398306 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 11 07:58:29 crc kubenswrapper[5016]: I1011 07:58:29.486600 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1e6cbd3-8533-4cd3-8dca-47f0d616608c-logs\") pod \"f1e6cbd3-8533-4cd3-8dca-47f0d616608c\" (UID: \"f1e6cbd3-8533-4cd3-8dca-47f0d616608c\") " Oct 11 07:58:29 crc kubenswrapper[5016]: I1011 07:58:29.486690 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh68r\" (UniqueName: \"kubernetes.io/projected/f1e6cbd3-8533-4cd3-8dca-47f0d616608c-kube-api-access-hh68r\") pod \"f1e6cbd3-8533-4cd3-8dca-47f0d616608c\" (UID: \"f1e6cbd3-8533-4cd3-8dca-47f0d616608c\") " Oct 11 07:58:29 crc kubenswrapper[5016]: I1011 07:58:29.486799 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1e6cbd3-8533-4cd3-8dca-47f0d616608c-config-data\") pod \"f1e6cbd3-8533-4cd3-8dca-47f0d616608c\" (UID: \"f1e6cbd3-8533-4cd3-8dca-47f0d616608c\") " Oct 11 07:58:29 crc kubenswrapper[5016]: I1011 07:58:29.487026 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1e6cbd3-8533-4cd3-8dca-47f0d616608c-combined-ca-bundle\") pod \"f1e6cbd3-8533-4cd3-8dca-47f0d616608c\" (UID: \"f1e6cbd3-8533-4cd3-8dca-47f0d616608c\") " Oct 11 07:58:29 crc kubenswrapper[5016]: I1011 07:58:29.487261 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1e6cbd3-8533-4cd3-8dca-47f0d616608c-logs" (OuterVolumeSpecName: "logs") pod "f1e6cbd3-8533-4cd3-8dca-47f0d616608c" (UID: "f1e6cbd3-8533-4cd3-8dca-47f0d616608c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:58:29 crc kubenswrapper[5016]: I1011 07:58:29.487622 5016 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1e6cbd3-8533-4cd3-8dca-47f0d616608c-logs\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:29 crc kubenswrapper[5016]: I1011 07:58:29.493843 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1e6cbd3-8533-4cd3-8dca-47f0d616608c-kube-api-access-hh68r" (OuterVolumeSpecName: "kube-api-access-hh68r") pod "f1e6cbd3-8533-4cd3-8dca-47f0d616608c" (UID: "f1e6cbd3-8533-4cd3-8dca-47f0d616608c"). InnerVolumeSpecName "kube-api-access-hh68r". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:58:29 crc kubenswrapper[5016]: I1011 07:58:29.523570 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1e6cbd3-8533-4cd3-8dca-47f0d616608c-config-data" (OuterVolumeSpecName: "config-data") pod "f1e6cbd3-8533-4cd3-8dca-47f0d616608c" (UID: "f1e6cbd3-8533-4cd3-8dca-47f0d616608c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:29 crc kubenswrapper[5016]: I1011 07:58:29.525945 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1e6cbd3-8533-4cd3-8dca-47f0d616608c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1e6cbd3-8533-4cd3-8dca-47f0d616608c" (UID: "f1e6cbd3-8533-4cd3-8dca-47f0d616608c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:29 crc kubenswrapper[5016]: I1011 07:58:29.589044 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh68r\" (UniqueName: \"kubernetes.io/projected/f1e6cbd3-8533-4cd3-8dca-47f0d616608c-kube-api-access-hh68r\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:29 crc kubenswrapper[5016]: I1011 07:58:29.589090 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1e6cbd3-8533-4cd3-8dca-47f0d616608c-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:29 crc kubenswrapper[5016]: I1011 07:58:29.589103 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1e6cbd3-8533-4cd3-8dca-47f0d616608c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:30 crc kubenswrapper[5016]: I1011 07:58:30.143511 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 11 07:58:30 crc kubenswrapper[5016]: I1011 07:58:30.143593 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 11 07:58:30 crc kubenswrapper[5016]: I1011 07:58:30.395041 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 11 07:58:30 crc kubenswrapper[5016]: I1011 07:58:30.395013 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ca94bc65-ebb1-4ccb-bff6-7645d39255fd","Type":"ContainerStarted","Data":"2b09fdd15dcda80291be0051053bb2ae668f0bfbf4d5d57e914de8ea8b4b8654"} Oct 11 07:58:30 crc kubenswrapper[5016]: I1011 07:58:30.421908 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.421892451 podStartE2EDuration="2.421892451s" podCreationTimestamp="2025-10-11 07:58:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:58:30.412438378 +0000 UTC m=+1098.312894324" watchObservedRunningTime="2025-10-11 07:58:30.421892451 +0000 UTC m=+1098.322348397" Oct 11 07:58:30 crc kubenswrapper[5016]: I1011 07:58:30.434685 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 11 07:58:30 crc kubenswrapper[5016]: I1011 07:58:30.454721 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 11 07:58:30 crc kubenswrapper[5016]: I1011 07:58:30.468704 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 11 07:58:30 crc kubenswrapper[5016]: E1011 07:58:30.469102 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1e6cbd3-8533-4cd3-8dca-47f0d616608c" containerName="nova-api-api" Oct 11 07:58:30 crc kubenswrapper[5016]: I1011 07:58:30.469127 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1e6cbd3-8533-4cd3-8dca-47f0d616608c" containerName="nova-api-api" Oct 11 07:58:30 crc kubenswrapper[5016]: E1011 07:58:30.469144 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1e6cbd3-8533-4cd3-8dca-47f0d616608c" containerName="nova-api-log" Oct 11 07:58:30 crc kubenswrapper[5016]: I1011 07:58:30.469153 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1e6cbd3-8533-4cd3-8dca-47f0d616608c" containerName="nova-api-log" Oct 11 07:58:30 crc kubenswrapper[5016]: I1011 07:58:30.469423 5016 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="f1e6cbd3-8533-4cd3-8dca-47f0d616608c" containerName="nova-api-log" Oct 11 07:58:30 crc kubenswrapper[5016]: I1011 07:58:30.469449 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1e6cbd3-8533-4cd3-8dca-47f0d616608c" containerName="nova-api-api" Oct 11 07:58:30 crc kubenswrapper[5016]: I1011 07:58:30.470539 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 11 07:58:30 crc kubenswrapper[5016]: I1011 07:58:30.472916 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 11 07:58:30 crc kubenswrapper[5016]: I1011 07:58:30.478645 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 11 07:58:30 crc kubenswrapper[5016]: I1011 07:58:30.607216 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5907d55e-d11c-4aae-be25-381ad731178b-logs\") pod \"nova-api-0\" (UID: \"5907d55e-d11c-4aae-be25-381ad731178b\") " pod="openstack/nova-api-0" Oct 11 07:58:30 crc kubenswrapper[5016]: I1011 07:58:30.607338 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kkv6\" (UniqueName: \"kubernetes.io/projected/5907d55e-d11c-4aae-be25-381ad731178b-kube-api-access-5kkv6\") pod \"nova-api-0\" (UID: \"5907d55e-d11c-4aae-be25-381ad731178b\") " pod="openstack/nova-api-0" Oct 11 07:58:30 crc kubenswrapper[5016]: I1011 07:58:30.607377 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5907d55e-d11c-4aae-be25-381ad731178b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5907d55e-d11c-4aae-be25-381ad731178b\") " pod="openstack/nova-api-0" Oct 11 07:58:30 crc kubenswrapper[5016]: I1011 07:58:30.607443 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5907d55e-d11c-4aae-be25-381ad731178b-config-data\") pod \"nova-api-0\" (UID: \"5907d55e-d11c-4aae-be25-381ad731178b\") " pod="openstack/nova-api-0" Oct 11 07:58:30 crc kubenswrapper[5016]: I1011 07:58:30.709230 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5907d55e-d11c-4aae-be25-381ad731178b-logs\") pod \"nova-api-0\" (UID: \"5907d55e-d11c-4aae-be25-381ad731178b\") " pod="openstack/nova-api-0" Oct 11 07:58:30 crc kubenswrapper[5016]: I1011 07:58:30.709285 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kkv6\" (UniqueName: \"kubernetes.io/projected/5907d55e-d11c-4aae-be25-381ad731178b-kube-api-access-5kkv6\") pod \"nova-api-0\" (UID: \"5907d55e-d11c-4aae-be25-381ad731178b\") " pod="openstack/nova-api-0" Oct 11 07:58:30 crc kubenswrapper[5016]: I1011 07:58:30.709307 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5907d55e-d11c-4aae-be25-381ad731178b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5907d55e-d11c-4aae-be25-381ad731178b\") " pod="openstack/nova-api-0" Oct 11 07:58:30 crc kubenswrapper[5016]: I1011 07:58:30.709336 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5907d55e-d11c-4aae-be25-381ad731178b-config-data\") 
pod \"nova-api-0\" (UID: \"5907d55e-d11c-4aae-be25-381ad731178b\") " pod="openstack/nova-api-0" Oct 11 07:58:30 crc kubenswrapper[5016]: I1011 07:58:30.709610 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5907d55e-d11c-4aae-be25-381ad731178b-logs\") pod \"nova-api-0\" (UID: \"5907d55e-d11c-4aae-be25-381ad731178b\") " pod="openstack/nova-api-0" Oct 11 07:58:30 crc kubenswrapper[5016]: I1011 07:58:30.714916 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5907d55e-d11c-4aae-be25-381ad731178b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5907d55e-d11c-4aae-be25-381ad731178b\") " pod="openstack/nova-api-0" Oct 11 07:58:30 crc kubenswrapper[5016]: I1011 07:58:30.716394 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5907d55e-d11c-4aae-be25-381ad731178b-config-data\") pod \"nova-api-0\" (UID: \"5907d55e-d11c-4aae-be25-381ad731178b\") " pod="openstack/nova-api-0" Oct 11 07:58:30 crc kubenswrapper[5016]: I1011 07:58:30.724107 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kkv6\" (UniqueName: \"kubernetes.io/projected/5907d55e-d11c-4aae-be25-381ad731178b-kube-api-access-5kkv6\") pod \"nova-api-0\" (UID: \"5907d55e-d11c-4aae-be25-381ad731178b\") " pod="openstack/nova-api-0" Oct 11 07:58:30 crc kubenswrapper[5016]: I1011 07:58:30.784587 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 11 07:58:31 crc kubenswrapper[5016]: I1011 07:58:31.155420 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1e6cbd3-8533-4cd3-8dca-47f0d616608c" path="/var/lib/kubelet/pods/f1e6cbd3-8533-4cd3-8dca-47f0d616608c/volumes" Oct 11 07:58:31 crc kubenswrapper[5016]: I1011 07:58:31.235467 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 11 07:58:31 crc kubenswrapper[5016]: I1011 07:58:31.346829 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 11 07:58:31 crc kubenswrapper[5016]: I1011 07:58:31.410882 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5907d55e-d11c-4aae-be25-381ad731178b","Type":"ContainerStarted","Data":"6ea803006811ab411d7a75019f3f554da469f6e1ceba67e377f531b0806dca0f"} Oct 11 07:58:32 crc kubenswrapper[5016]: I1011 07:58:32.422066 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5907d55e-d11c-4aae-be25-381ad731178b","Type":"ContainerStarted","Data":"681b4c747003a8c915c6c4579cabbca0d5ae274d6de8ca56626ecc33ddcb9a14"} Oct 11 07:58:32 crc kubenswrapper[5016]: I1011 07:58:32.422450 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5907d55e-d11c-4aae-be25-381ad731178b","Type":"ContainerStarted","Data":"0792b74fdd86f7aca8328e33948a8b9af2e0531c7a9b03fb4af807930c6c7c3a"} Oct 11 07:58:32 crc kubenswrapper[5016]: I1011 07:58:32.451337 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.451321208 podStartE2EDuration="2.451321208s" podCreationTimestamp="2025-10-11 07:58:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:58:32.446390281 +0000 UTC m=+1100.346846277" 
watchObservedRunningTime="2025-10-11 07:58:32.451321208 +0000 UTC m=+1100.351777144" Oct 11 07:58:33 crc kubenswrapper[5016]: I1011 07:58:33.747783 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 11 07:58:35 crc kubenswrapper[5016]: I1011 07:58:35.147569 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 11 07:58:35 crc kubenswrapper[5016]: I1011 07:58:35.148016 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 11 07:58:35 crc kubenswrapper[5016]: I1011 07:58:35.718484 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Oct 11 07:58:36 crc kubenswrapper[5016]: I1011 07:58:36.159795 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="862c465e-619c-4fed-adf2-fe7d93b46937" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.182:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 07:58:36 crc kubenswrapper[5016]: I1011 07:58:36.159833 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="862c465e-619c-4fed-adf2-fe7d93b46937" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.182:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 07:58:37 crc kubenswrapper[5016]: I1011 07:58:37.122537 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 07:58:37 crc kubenswrapper[5016]: I1011 07:58:37.122622 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 07:58:37 crc kubenswrapper[5016]: I1011 07:58:37.122705 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 07:58:37 crc kubenswrapper[5016]: I1011 07:58:37.123602 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e0beaf8f3888f3224e77b273d2e7d0fa1af0b12ba8a490fbd46da42f1ed82abe"} pod="openshift-machine-config-operator/machine-config-daemon-49bvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 11 07:58:37 crc kubenswrapper[5016]: I1011 07:58:37.123733 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" containerID="cri-o://e0beaf8f3888f3224e77b273d2e7d0fa1af0b12ba8a490fbd46da42f1ed82abe" gracePeriod=600 Oct 11 07:58:37 crc kubenswrapper[5016]: I1011 07:58:37.470136 5016 generic.go:334] "Generic (PLEG): container finished" podID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerID="e0beaf8f3888f3224e77b273d2e7d0fa1af0b12ba8a490fbd46da42f1ed82abe" 
Oct 11 07:58:37 crc kubenswrapper[5016]: I1011 07:58:37.470344 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerDied","Data":"e0beaf8f3888f3224e77b273d2e7d0fa1af0b12ba8a490fbd46da42f1ed82abe"} Oct 11 07:58:37 crc kubenswrapper[5016]: I1011 07:58:37.470544 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"6e27921f9485ad7dd5682c9472508ee14b957ef603b8b328e13450c313534ce6"} Oct 11 07:58:37 crc kubenswrapper[5016]: I1011 07:58:37.470590 5016 scope.go:117] "RemoveContainer" containerID="265caf0315ed7d9cc490abb97692bb40c37bc7e9af0dd0d10a990157231f7f84" Oct 11 07:58:38 crc kubenswrapper[5016]: I1011 07:58:38.748429 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 11 07:58:38 crc kubenswrapper[5016]: I1011 07:58:38.792211 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 11 07:58:39 crc kubenswrapper[5016]: I1011 07:58:39.526979 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 11 07:58:40 crc kubenswrapper[5016]: I1011 07:58:40.785469 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 11 07:58:40 crc kubenswrapper[5016]: I1011 07:58:40.786890 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 11 07:58:41 crc kubenswrapper[5016]: I1011 07:58:41.867994 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5907d55e-d11c-4aae-be25-381ad731178b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 07:58:41 crc kubenswrapper[5016]: I1011 07:58:41.867992 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5907d55e-d11c-4aae-be25-381ad731178b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 07:58:45 crc kubenswrapper[5016]: I1011 07:58:45.152518 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 11 07:58:45 crc kubenswrapper[5016]: I1011 07:58:45.153102 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 11 07:58:45 crc kubenswrapper[5016]: I1011 07:58:45.160584 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 11 07:58:45 crc kubenswrapper[5016]: I1011 07:58:45.165308 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 11 07:58:46 crc kubenswrapper[5016]: I1011 07:58:46.561003 5016 generic.go:334] "Generic (PLEG): container finished" podID="a164b940-476a-412d-aca9-4bf6b718d6c8" containerID="c00af0dadf80771a39f4f37ea4041dc4bdb8131ca6f8cd7e7c56134e849167c8" exitCode=137 Oct 11 07:58:46 crc kubenswrapper[5016]: I1011 07:58:46.561096 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"a164b940-476a-412d-aca9-4bf6b718d6c8","Type":"ContainerDied","Data":"c00af0dadf80771a39f4f37ea4041dc4bdb8131ca6f8cd7e7c56134e849167c8"} Oct 11 07:58:46 crc kubenswrapper[5016]: I1011 07:58:46.561413 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a164b940-476a-412d-aca9-4bf6b718d6c8","Type":"ContainerDied","Data":"938510cf3eae542aa8eb42f0e612bfa70bd8882c4b382931ea5b6d5e124919ee"} Oct 11 07:58:46 crc kubenswrapper[5016]: I1011 07:58:46.561432 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="938510cf3eae542aa8eb42f0e612bfa70bd8882c4b382931ea5b6d5e124919ee" Oct 11 07:58:46 crc kubenswrapper[5016]: I1011 07:58:46.603494 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:46 crc kubenswrapper[5016]: I1011 07:58:46.719464 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tq7mt\" (UniqueName: \"kubernetes.io/projected/a164b940-476a-412d-aca9-4bf6b718d6c8-kube-api-access-tq7mt\") pod \"a164b940-476a-412d-aca9-4bf6b718d6c8\" (UID: \"a164b940-476a-412d-aca9-4bf6b718d6c8\") " Oct 11 07:58:46 crc kubenswrapper[5016]: I1011 07:58:46.719932 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a164b940-476a-412d-aca9-4bf6b718d6c8-combined-ca-bundle\") pod \"a164b940-476a-412d-aca9-4bf6b718d6c8\" (UID: \"a164b940-476a-412d-aca9-4bf6b718d6c8\") " Oct 11 07:58:46 crc kubenswrapper[5016]: I1011 07:58:46.720101 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a164b940-476a-412d-aca9-4bf6b718d6c8-config-data\") pod \"a164b940-476a-412d-aca9-4bf6b718d6c8\" (UID: \"a164b940-476a-412d-aca9-4bf6b718d6c8\") " Oct 11 07:58:46 crc kubenswrapper[5016]: I1011 07:58:46.725575 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a164b940-476a-412d-aca9-4bf6b718d6c8-kube-api-access-tq7mt" (OuterVolumeSpecName: "kube-api-access-tq7mt") pod "a164b940-476a-412d-aca9-4bf6b718d6c8" (UID: "a164b940-476a-412d-aca9-4bf6b718d6c8"). InnerVolumeSpecName "kube-api-access-tq7mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:58:46 crc kubenswrapper[5016]: I1011 07:58:46.746891 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a164b940-476a-412d-aca9-4bf6b718d6c8-config-data" (OuterVolumeSpecName: "config-data") pod "a164b940-476a-412d-aca9-4bf6b718d6c8" (UID: "a164b940-476a-412d-aca9-4bf6b718d6c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:46 crc kubenswrapper[5016]: I1011 07:58:46.748070 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a164b940-476a-412d-aca9-4bf6b718d6c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a164b940-476a-412d-aca9-4bf6b718d6c8" (UID: "a164b940-476a-412d-aca9-4bf6b718d6c8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:46 crc kubenswrapper[5016]: I1011 07:58:46.821768 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tq7mt\" (UniqueName: \"kubernetes.io/projected/a164b940-476a-412d-aca9-4bf6b718d6c8-kube-api-access-tq7mt\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:46 crc kubenswrapper[5016]: I1011 07:58:46.821899 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a164b940-476a-412d-aca9-4bf6b718d6c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:46 crc kubenswrapper[5016]: I1011 07:58:46.821956 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a164b940-476a-412d-aca9-4bf6b718d6c8-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:47 crc kubenswrapper[5016]: I1011 07:58:47.571400 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:47 crc kubenswrapper[5016]: I1011 07:58:47.598037 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 11 07:58:47 crc kubenswrapper[5016]: I1011 07:58:47.609154 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 11 07:58:47 crc kubenswrapper[5016]: I1011 07:58:47.619362 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 11 07:58:47 crc kubenswrapper[5016]: E1011 07:58:47.619924 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a164b940-476a-412d-aca9-4bf6b718d6c8" containerName="nova-cell1-novncproxy-novncproxy" Oct 11 07:58:47 crc kubenswrapper[5016]: I1011 07:58:47.619943 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="a164b940-476a-412d-aca9-4bf6b718d6c8" containerName="nova-cell1-novncproxy-novncproxy" Oct 11 07:58:47 crc kubenswrapper[5016]: I1011 07:58:47.620162 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="a164b940-476a-412d-aca9-4bf6b718d6c8" containerName="nova-cell1-novncproxy-novncproxy" Oct 11 07:58:47 crc kubenswrapper[5016]: I1011 07:58:47.620984 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:47 crc kubenswrapper[5016]: I1011 07:58:47.629946 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Oct 11 07:58:47 crc kubenswrapper[5016]: I1011 07:58:47.630112 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Oct 11 07:58:47 crc kubenswrapper[5016]: I1011 07:58:47.630292 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 11 07:58:47 crc kubenswrapper[5016]: I1011 07:58:47.645512 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 11 07:58:47 crc kubenswrapper[5016]: I1011 07:58:47.750711 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-676df\" (UniqueName: \"kubernetes.io/projected/3d3b41fc-1445-4c10-b8fb-62007ac44a8d-kube-api-access-676df\") pod \"nova-cell1-novncproxy-0\" (UID: \"3d3b41fc-1445-4c10-b8fb-62007ac44a8d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:47 crc kubenswrapper[5016]: I1011 07:58:47.751256 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d3b41fc-1445-4c10-b8fb-62007ac44a8d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3d3b41fc-1445-4c10-b8fb-62007ac44a8d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:47 crc kubenswrapper[5016]: I1011 07:58:47.751398 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3b41fc-1445-4c10-b8fb-62007ac44a8d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3d3b41fc-1445-4c10-b8fb-62007ac44a8d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:47 crc kubenswrapper[5016]: I1011 07:58:47.751567 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3b41fc-1445-4c10-b8fb-62007ac44a8d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3d3b41fc-1445-4c10-b8fb-62007ac44a8d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:47 crc kubenswrapper[5016]: I1011 07:58:47.751922 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d3b41fc-1445-4c10-b8fb-62007ac44a8d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3d3b41fc-1445-4c10-b8fb-62007ac44a8d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:47 crc kubenswrapper[5016]: I1011 07:58:47.853710 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3b41fc-1445-4c10-b8fb-62007ac44a8d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3d3b41fc-1445-4c10-b8fb-62007ac44a8d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:47 crc kubenswrapper[5016]: I1011 07:58:47.853902 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d3b41fc-1445-4c10-b8fb-62007ac44a8d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3d3b41fc-1445-4c10-b8fb-62007ac44a8d\") " 
pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:47 crc kubenswrapper[5016]: I1011 07:58:47.855092 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-676df\" (UniqueName: \"kubernetes.io/projected/3d3b41fc-1445-4c10-b8fb-62007ac44a8d-kube-api-access-676df\") pod \"nova-cell1-novncproxy-0\" (UID: \"3d3b41fc-1445-4c10-b8fb-62007ac44a8d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:47 crc kubenswrapper[5016]: I1011 07:58:47.855416 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d3b41fc-1445-4c10-b8fb-62007ac44a8d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3d3b41fc-1445-4c10-b8fb-62007ac44a8d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:47 crc kubenswrapper[5016]: I1011 07:58:47.855487 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3b41fc-1445-4c10-b8fb-62007ac44a8d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3d3b41fc-1445-4c10-b8fb-62007ac44a8d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:47 crc kubenswrapper[5016]: I1011 07:58:47.859345 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3b41fc-1445-4c10-b8fb-62007ac44a8d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3d3b41fc-1445-4c10-b8fb-62007ac44a8d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:47 crc kubenswrapper[5016]: I1011 07:58:47.866897 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d3b41fc-1445-4c10-b8fb-62007ac44a8d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3d3b41fc-1445-4c10-b8fb-62007ac44a8d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:47 crc kubenswrapper[5016]: I1011 07:58:47.874733 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3b41fc-1445-4c10-b8fb-62007ac44a8d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3d3b41fc-1445-4c10-b8fb-62007ac44a8d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:47 crc kubenswrapper[5016]: I1011 07:58:47.874799 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d3b41fc-1445-4c10-b8fb-62007ac44a8d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3d3b41fc-1445-4c10-b8fb-62007ac44a8d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:47 crc kubenswrapper[5016]: I1011 07:58:47.880032 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-676df\" (UniqueName: \"kubernetes.io/projected/3d3b41fc-1445-4c10-b8fb-62007ac44a8d-kube-api-access-676df\") pod \"nova-cell1-novncproxy-0\" (UID: \"3d3b41fc-1445-4c10-b8fb-62007ac44a8d\") " pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:47 crc kubenswrapper[5016]: I1011 07:58:47.948860 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:48 crc kubenswrapper[5016]: I1011 07:58:48.412364 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 11 07:58:48 crc kubenswrapper[5016]: W1011 07:58:48.413925 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d3b41fc_1445_4c10_b8fb_62007ac44a8d.slice/crio-05f0ac8da6a604e1ff69deb4960ad4e8fdbb552d49e8736fb903583a8f2fcb49 WatchSource:0}: Error finding container 05f0ac8da6a604e1ff69deb4960ad4e8fdbb552d49e8736fb903583a8f2fcb49: Status 404 returned error can't find the container with id 05f0ac8da6a604e1ff69deb4960ad4e8fdbb552d49e8736fb903583a8f2fcb49 Oct 11 07:58:48 crc kubenswrapper[5016]: I1011 07:58:48.580776 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3d3b41fc-1445-4c10-b8fb-62007ac44a8d","Type":"ContainerStarted","Data":"05f0ac8da6a604e1ff69deb4960ad4e8fdbb552d49e8736fb903583a8f2fcb49"} Oct 11 07:58:49 crc kubenswrapper[5016]: I1011 07:58:49.153856 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a164b940-476a-412d-aca9-4bf6b718d6c8" path="/var/lib/kubelet/pods/a164b940-476a-412d-aca9-4bf6b718d6c8/volumes" Oct 11 07:58:49 crc kubenswrapper[5016]: I1011 07:58:49.591290 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3d3b41fc-1445-4c10-b8fb-62007ac44a8d","Type":"ContainerStarted","Data":"e129303a6c3a26e566873ab594764ab2be23e17b9ac20ab1dc5daef098b36063"} Oct 11 07:58:49 crc kubenswrapper[5016]: I1011 07:58:49.623400 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.623368336 podStartE2EDuration="2.623368336s" podCreationTimestamp="2025-10-11 07:58:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:58:49.605310855 +0000 UTC m=+1117.505766831" watchObservedRunningTime="2025-10-11 07:58:49.623368336 +0000 UTC m=+1117.523824312" Oct 11 07:58:50 crc kubenswrapper[5016]: I1011 07:58:50.789242 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 11 07:58:50 crc kubenswrapper[5016]: I1011 07:58:50.790231 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 11 07:58:50 crc kubenswrapper[5016]: I1011 07:58:50.793308 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 11 07:58:50 crc kubenswrapper[5016]: I1011 07:58:50.795110 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 11 07:58:51 crc kubenswrapper[5016]: I1011 07:58:51.612066 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 11 07:58:51 crc kubenswrapper[5016]: I1011 07:58:51.616902 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 11 07:58:51 crc kubenswrapper[5016]: I1011 07:58:51.802246 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-869677f947-82f6z"] Oct 11 07:58:51 crc kubenswrapper[5016]: I1011 07:58:51.806726 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-869677f947-82f6z" Oct 11 07:58:51 crc kubenswrapper[5016]: I1011 07:58:51.811958 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869677f947-82f6z"] Oct 11 07:58:51 crc kubenswrapper[5016]: I1011 07:58:51.937856 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18843252-f80a-450d-905c-f07e2bddddb0-dns-svc\") pod \"dnsmasq-dns-869677f947-82f6z\" (UID: \"18843252-f80a-450d-905c-f07e2bddddb0\") " pod="openstack/dnsmasq-dns-869677f947-82f6z" Oct 11 07:58:51 crc kubenswrapper[5016]: I1011 07:58:51.937954 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18843252-f80a-450d-905c-f07e2bddddb0-ovsdbserver-nb\") pod \"dnsmasq-dns-869677f947-82f6z\" (UID: \"18843252-f80a-450d-905c-f07e2bddddb0\") " pod="openstack/dnsmasq-dns-869677f947-82f6z" Oct 11 07:58:51 crc kubenswrapper[5016]: I1011 07:58:51.938084 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18843252-f80a-450d-905c-f07e2bddddb0-config\") pod \"dnsmasq-dns-869677f947-82f6z\" (UID: \"18843252-f80a-450d-905c-f07e2bddddb0\") " pod="openstack/dnsmasq-dns-869677f947-82f6z" Oct 11 07:58:51 crc kubenswrapper[5016]: I1011 07:58:51.938125 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k28h8\" (UniqueName: \"kubernetes.io/projected/18843252-f80a-450d-905c-f07e2bddddb0-kube-api-access-k28h8\") pod \"dnsmasq-dns-869677f947-82f6z\" (UID: \"18843252-f80a-450d-905c-f07e2bddddb0\") " pod="openstack/dnsmasq-dns-869677f947-82f6z" Oct 11 07:58:51 crc kubenswrapper[5016]: I1011 07:58:51.938156 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18843252-f80a-450d-905c-f07e2bddddb0-ovsdbserver-sb\") pod \"dnsmasq-dns-869677f947-82f6z\" (UID: \"18843252-f80a-450d-905c-f07e2bddddb0\") " pod="openstack/dnsmasq-dns-869677f947-82f6z" Oct 11 07:58:52 crc kubenswrapper[5016]: I1011 07:58:52.040954 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18843252-f80a-450d-905c-f07e2bddddb0-config\") pod \"dnsmasq-dns-869677f947-82f6z\" (UID: \"18843252-f80a-450d-905c-f07e2bddddb0\") " pod="openstack/dnsmasq-dns-869677f947-82f6z" Oct 11 07:58:52 crc kubenswrapper[5016]: I1011 07:58:52.041023 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k28h8\" (UniqueName: \"kubernetes.io/projected/18843252-f80a-450d-905c-f07e2bddddb0-kube-api-access-k28h8\") pod \"dnsmasq-dns-869677f947-82f6z\" (UID: \"18843252-f80a-450d-905c-f07e2bddddb0\") " pod="openstack/dnsmasq-dns-869677f947-82f6z" Oct 11 07:58:52 crc kubenswrapper[5016]: I1011 07:58:52.041048 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18843252-f80a-450d-905c-f07e2bddddb0-ovsdbserver-sb\") pod \"dnsmasq-dns-869677f947-82f6z\" (UID: \"18843252-f80a-450d-905c-f07e2bddddb0\") " pod="openstack/dnsmasq-dns-869677f947-82f6z" Oct 11 07:58:52 crc kubenswrapper[5016]: I1011 07:58:52.041114 5016 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18843252-f80a-450d-905c-f07e2bddddb0-dns-svc\") pod \"dnsmasq-dns-869677f947-82f6z\" (UID: \"18843252-f80a-450d-905c-f07e2bddddb0\") " pod="openstack/dnsmasq-dns-869677f947-82f6z" Oct 11 07:58:52 crc kubenswrapper[5016]: I1011 07:58:52.041174 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18843252-f80a-450d-905c-f07e2bddddb0-ovsdbserver-nb\") pod \"dnsmasq-dns-869677f947-82f6z\" (UID: \"18843252-f80a-450d-905c-f07e2bddddb0\") " pod="openstack/dnsmasq-dns-869677f947-82f6z" Oct 11 07:58:52 crc kubenswrapper[5016]: I1011 07:58:52.042310 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18843252-f80a-450d-905c-f07e2bddddb0-ovsdbserver-sb\") pod \"dnsmasq-dns-869677f947-82f6z\" (UID: \"18843252-f80a-450d-905c-f07e2bddddb0\") " pod="openstack/dnsmasq-dns-869677f947-82f6z" Oct 11 07:58:52 crc kubenswrapper[5016]: I1011 07:58:52.042854 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18843252-f80a-450d-905c-f07e2bddddb0-config\") pod \"dnsmasq-dns-869677f947-82f6z\" (UID: \"18843252-f80a-450d-905c-f07e2bddddb0\") " pod="openstack/dnsmasq-dns-869677f947-82f6z" Oct 11 07:58:52 crc kubenswrapper[5016]: I1011 07:58:52.042895 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18843252-f80a-450d-905c-f07e2bddddb0-ovsdbserver-nb\") pod \"dnsmasq-dns-869677f947-82f6z\" (UID: \"18843252-f80a-450d-905c-f07e2bddddb0\") " pod="openstack/dnsmasq-dns-869677f947-82f6z" Oct 11 07:58:52 crc kubenswrapper[5016]: I1011 07:58:52.046643 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18843252-f80a-450d-905c-f07e2bddddb0-dns-svc\") pod \"dnsmasq-dns-869677f947-82f6z\" (UID: \"18843252-f80a-450d-905c-f07e2bddddb0\") " pod="openstack/dnsmasq-dns-869677f947-82f6z" Oct 11 07:58:52 crc kubenswrapper[5016]: I1011 07:58:52.080617 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k28h8\" (UniqueName: \"kubernetes.io/projected/18843252-f80a-450d-905c-f07e2bddddb0-kube-api-access-k28h8\") pod \"dnsmasq-dns-869677f947-82f6z\" (UID: \"18843252-f80a-450d-905c-f07e2bddddb0\") " pod="openstack/dnsmasq-dns-869677f947-82f6z" Oct 11 07:58:52 crc kubenswrapper[5016]: I1011 07:58:52.131731 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-869677f947-82f6z" Oct 11 07:58:52 crc kubenswrapper[5016]: I1011 07:58:52.669917 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869677f947-82f6z"] Oct 11 07:58:52 crc kubenswrapper[5016]: I1011 07:58:52.949827 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:53 crc kubenswrapper[5016]: I1011 07:58:53.648253 5016 generic.go:334] "Generic (PLEG): container finished" podID="18843252-f80a-450d-905c-f07e2bddddb0" containerID="a1f1439715e312d96b0cb9f472c53244e0d0f462b2b973300031d9c3fe171640" exitCode=0 Oct 11 07:58:53 crc kubenswrapper[5016]: I1011 07:58:53.648476 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869677f947-82f6z" event={"ID":"18843252-f80a-450d-905c-f07e2bddddb0","Type":"ContainerDied","Data":"a1f1439715e312d96b0cb9f472c53244e0d0f462b2b973300031d9c3fe171640"} Oct 11 07:58:53 crc kubenswrapper[5016]: I1011 07:58:53.648513 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869677f947-82f6z" event={"ID":"18843252-f80a-450d-905c-f07e2bddddb0","Type":"ContainerStarted","Data":"4f55afe6003b9fa0dd1f7e58aada853c9bea811365bdbc41ff9b42e37dbf6557"} Oct 11 07:58:53 crc kubenswrapper[5016]: I1011 07:58:53.890491 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 11 07:58:53 crc kubenswrapper[5016]: I1011 07:58:53.891215 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7b65d12-aa4e-439c-8c40-af327ebe8c88" containerName="proxy-httpd" containerID="cri-o://d903b91eb5574e286cc335c6052d30e5b03a6bd2c678c0775d5e4b3f0fb31ec0" gracePeriod=30 Oct 11 07:58:53 crc kubenswrapper[5016]: I1011 07:58:53.891337 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7b65d12-aa4e-439c-8c40-af327ebe8c88" containerName="sg-core" containerID="cri-o://0e7c731f38fa174e86ba1610cae56ba301bca4f65b5d16008ee3cbcdcaee4544" gracePeriod=30 Oct 11 07:58:53 crc kubenswrapper[5016]: I1011 07:58:53.891534 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7b65d12-aa4e-439c-8c40-af327ebe8c88" containerName="ceilometer-notification-agent" containerID="cri-o://17f9c9a3fe9f8aa9babcdeba9bd87e7da27c3f298176db8dccf7691277d213e0" gracePeriod=30 Oct 11 07:58:53 crc kubenswrapper[5016]: I1011 07:58:53.892366 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c7b65d12-aa4e-439c-8c40-af327ebe8c88" containerName="ceilometer-central-agent" containerID="cri-o://47a583747ef098ea4ce6ca00d3df90db1342290b40a692c439485d2d910431d9" gracePeriod=30 Oct 11 07:58:54 crc kubenswrapper[5016]: I1011 07:58:54.081704 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 11 07:58:54 crc kubenswrapper[5016]: I1011 07:58:54.656522 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869677f947-82f6z" event={"ID":"18843252-f80a-450d-905c-f07e2bddddb0","Type":"ContainerStarted","Data":"8d71fc3b80b4d818a14c1c21d00213bd170548b8b6b62a92c85a27a70859e79e"} Oct 11 07:58:54 crc kubenswrapper[5016]: I1011 07:58:54.657691 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-869677f947-82f6z" Oct 11 07:58:54 crc kubenswrapper[5016]: I1011 07:58:54.660584 5016 
generic.go:334] "Generic (PLEG): container finished" podID="c7b65d12-aa4e-439c-8c40-af327ebe8c88" containerID="d903b91eb5574e286cc335c6052d30e5b03a6bd2c678c0775d5e4b3f0fb31ec0" exitCode=0 Oct 11 07:58:54 crc kubenswrapper[5016]: I1011 07:58:54.660622 5016 generic.go:334] "Generic (PLEG): container finished" podID="c7b65d12-aa4e-439c-8c40-af327ebe8c88" containerID="0e7c731f38fa174e86ba1610cae56ba301bca4f65b5d16008ee3cbcdcaee4544" exitCode=2 Oct 11 07:58:54 crc kubenswrapper[5016]: I1011 07:58:54.660642 5016 generic.go:334] "Generic (PLEG): container finished" podID="c7b65d12-aa4e-439c-8c40-af327ebe8c88" containerID="47a583747ef098ea4ce6ca00d3df90db1342290b40a692c439485d2d910431d9" exitCode=0 Oct 11 07:58:54 crc kubenswrapper[5016]: I1011 07:58:54.660644 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7b65d12-aa4e-439c-8c40-af327ebe8c88","Type":"ContainerDied","Data":"d903b91eb5574e286cc335c6052d30e5b03a6bd2c678c0775d5e4b3f0fb31ec0"} Oct 11 07:58:54 crc kubenswrapper[5016]: I1011 07:58:54.660737 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7b65d12-aa4e-439c-8c40-af327ebe8c88","Type":"ContainerDied","Data":"0e7c731f38fa174e86ba1610cae56ba301bca4f65b5d16008ee3cbcdcaee4544"} Oct 11 07:58:54 crc kubenswrapper[5016]: I1011 07:58:54.660783 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7b65d12-aa4e-439c-8c40-af327ebe8c88","Type":"ContainerDied","Data":"47a583747ef098ea4ce6ca00d3df90db1342290b40a692c439485d2d910431d9"} Oct 11 07:58:54 crc kubenswrapper[5016]: I1011 07:58:54.660924 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5907d55e-d11c-4aae-be25-381ad731178b" containerName="nova-api-log" containerID="cri-o://0792b74fdd86f7aca8328e33948a8b9af2e0531c7a9b03fb4af807930c6c7c3a" gracePeriod=30 Oct 11 07:58:54 crc kubenswrapper[5016]: I1011 07:58:54.660954 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5907d55e-d11c-4aae-be25-381ad731178b" containerName="nova-api-api" containerID="cri-o://681b4c747003a8c915c6c4579cabbca0d5ae274d6de8ca56626ecc33ddcb9a14" gracePeriod=30 Oct 11 07:58:54 crc kubenswrapper[5016]: I1011 07:58:54.690190 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-869677f947-82f6z" podStartSLOduration=3.690172386 podStartE2EDuration="3.690172386s" podCreationTimestamp="2025-10-11 07:58:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:58:54.684426776 +0000 UTC m=+1122.584882732" watchObservedRunningTime="2025-10-11 07:58:54.690172386 +0000 UTC m=+1122.590628332" Oct 11 07:58:55 crc kubenswrapper[5016]: I1011 07:58:55.685039 5016 generic.go:334] "Generic (PLEG): container finished" podID="c7b65d12-aa4e-439c-8c40-af327ebe8c88" containerID="17f9c9a3fe9f8aa9babcdeba9bd87e7da27c3f298176db8dccf7691277d213e0" exitCode=0 Oct 11 07:58:55 crc kubenswrapper[5016]: I1011 07:58:55.685115 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7b65d12-aa4e-439c-8c40-af327ebe8c88","Type":"ContainerDied","Data":"17f9c9a3fe9f8aa9babcdeba9bd87e7da27c3f298176db8dccf7691277d213e0"} Oct 11 07:58:55 crc kubenswrapper[5016]: I1011 07:58:55.689371 5016 generic.go:334] "Generic (PLEG): container finished" 
podID="5907d55e-d11c-4aae-be25-381ad731178b" containerID="0792b74fdd86f7aca8328e33948a8b9af2e0531c7a9b03fb4af807930c6c7c3a" exitCode=143 Oct 11 07:58:55 crc kubenswrapper[5016]: I1011 07:58:55.689453 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5907d55e-d11c-4aae-be25-381ad731178b","Type":"ContainerDied","Data":"0792b74fdd86f7aca8328e33948a8b9af2e0531c7a9b03fb4af807930c6c7c3a"} Oct 11 07:58:55 crc kubenswrapper[5016]: I1011 07:58:55.916781 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.019168 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-ceilometer-tls-certs\") pod \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.019201 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-combined-ca-bundle\") pod \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.019246 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7b65d12-aa4e-439c-8c40-af327ebe8c88-run-httpd\") pod \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.019264 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-config-data\") pod \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.019331 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-scripts\") pod \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.019403 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7b65d12-aa4e-439c-8c40-af327ebe8c88-log-httpd\") pod \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.019421 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-sg-core-conf-yaml\") pod \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.019450 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtpg4\" (UniqueName: \"kubernetes.io/projected/c7b65d12-aa4e-439c-8c40-af327ebe8c88-kube-api-access-jtpg4\") pod \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\" (UID: \"c7b65d12-aa4e-439c-8c40-af327ebe8c88\") " Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.019567 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/c7b65d12-aa4e-439c-8c40-af327ebe8c88-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c7b65d12-aa4e-439c-8c40-af327ebe8c88" (UID: "c7b65d12-aa4e-439c-8c40-af327ebe8c88"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.019887 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7b65d12-aa4e-439c-8c40-af327ebe8c88-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c7b65d12-aa4e-439c-8c40-af327ebe8c88" (UID: "c7b65d12-aa4e-439c-8c40-af327ebe8c88"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.019903 5016 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7b65d12-aa4e-439c-8c40-af327ebe8c88-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.025449 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-scripts" (OuterVolumeSpecName: "scripts") pod "c7b65d12-aa4e-439c-8c40-af327ebe8c88" (UID: "c7b65d12-aa4e-439c-8c40-af327ebe8c88"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.025826 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7b65d12-aa4e-439c-8c40-af327ebe8c88-kube-api-access-jtpg4" (OuterVolumeSpecName: "kube-api-access-jtpg4") pod "c7b65d12-aa4e-439c-8c40-af327ebe8c88" (UID: "c7b65d12-aa4e-439c-8c40-af327ebe8c88"). InnerVolumeSpecName "kube-api-access-jtpg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.065259 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c7b65d12-aa4e-439c-8c40-af327ebe8c88" (UID: "c7b65d12-aa4e-439c-8c40-af327ebe8c88"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.093871 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c7b65d12-aa4e-439c-8c40-af327ebe8c88" (UID: "c7b65d12-aa4e-439c-8c40-af327ebe8c88"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.121285 5016 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-scripts\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.121319 5016 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c7b65d12-aa4e-439c-8c40-af327ebe8c88-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.121328 5016 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.121339 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtpg4\" (UniqueName: \"kubernetes.io/projected/c7b65d12-aa4e-439c-8c40-af327ebe8c88-kube-api-access-jtpg4\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.121349 5016 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.174229 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-config-data" (OuterVolumeSpecName: "config-data") pod "c7b65d12-aa4e-439c-8c40-af327ebe8c88" (UID: "c7b65d12-aa4e-439c-8c40-af327ebe8c88"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.174277 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7b65d12-aa4e-439c-8c40-af327ebe8c88" (UID: "c7b65d12-aa4e-439c-8c40-af327ebe8c88"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.223311 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.223356 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b65d12-aa4e-439c-8c40-af327ebe8c88-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.700009 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c7b65d12-aa4e-439c-8c40-af327ebe8c88","Type":"ContainerDied","Data":"2d9fec95a60fc8dd0b823d1ca3ffc0fdc35fc94b440b86d80c3497e0a647acd0"} Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.700476 5016 scope.go:117] "RemoveContainer" containerID="d903b91eb5574e286cc335c6052d30e5b03a6bd2c678c0775d5e4b3f0fb31ec0" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.700052 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.731513 5016 scope.go:117] "RemoveContainer" containerID="0e7c731f38fa174e86ba1610cae56ba301bca4f65b5d16008ee3cbcdcaee4544" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.732252 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.740231 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.761350 5016 scope.go:117] "RemoveContainer" containerID="17f9c9a3fe9f8aa9babcdeba9bd87e7da27c3f298176db8dccf7691277d213e0" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.772960 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 11 07:58:56 crc kubenswrapper[5016]: E1011 07:58:56.773360 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b65d12-aa4e-439c-8c40-af327ebe8c88" containerName="ceilometer-notification-agent" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.773378 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b65d12-aa4e-439c-8c40-af327ebe8c88" containerName="ceilometer-notification-agent" Oct 11 07:58:56 crc kubenswrapper[5016]: E1011 07:58:56.773397 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b65d12-aa4e-439c-8c40-af327ebe8c88" containerName="sg-core" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.773404 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b65d12-aa4e-439c-8c40-af327ebe8c88" containerName="sg-core" Oct 11 07:58:56 crc kubenswrapper[5016]: E1011 07:58:56.773425 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b65d12-aa4e-439c-8c40-af327ebe8c88" containerName="ceilometer-central-agent" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.773432 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b65d12-aa4e-439c-8c40-af327ebe8c88" containerName="ceilometer-central-agent" Oct 11 07:58:56 crc kubenswrapper[5016]: E1011 07:58:56.773455 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b65d12-aa4e-439c-8c40-af327ebe8c88" containerName="proxy-httpd" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.773463 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b65d12-aa4e-439c-8c40-af327ebe8c88" containerName="proxy-httpd" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.773638 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b65d12-aa4e-439c-8c40-af327ebe8c88" containerName="sg-core" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.773670 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b65d12-aa4e-439c-8c40-af327ebe8c88" containerName="ceilometer-central-agent" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.773680 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b65d12-aa4e-439c-8c40-af327ebe8c88" containerName="proxy-httpd" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.773692 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b65d12-aa4e-439c-8c40-af327ebe8c88" containerName="ceilometer-notification-agent" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.775254 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.781521 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.784537 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.785174 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.787669 5016 scope.go:117] "RemoveContainer" containerID="47a583747ef098ea4ce6ca00d3df90db1342290b40a692c439485d2d910431d9" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.787792 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.833804 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-scripts\") pod \"ceilometer-0\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " pod="openstack/ceilometer-0" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.833861 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " pod="openstack/ceilometer-0" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.834057 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-config-data\") pod \"ceilometer-0\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " pod="openstack/ceilometer-0" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.834126 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " pod="openstack/ceilometer-0" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.834168 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2rxd\" (UniqueName: \"kubernetes.io/projected/78274b80-0332-4e1a-8860-1e11cac32d0b-kube-api-access-m2rxd\") pod \"ceilometer-0\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " pod="openstack/ceilometer-0" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.834194 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78274b80-0332-4e1a-8860-1e11cac32d0b-log-httpd\") pod \"ceilometer-0\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " pod="openstack/ceilometer-0" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.834331 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " pod="openstack/ceilometer-0" Oct 11 07:58:56 crc kubenswrapper[5016]: 
I1011 07:58:56.834396 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78274b80-0332-4e1a-8860-1e11cac32d0b-run-httpd\") pod \"ceilometer-0\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " pod="openstack/ceilometer-0" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.935986 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78274b80-0332-4e1a-8860-1e11cac32d0b-run-httpd\") pod \"ceilometer-0\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " pod="openstack/ceilometer-0" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.936031 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-scripts\") pod \"ceilometer-0\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " pod="openstack/ceilometer-0" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.936060 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " pod="openstack/ceilometer-0" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.936118 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-config-data\") pod \"ceilometer-0\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " pod="openstack/ceilometer-0" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.936151 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " pod="openstack/ceilometer-0" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.936178 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2rxd\" (UniqueName: \"kubernetes.io/projected/78274b80-0332-4e1a-8860-1e11cac32d0b-kube-api-access-m2rxd\") pod \"ceilometer-0\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " pod="openstack/ceilometer-0" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.936198 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78274b80-0332-4e1a-8860-1e11cac32d0b-log-httpd\") pod \"ceilometer-0\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " pod="openstack/ceilometer-0" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.936284 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " pod="openstack/ceilometer-0" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.936725 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78274b80-0332-4e1a-8860-1e11cac32d0b-run-httpd\") pod \"ceilometer-0\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " pod="openstack/ceilometer-0" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 
07:58:56.936974 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78274b80-0332-4e1a-8860-1e11cac32d0b-log-httpd\") pod \"ceilometer-0\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " pod="openstack/ceilometer-0" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.940892 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " pod="openstack/ceilometer-0" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.941423 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-config-data\") pod \"ceilometer-0\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " pod="openstack/ceilometer-0" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.942151 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-scripts\") pod \"ceilometer-0\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " pod="openstack/ceilometer-0" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.942397 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " pod="openstack/ceilometer-0" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.944237 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " pod="openstack/ceilometer-0" Oct 11 07:58:56 crc kubenswrapper[5016]: I1011 07:58:56.954999 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2rxd\" (UniqueName: \"kubernetes.io/projected/78274b80-0332-4e1a-8860-1e11cac32d0b-kube-api-access-m2rxd\") pod \"ceilometer-0\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " pod="openstack/ceilometer-0" Oct 11 07:58:57 crc kubenswrapper[5016]: I1011 07:58:57.099847 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 11 07:58:57 crc kubenswrapper[5016]: I1011 07:58:57.148783 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7b65d12-aa4e-439c-8c40-af327ebe8c88" path="/var/lib/kubelet/pods/c7b65d12-aa4e-439c-8c40-af327ebe8c88/volumes" Oct 11 07:58:57 crc kubenswrapper[5016]: I1011 07:58:57.597761 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 11 07:58:57 crc kubenswrapper[5016]: W1011 07:58:57.599251 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78274b80_0332_4e1a_8860_1e11cac32d0b.slice/crio-cf16bc5a862699816f3f2189fa8f6a9853dbc6ab7071634aa670f16993cc451a WatchSource:0}: Error finding container cf16bc5a862699816f3f2189fa8f6a9853dbc6ab7071634aa670f16993cc451a: Status 404 returned error can't find the container with id cf16bc5a862699816f3f2189fa8f6a9853dbc6ab7071634aa670f16993cc451a Oct 11 07:58:57 crc kubenswrapper[5016]: I1011 07:58:57.710126 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78274b80-0332-4e1a-8860-1e11cac32d0b","Type":"ContainerStarted","Data":"cf16bc5a862699816f3f2189fa8f6a9853dbc6ab7071634aa670f16993cc451a"} Oct 11 07:58:57 crc kubenswrapper[5016]: I1011 07:58:57.949556 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:57 crc kubenswrapper[5016]: I1011 07:58:57.968053 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.188009 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.267933 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5907d55e-d11c-4aae-be25-381ad731178b-config-data\") pod \"5907d55e-d11c-4aae-be25-381ad731178b\" (UID: \"5907d55e-d11c-4aae-be25-381ad731178b\") " Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.268323 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5907d55e-d11c-4aae-be25-381ad731178b-combined-ca-bundle\") pod \"5907d55e-d11c-4aae-be25-381ad731178b\" (UID: \"5907d55e-d11c-4aae-be25-381ad731178b\") " Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.268493 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5907d55e-d11c-4aae-be25-381ad731178b-logs\") pod \"5907d55e-d11c-4aae-be25-381ad731178b\" (UID: \"5907d55e-d11c-4aae-be25-381ad731178b\") " Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.268533 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kkv6\" (UniqueName: \"kubernetes.io/projected/5907d55e-d11c-4aae-be25-381ad731178b-kube-api-access-5kkv6\") pod \"5907d55e-d11c-4aae-be25-381ad731178b\" (UID: \"5907d55e-d11c-4aae-be25-381ad731178b\") " Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.270074 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5907d55e-d11c-4aae-be25-381ad731178b-logs" (OuterVolumeSpecName: "logs") pod "5907d55e-d11c-4aae-be25-381ad731178b" (UID: "5907d55e-d11c-4aae-be25-381ad731178b"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.273908 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5907d55e-d11c-4aae-be25-381ad731178b-kube-api-access-5kkv6" (OuterVolumeSpecName: "kube-api-access-5kkv6") pod "5907d55e-d11c-4aae-be25-381ad731178b" (UID: "5907d55e-d11c-4aae-be25-381ad731178b"). InnerVolumeSpecName "kube-api-access-5kkv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.302737 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5907d55e-d11c-4aae-be25-381ad731178b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5907d55e-d11c-4aae-be25-381ad731178b" (UID: "5907d55e-d11c-4aae-be25-381ad731178b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.311172 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5907d55e-d11c-4aae-be25-381ad731178b-config-data" (OuterVolumeSpecName: "config-data") pod "5907d55e-d11c-4aae-be25-381ad731178b" (UID: "5907d55e-d11c-4aae-be25-381ad731178b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.376455 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5907d55e-d11c-4aae-be25-381ad731178b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.376496 5016 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5907d55e-d11c-4aae-be25-381ad731178b-logs\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.376526 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kkv6\" (UniqueName: \"kubernetes.io/projected/5907d55e-d11c-4aae-be25-381ad731178b-kube-api-access-5kkv6\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.376538 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5907d55e-d11c-4aae-be25-381ad731178b-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.735230 5016 generic.go:334] "Generic (PLEG): container finished" podID="5907d55e-d11c-4aae-be25-381ad731178b" containerID="681b4c747003a8c915c6c4579cabbca0d5ae274d6de8ca56626ecc33ddcb9a14" exitCode=0 Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.735336 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.735374 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5907d55e-d11c-4aae-be25-381ad731178b","Type":"ContainerDied","Data":"681b4c747003a8c915c6c4579cabbca0d5ae274d6de8ca56626ecc33ddcb9a14"} Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.737274 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5907d55e-d11c-4aae-be25-381ad731178b","Type":"ContainerDied","Data":"6ea803006811ab411d7a75019f3f554da469f6e1ceba67e377f531b0806dca0f"} Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.737304 5016 scope.go:117] "RemoveContainer" containerID="681b4c747003a8c915c6c4579cabbca0d5ae274d6de8ca56626ecc33ddcb9a14" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.739807 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78274b80-0332-4e1a-8860-1e11cac32d0b","Type":"ContainerStarted","Data":"92024d431f4574f45061d05b12c61dbf348bf5828cce0c4008faabad01e42c65"} Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.765065 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.859100 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.879218 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.882861 5016 scope.go:117] "RemoveContainer" containerID="0792b74fdd86f7aca8328e33948a8b9af2e0531c7a9b03fb4af807930c6c7c3a" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.889476 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 11 07:58:58 crc kubenswrapper[5016]: E1011 07:58:58.889942 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5907d55e-d11c-4aae-be25-381ad731178b" containerName="nova-api-api" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.889963 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="5907d55e-d11c-4aae-be25-381ad731178b" containerName="nova-api-api" Oct 11 07:58:58 crc kubenswrapper[5016]: E1011 07:58:58.890002 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5907d55e-d11c-4aae-be25-381ad731178b" containerName="nova-api-log" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.890008 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="5907d55e-d11c-4aae-be25-381ad731178b" containerName="nova-api-log" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.890165 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="5907d55e-d11c-4aae-be25-381ad731178b" containerName="nova-api-api" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.890178 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="5907d55e-d11c-4aae-be25-381ad731178b" containerName="nova-api-log" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.891196 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.894204 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.894524 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.898841 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.906831 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.975427 5016 scope.go:117] "RemoveContainer" containerID="681b4c747003a8c915c6c4579cabbca0d5ae274d6de8ca56626ecc33ddcb9a14" Oct 11 07:58:58 crc kubenswrapper[5016]: E1011 07:58:58.975809 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"681b4c747003a8c915c6c4579cabbca0d5ae274d6de8ca56626ecc33ddcb9a14\": container with ID starting with 681b4c747003a8c915c6c4579cabbca0d5ae274d6de8ca56626ecc33ddcb9a14 not found: ID does not exist" containerID="681b4c747003a8c915c6c4579cabbca0d5ae274d6de8ca56626ecc33ddcb9a14" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.975849 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"681b4c747003a8c915c6c4579cabbca0d5ae274d6de8ca56626ecc33ddcb9a14"} err="failed to get container status \"681b4c747003a8c915c6c4579cabbca0d5ae274d6de8ca56626ecc33ddcb9a14\": rpc error: code = NotFound desc = could not find container \"681b4c747003a8c915c6c4579cabbca0d5ae274d6de8ca56626ecc33ddcb9a14\": container with ID starting with 681b4c747003a8c915c6c4579cabbca0d5ae274d6de8ca56626ecc33ddcb9a14 not found: ID does not exist" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.975875 5016 scope.go:117] "RemoveContainer" containerID="0792b74fdd86f7aca8328e33948a8b9af2e0531c7a9b03fb4af807930c6c7c3a" Oct 11 07:58:58 crc kubenswrapper[5016]: E1011 07:58:58.976164 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0792b74fdd86f7aca8328e33948a8b9af2e0531c7a9b03fb4af807930c6c7c3a\": container with ID starting with 0792b74fdd86f7aca8328e33948a8b9af2e0531c7a9b03fb4af807930c6c7c3a not found: ID does not exist" containerID="0792b74fdd86f7aca8328e33948a8b9af2e0531c7a9b03fb4af807930c6c7c3a" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.976191 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0792b74fdd86f7aca8328e33948a8b9af2e0531c7a9b03fb4af807930c6c7c3a"} err="failed to get container status \"0792b74fdd86f7aca8328e33948a8b9af2e0531c7a9b03fb4af807930c6c7c3a\": rpc error: code = NotFound desc = could not find container \"0792b74fdd86f7aca8328e33948a8b9af2e0531c7a9b03fb4af807930c6c7c3a\": container with ID starting with 0792b74fdd86f7aca8328e33948a8b9af2e0531c7a9b03fb4af807930c6c7c3a not found: ID does not exist" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.993722 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a49c6c4-0644-451a-b63c-eb97a20957d2-public-tls-certs\") pod \"nova-api-0\" (UID: \"4a49c6c4-0644-451a-b63c-eb97a20957d2\") " pod="openstack/nova-api-0" Oct 11 
07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.993790 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a49c6c4-0644-451a-b63c-eb97a20957d2-logs\") pod \"nova-api-0\" (UID: \"4a49c6c4-0644-451a-b63c-eb97a20957d2\") " pod="openstack/nova-api-0" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.993841 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfkh2\" (UniqueName: \"kubernetes.io/projected/4a49c6c4-0644-451a-b63c-eb97a20957d2-kube-api-access-vfkh2\") pod \"nova-api-0\" (UID: \"4a49c6c4-0644-451a-b63c-eb97a20957d2\") " pod="openstack/nova-api-0" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.993894 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a49c6c4-0644-451a-b63c-eb97a20957d2-config-data\") pod \"nova-api-0\" (UID: \"4a49c6c4-0644-451a-b63c-eb97a20957d2\") " pod="openstack/nova-api-0" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.993945 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a49c6c4-0644-451a-b63c-eb97a20957d2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4a49c6c4-0644-451a-b63c-eb97a20957d2\") " pod="openstack/nova-api-0" Oct 11 07:58:58 crc kubenswrapper[5016]: I1011 07:58:58.993961 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a49c6c4-0644-451a-b63c-eb97a20957d2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4a49c6c4-0644-451a-b63c-eb97a20957d2\") " pod="openstack/nova-api-0" Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.020183 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-tgcng"] Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.022004 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tgcng" Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.027186 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.027343 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.038310 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-tgcng"] Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.095957 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ee09e0c-61a1-446f-b2d8-d74cd60e3152-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tgcng\" (UID: \"2ee09e0c-61a1-446f-b2d8-d74cd60e3152\") " pod="openstack/nova-cell1-cell-mapping-tgcng" Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.096035 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a49c6c4-0644-451a-b63c-eb97a20957d2-config-data\") pod \"nova-api-0\" (UID: \"4a49c6c4-0644-451a-b63c-eb97a20957d2\") " pod="openstack/nova-api-0" Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.096280 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a49c6c4-0644-451a-b63c-eb97a20957d2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4a49c6c4-0644-451a-b63c-eb97a20957d2\") " pod="openstack/nova-api-0" Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.096332 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a49c6c4-0644-451a-b63c-eb97a20957d2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4a49c6c4-0644-451a-b63c-eb97a20957d2\") " pod="openstack/nova-api-0" Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.096407 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a49c6c4-0644-451a-b63c-eb97a20957d2-public-tls-certs\") pod \"nova-api-0\" (UID: \"4a49c6c4-0644-451a-b63c-eb97a20957d2\") " pod="openstack/nova-api-0" Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.096504 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a49c6c4-0644-451a-b63c-eb97a20957d2-logs\") pod \"nova-api-0\" (UID: \"4a49c6c4-0644-451a-b63c-eb97a20957d2\") " pod="openstack/nova-api-0" Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.096563 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ee09e0c-61a1-446f-b2d8-d74cd60e3152-config-data\") pod \"nova-cell1-cell-mapping-tgcng\" (UID: \"2ee09e0c-61a1-446f-b2d8-d74cd60e3152\") " pod="openstack/nova-cell1-cell-mapping-tgcng" Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.096595 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrfng\" (UniqueName: \"kubernetes.io/projected/2ee09e0c-61a1-446f-b2d8-d74cd60e3152-kube-api-access-lrfng\") pod \"nova-cell1-cell-mapping-tgcng\" (UID: \"2ee09e0c-61a1-446f-b2d8-d74cd60e3152\") " 
pod="openstack/nova-cell1-cell-mapping-tgcng" Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.096704 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfkh2\" (UniqueName: \"kubernetes.io/projected/4a49c6c4-0644-451a-b63c-eb97a20957d2-kube-api-access-vfkh2\") pod \"nova-api-0\" (UID: \"4a49c6c4-0644-451a-b63c-eb97a20957d2\") " pod="openstack/nova-api-0" Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.096749 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ee09e0c-61a1-446f-b2d8-d74cd60e3152-scripts\") pod \"nova-cell1-cell-mapping-tgcng\" (UID: \"2ee09e0c-61a1-446f-b2d8-d74cd60e3152\") " pod="openstack/nova-cell1-cell-mapping-tgcng" Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.097086 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a49c6c4-0644-451a-b63c-eb97a20957d2-logs\") pod \"nova-api-0\" (UID: \"4a49c6c4-0644-451a-b63c-eb97a20957d2\") " pod="openstack/nova-api-0" Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.100169 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a49c6c4-0644-451a-b63c-eb97a20957d2-config-data\") pod \"nova-api-0\" (UID: \"4a49c6c4-0644-451a-b63c-eb97a20957d2\") " pod="openstack/nova-api-0" Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.101817 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a49c6c4-0644-451a-b63c-eb97a20957d2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4a49c6c4-0644-451a-b63c-eb97a20957d2\") " pod="openstack/nova-api-0" Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.108186 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a49c6c4-0644-451a-b63c-eb97a20957d2-public-tls-certs\") pod \"nova-api-0\" (UID: \"4a49c6c4-0644-451a-b63c-eb97a20957d2\") " pod="openstack/nova-api-0" Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.109302 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a49c6c4-0644-451a-b63c-eb97a20957d2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4a49c6c4-0644-451a-b63c-eb97a20957d2\") " pod="openstack/nova-api-0" Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.111741 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfkh2\" (UniqueName: \"kubernetes.io/projected/4a49c6c4-0644-451a-b63c-eb97a20957d2-kube-api-access-vfkh2\") pod \"nova-api-0\" (UID: \"4a49c6c4-0644-451a-b63c-eb97a20957d2\") " pod="openstack/nova-api-0" Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.145251 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5907d55e-d11c-4aae-be25-381ad731178b" path="/var/lib/kubelet/pods/5907d55e-d11c-4aae-be25-381ad731178b/volumes" Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.198206 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ee09e0c-61a1-446f-b2d8-d74cd60e3152-config-data\") pod \"nova-cell1-cell-mapping-tgcng\" (UID: \"2ee09e0c-61a1-446f-b2d8-d74cd60e3152\") " pod="openstack/nova-cell1-cell-mapping-tgcng" Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 
07:58:59.198255 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrfng\" (UniqueName: \"kubernetes.io/projected/2ee09e0c-61a1-446f-b2d8-d74cd60e3152-kube-api-access-lrfng\") pod \"nova-cell1-cell-mapping-tgcng\" (UID: \"2ee09e0c-61a1-446f-b2d8-d74cd60e3152\") " pod="openstack/nova-cell1-cell-mapping-tgcng"
Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.198301 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ee09e0c-61a1-446f-b2d8-d74cd60e3152-scripts\") pod \"nova-cell1-cell-mapping-tgcng\" (UID: \"2ee09e0c-61a1-446f-b2d8-d74cd60e3152\") " pod="openstack/nova-cell1-cell-mapping-tgcng"
Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.198331 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ee09e0c-61a1-446f-b2d8-d74cd60e3152-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tgcng\" (UID: \"2ee09e0c-61a1-446f-b2d8-d74cd60e3152\") " pod="openstack/nova-cell1-cell-mapping-tgcng"
Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.201757 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ee09e0c-61a1-446f-b2d8-d74cd60e3152-scripts\") pod \"nova-cell1-cell-mapping-tgcng\" (UID: \"2ee09e0c-61a1-446f-b2d8-d74cd60e3152\") " pod="openstack/nova-cell1-cell-mapping-tgcng"
Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.202561 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ee09e0c-61a1-446f-b2d8-d74cd60e3152-config-data\") pod \"nova-cell1-cell-mapping-tgcng\" (UID: \"2ee09e0c-61a1-446f-b2d8-d74cd60e3152\") " pod="openstack/nova-cell1-cell-mapping-tgcng"
Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.202886 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ee09e0c-61a1-446f-b2d8-d74cd60e3152-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tgcng\" (UID: \"2ee09e0c-61a1-446f-b2d8-d74cd60e3152\") " pod="openstack/nova-cell1-cell-mapping-tgcng"
Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.215290 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrfng\" (UniqueName: \"kubernetes.io/projected/2ee09e0c-61a1-446f-b2d8-d74cd60e3152-kube-api-access-lrfng\") pod \"nova-cell1-cell-mapping-tgcng\" (UID: \"2ee09e0c-61a1-446f-b2d8-d74cd60e3152\") " pod="openstack/nova-cell1-cell-mapping-tgcng"
Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.292394 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.339146 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tgcng"
Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.753732 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78274b80-0332-4e1a-8860-1e11cac32d0b","Type":"ContainerStarted","Data":"3a35cb7c8bba2d3bf33f49611885969e9937c21cb3de207e7b3d18ca25f72ffc"}
Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.754056 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78274b80-0332-4e1a-8860-1e11cac32d0b","Type":"ContainerStarted","Data":"704a25afd183f6a3a963cac10809ee59a9ddedbca1e75e2cf35f2279af182b8c"}
Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.773219 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-tgcng"]
Oct 11 07:58:59 crc kubenswrapper[5016]: I1011 07:58:59.854391 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 11 07:58:59 crc kubenswrapper[5016]: W1011 07:58:59.862190 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a49c6c4_0644_451a_b63c_eb97a20957d2.slice/crio-64e56e5458436c2995935b540935c95dad9f0cf3d4e48a9c0b28fbf06a40ddb3 WatchSource:0}: Error finding container 64e56e5458436c2995935b540935c95dad9f0cf3d4e48a9c0b28fbf06a40ddb3: Status 404 returned error can't find the container with id 64e56e5458436c2995935b540935c95dad9f0cf3d4e48a9c0b28fbf06a40ddb3
Oct 11 07:59:00 crc kubenswrapper[5016]: I1011 07:59:00.762705 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tgcng" event={"ID":"2ee09e0c-61a1-446f-b2d8-d74cd60e3152","Type":"ContainerStarted","Data":"53bd2136c9cb244abc20f388e52414168c8c2b1afe6e5cc4ae9ee7e376b011cf"}
Oct 11 07:59:00 crc kubenswrapper[5016]: I1011 07:59:00.763571 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tgcng" event={"ID":"2ee09e0c-61a1-446f-b2d8-d74cd60e3152","Type":"ContainerStarted","Data":"64c11e048a4fd005e8ae4622462a05c80995c377b3cd7e408a9fa5f913c5d07d"}
Oct 11 07:59:00 crc kubenswrapper[5016]: I1011 07:59:00.767964 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4a49c6c4-0644-451a-b63c-eb97a20957d2","Type":"ContainerStarted","Data":"e72a23cdb139cbc1be54ec86fe0390f8b409f59c0fc24941882f96dbc4dde65b"}
Oct 11 07:59:00 crc kubenswrapper[5016]: I1011 07:59:00.768216 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4a49c6c4-0644-451a-b63c-eb97a20957d2","Type":"ContainerStarted","Data":"6f769b2f9efdee438201c07dda6b85f94e6e6b96df20ae463dc1d71b639dce6f"}
Oct 11 07:59:00 crc kubenswrapper[5016]: I1011 07:59:00.768345 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4a49c6c4-0644-451a-b63c-eb97a20957d2","Type":"ContainerStarted","Data":"64e56e5458436c2995935b540935c95dad9f0cf3d4e48a9c0b28fbf06a40ddb3"}
Oct 11 07:59:00 crc kubenswrapper[5016]: I1011 07:59:00.784021 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-tgcng" podStartSLOduration=2.783989912 podStartE2EDuration="2.783989912s" podCreationTimestamp="2025-10-11 07:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:59:00.779352171 +0000 UTC m=+1128.679808117" watchObservedRunningTime="2025-10-11 07:59:00.783989912 +0000 UTC m=+1128.684445898"
Oct 11 07:59:00 crc kubenswrapper[5016]: I1011 07:59:00.818682 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.818642137 podStartE2EDuration="2.818642137s" podCreationTimestamp="2025-10-11 07:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:59:00.81457777 +0000 UTC m=+1128.715033706" watchObservedRunningTime="2025-10-11 07:59:00.818642137 +0000 UTC m=+1128.719098083"
Oct 11 07:59:01 crc kubenswrapper[5016]: I1011 07:59:01.779249 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78274b80-0332-4e1a-8860-1e11cac32d0b","Type":"ContainerStarted","Data":"24fc6c4453b265fd03d88a0a5b1646dc1744c36275020280067fe86ea048dded"}
Oct 11 07:59:01 crc kubenswrapper[5016]: I1011 07:59:01.809226 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.494161935 podStartE2EDuration="5.809208132s" podCreationTimestamp="2025-10-11 07:58:56 +0000 UTC" firstStartedPulling="2025-10-11 07:58:57.601465504 +0000 UTC m=+1125.501921450" lastFinishedPulling="2025-10-11 07:59:00.916511701 +0000 UTC m=+1128.816967647" observedRunningTime="2025-10-11 07:59:01.805202608 +0000 UTC m=+1129.705658554" watchObservedRunningTime="2025-10-11 07:59:01.809208132 +0000 UTC m=+1129.709664068"
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.133623 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-869677f947-82f6z"
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.207604 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54974c8ff5-6tx6j"]
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.207992 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54974c8ff5-6tx6j" podUID="91e60a92-017b-4e6f-99c2-4afce0c72bbc" containerName="dnsmasq-dns" containerID="cri-o://95355a2b36e1160c440cd9d2867a20c4de6a5e805254b242dc824410ec2cb4f8" gracePeriod=10
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.726303 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54974c8ff5-6tx6j"
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.778477 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91e60a92-017b-4e6f-99c2-4afce0c72bbc-dns-svc\") pod \"91e60a92-017b-4e6f-99c2-4afce0c72bbc\" (UID: \"91e60a92-017b-4e6f-99c2-4afce0c72bbc\") "
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.779482 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91e60a92-017b-4e6f-99c2-4afce0c72bbc-config\") pod \"91e60a92-017b-4e6f-99c2-4afce0c72bbc\" (UID: \"91e60a92-017b-4e6f-99c2-4afce0c72bbc\") "
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.779695 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5kcq\" (UniqueName: \"kubernetes.io/projected/91e60a92-017b-4e6f-99c2-4afce0c72bbc-kube-api-access-q5kcq\") pod \"91e60a92-017b-4e6f-99c2-4afce0c72bbc\" (UID: \"91e60a92-017b-4e6f-99c2-4afce0c72bbc\") "
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.779803 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91e60a92-017b-4e6f-99c2-4afce0c72bbc-ovsdbserver-sb\") pod \"91e60a92-017b-4e6f-99c2-4afce0c72bbc\" (UID: \"91e60a92-017b-4e6f-99c2-4afce0c72bbc\") "
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.779870 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91e60a92-017b-4e6f-99c2-4afce0c72bbc-ovsdbserver-nb\") pod \"91e60a92-017b-4e6f-99c2-4afce0c72bbc\" (UID: \"91e60a92-017b-4e6f-99c2-4afce0c72bbc\") "
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.795695 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91e60a92-017b-4e6f-99c2-4afce0c72bbc-kube-api-access-q5kcq" (OuterVolumeSpecName: "kube-api-access-q5kcq") pod "91e60a92-017b-4e6f-99c2-4afce0c72bbc" (UID: "91e60a92-017b-4e6f-99c2-4afce0c72bbc"). InnerVolumeSpecName "kube-api-access-q5kcq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.800891 5016 generic.go:334] "Generic (PLEG): container finished" podID="91e60a92-017b-4e6f-99c2-4afce0c72bbc" containerID="95355a2b36e1160c440cd9d2867a20c4de6a5e805254b242dc824410ec2cb4f8" exitCode=0
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.800946 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54974c8ff5-6tx6j"
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.801010 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54974c8ff5-6tx6j" event={"ID":"91e60a92-017b-4e6f-99c2-4afce0c72bbc","Type":"ContainerDied","Data":"95355a2b36e1160c440cd9d2867a20c4de6a5e805254b242dc824410ec2cb4f8"}
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.801041 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54974c8ff5-6tx6j" event={"ID":"91e60a92-017b-4e6f-99c2-4afce0c72bbc","Type":"ContainerDied","Data":"c7a49ad00b30420bf1510374fb79f42d05b3fea8e350fcf403748267dadd0103"}
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.801062 5016 scope.go:117] "RemoveContainer" containerID="95355a2b36e1160c440cd9d2867a20c4de6a5e805254b242dc824410ec2cb4f8"
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.801406 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.837235 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91e60a92-017b-4e6f-99c2-4afce0c72bbc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "91e60a92-017b-4e6f-99c2-4afce0c72bbc" (UID: "91e60a92-017b-4e6f-99c2-4afce0c72bbc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.839058 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91e60a92-017b-4e6f-99c2-4afce0c72bbc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "91e60a92-017b-4e6f-99c2-4afce0c72bbc" (UID: "91e60a92-017b-4e6f-99c2-4afce0c72bbc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.855642 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91e60a92-017b-4e6f-99c2-4afce0c72bbc-config" (OuterVolumeSpecName: "config") pod "91e60a92-017b-4e6f-99c2-4afce0c72bbc" (UID: "91e60a92-017b-4e6f-99c2-4afce0c72bbc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.872202 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91e60a92-017b-4e6f-99c2-4afce0c72bbc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "91e60a92-017b-4e6f-99c2-4afce0c72bbc" (UID: "91e60a92-017b-4e6f-99c2-4afce0c72bbc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.882049 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5kcq\" (UniqueName: \"kubernetes.io/projected/91e60a92-017b-4e6f-99c2-4afce0c72bbc-kube-api-access-q5kcq\") on node \"crc\" DevicePath \"\""
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.882130 5016 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91e60a92-017b-4e6f-99c2-4afce0c72bbc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.882141 5016 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91e60a92-017b-4e6f-99c2-4afce0c72bbc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.882149 5016 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91e60a92-017b-4e6f-99c2-4afce0c72bbc-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.882159 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91e60a92-017b-4e6f-99c2-4afce0c72bbc-config\") on node \"crc\" DevicePath \"\""
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.928071 5016 scope.go:117] "RemoveContainer" containerID="35f9d420a3fcb087863ca13dd927f6aaa7f50649dd5705bba5cc0950d13be888"
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.953969 5016 scope.go:117] "RemoveContainer" containerID="95355a2b36e1160c440cd9d2867a20c4de6a5e805254b242dc824410ec2cb4f8"
Oct 11 07:59:02 crc kubenswrapper[5016]: E1011 07:59:02.954551 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95355a2b36e1160c440cd9d2867a20c4de6a5e805254b242dc824410ec2cb4f8\": container with ID starting with 95355a2b36e1160c440cd9d2867a20c4de6a5e805254b242dc824410ec2cb4f8 not found: ID does not exist" containerID="95355a2b36e1160c440cd9d2867a20c4de6a5e805254b242dc824410ec2cb4f8"
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.954801 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95355a2b36e1160c440cd9d2867a20c4de6a5e805254b242dc824410ec2cb4f8"} err="failed to get container status \"95355a2b36e1160c440cd9d2867a20c4de6a5e805254b242dc824410ec2cb4f8\": rpc error: code = NotFound desc = could not find container \"95355a2b36e1160c440cd9d2867a20c4de6a5e805254b242dc824410ec2cb4f8\": container with ID starting with 95355a2b36e1160c440cd9d2867a20c4de6a5e805254b242dc824410ec2cb4f8 not found: ID does not exist"
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.954911 5016 scope.go:117] "RemoveContainer" containerID="35f9d420a3fcb087863ca13dd927f6aaa7f50649dd5705bba5cc0950d13be888"
Oct 11 07:59:02 crc kubenswrapper[5016]: E1011 07:59:02.955805 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35f9d420a3fcb087863ca13dd927f6aaa7f50649dd5705bba5cc0950d13be888\": container with ID starting with 35f9d420a3fcb087863ca13dd927f6aaa7f50649dd5705bba5cc0950d13be888 not found: ID does not exist" containerID="35f9d420a3fcb087863ca13dd927f6aaa7f50649dd5705bba5cc0950d13be888"
Oct 11 07:59:02 crc kubenswrapper[5016]: I1011 07:59:02.955874 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35f9d420a3fcb087863ca13dd927f6aaa7f50649dd5705bba5cc0950d13be888"} err="failed to get container status \"35f9d420a3fcb087863ca13dd927f6aaa7f50649dd5705bba5cc0950d13be888\": rpc error: code = NotFound desc = could not find container \"35f9d420a3fcb087863ca13dd927f6aaa7f50649dd5705bba5cc0950d13be888\": container with ID starting with 35f9d420a3fcb087863ca13dd927f6aaa7f50649dd5705bba5cc0950d13be888 not found: ID does not exist"
Oct 11 07:59:03 crc kubenswrapper[5016]: I1011 07:59:03.147890 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54974c8ff5-6tx6j"]
Oct 11 07:59:03 crc kubenswrapper[5016]: I1011 07:59:03.152108 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54974c8ff5-6tx6j"]
Oct 11 07:59:05 crc kubenswrapper[5016]: I1011 07:59:05.148110 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91e60a92-017b-4e6f-99c2-4afce0c72bbc" path="/var/lib/kubelet/pods/91e60a92-017b-4e6f-99c2-4afce0c72bbc/volumes"
Oct 11 07:59:05 crc kubenswrapper[5016]: I1011 07:59:05.829022 5016 generic.go:334] "Generic (PLEG): container finished" podID="2ee09e0c-61a1-446f-b2d8-d74cd60e3152" containerID="53bd2136c9cb244abc20f388e52414168c8c2b1afe6e5cc4ae9ee7e376b011cf" exitCode=0
Oct 11 07:59:05 crc kubenswrapper[5016]: I1011 07:59:05.829078 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tgcng" event={"ID":"2ee09e0c-61a1-446f-b2d8-d74cd60e3152","Type":"ContainerDied","Data":"53bd2136c9cb244abc20f388e52414168c8c2b1afe6e5cc4ae9ee7e376b011cf"}
Oct 11 07:59:07 crc kubenswrapper[5016]: I1011 07:59:07.218402 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tgcng"
Oct 11 07:59:07 crc kubenswrapper[5016]: I1011 07:59:07.267409 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrfng\" (UniqueName: \"kubernetes.io/projected/2ee09e0c-61a1-446f-b2d8-d74cd60e3152-kube-api-access-lrfng\") pod \"2ee09e0c-61a1-446f-b2d8-d74cd60e3152\" (UID: \"2ee09e0c-61a1-446f-b2d8-d74cd60e3152\") "
Oct 11 07:59:07 crc kubenswrapper[5016]: I1011 07:59:07.267544 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ee09e0c-61a1-446f-b2d8-d74cd60e3152-combined-ca-bundle\") pod \"2ee09e0c-61a1-446f-b2d8-d74cd60e3152\" (UID: \"2ee09e0c-61a1-446f-b2d8-d74cd60e3152\") "
Oct 11 07:59:07 crc kubenswrapper[5016]: I1011 07:59:07.267637 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ee09e0c-61a1-446f-b2d8-d74cd60e3152-config-data\") pod \"2ee09e0c-61a1-446f-b2d8-d74cd60e3152\" (UID: \"2ee09e0c-61a1-446f-b2d8-d74cd60e3152\") "
Oct 11 07:59:07 crc kubenswrapper[5016]: I1011 07:59:07.267746 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ee09e0c-61a1-446f-b2d8-d74cd60e3152-scripts\") pod \"2ee09e0c-61a1-446f-b2d8-d74cd60e3152\" (UID: \"2ee09e0c-61a1-446f-b2d8-d74cd60e3152\") "
Oct 11 07:59:07 crc kubenswrapper[5016]: I1011 07:59:07.275974 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ee09e0c-61a1-446f-b2d8-d74cd60e3152-kube-api-access-lrfng" (OuterVolumeSpecName: "kube-api-access-lrfng") pod "2ee09e0c-61a1-446f-b2d8-d74cd60e3152" (UID: "2ee09e0c-61a1-446f-b2d8-d74cd60e3152"). InnerVolumeSpecName "kube-api-access-lrfng". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 07:59:07 crc kubenswrapper[5016]: I1011 07:59:07.276797 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ee09e0c-61a1-446f-b2d8-d74cd60e3152-scripts" (OuterVolumeSpecName: "scripts") pod "2ee09e0c-61a1-446f-b2d8-d74cd60e3152" (UID: "2ee09e0c-61a1-446f-b2d8-d74cd60e3152"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 07:59:07 crc kubenswrapper[5016]: I1011 07:59:07.302274 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ee09e0c-61a1-446f-b2d8-d74cd60e3152-config-data" (OuterVolumeSpecName: "config-data") pod "2ee09e0c-61a1-446f-b2d8-d74cd60e3152" (UID: "2ee09e0c-61a1-446f-b2d8-d74cd60e3152"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 07:59:07 crc kubenswrapper[5016]: I1011 07:59:07.305593 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ee09e0c-61a1-446f-b2d8-d74cd60e3152-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ee09e0c-61a1-446f-b2d8-d74cd60e3152" (UID: "2ee09e0c-61a1-446f-b2d8-d74cd60e3152"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 07:59:07 crc kubenswrapper[5016]: I1011 07:59:07.369982 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrfng\" (UniqueName: \"kubernetes.io/projected/2ee09e0c-61a1-446f-b2d8-d74cd60e3152-kube-api-access-lrfng\") on node \"crc\" DevicePath \"\""
Oct 11 07:59:07 crc kubenswrapper[5016]: I1011 07:59:07.370174 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ee09e0c-61a1-446f-b2d8-d74cd60e3152-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 11 07:59:07 crc kubenswrapper[5016]: I1011 07:59:07.370255 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ee09e0c-61a1-446f-b2d8-d74cd60e3152-config-data\") on node \"crc\" DevicePath \"\""
Oct 11 07:59:07 crc kubenswrapper[5016]: I1011 07:59:07.370429 5016 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ee09e0c-61a1-446f-b2d8-d74cd60e3152-scripts\") on node \"crc\" DevicePath \"\""
Oct 11 07:59:07 crc kubenswrapper[5016]: I1011 07:59:07.639870 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-54974c8ff5-6tx6j" podUID="91e60a92-017b-4e6f-99c2-4afce0c72bbc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.179:5353: i/o timeout"
Oct 11 07:59:07 crc kubenswrapper[5016]: I1011 07:59:07.848957 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tgcng"
Oct 11 07:59:07 crc kubenswrapper[5016]: I1011 07:59:07.849147 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tgcng" event={"ID":"2ee09e0c-61a1-446f-b2d8-d74cd60e3152","Type":"ContainerDied","Data":"64c11e048a4fd005e8ae4622462a05c80995c377b3cd7e408a9fa5f913c5d07d"}
Oct 11 07:59:07 crc kubenswrapper[5016]: I1011 07:59:07.849726 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64c11e048a4fd005e8ae4622462a05c80995c377b3cd7e408a9fa5f913c5d07d"
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.046300 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.046604 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4a49c6c4-0644-451a-b63c-eb97a20957d2" containerName="nova-api-log" containerID="cri-o://6f769b2f9efdee438201c07dda6b85f94e6e6b96df20ae463dc1d71b639dce6f" gracePeriod=30
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.046631 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4a49c6c4-0644-451a-b63c-eb97a20957d2" containerName="nova-api-api" containerID="cri-o://e72a23cdb139cbc1be54ec86fe0390f8b409f59c0fc24941882f96dbc4dde65b" gracePeriod=30
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.067702 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.068015 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ca94bc65-ebb1-4ccb-bff6-7645d39255fd" containerName="nova-scheduler-scheduler" containerID="cri-o://2b09fdd15dcda80291be0051053bb2ae668f0bfbf4d5d57e914de8ea8b4b8654" gracePeriod=30
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.082422 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.082805 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="862c465e-619c-4fed-adf2-fe7d93b46937" containerName="nova-metadata-log" containerID="cri-o://6c294810a2daf32d4ea8d5d2c1ea85b7380614122ea19e5a6147d70b9f55aa08" gracePeriod=30
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.083003 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="862c465e-619c-4fed-adf2-fe7d93b46937" containerName="nova-metadata-metadata" containerID="cri-o://4c718ea73e842115ba89de24d48806b05945b3373ed56d28edbea49bf3e068be" gracePeriod=30
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.631946 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.694962 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a49c6c4-0644-451a-b63c-eb97a20957d2-logs\") pod \"4a49c6c4-0644-451a-b63c-eb97a20957d2\" (UID: \"4a49c6c4-0644-451a-b63c-eb97a20957d2\") "
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.695104 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a49c6c4-0644-451a-b63c-eb97a20957d2-config-data\") pod \"4a49c6c4-0644-451a-b63c-eb97a20957d2\" (UID: \"4a49c6c4-0644-451a-b63c-eb97a20957d2\") "
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.695151 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a49c6c4-0644-451a-b63c-eb97a20957d2-combined-ca-bundle\") pod \"4a49c6c4-0644-451a-b63c-eb97a20957d2\" (UID: \"4a49c6c4-0644-451a-b63c-eb97a20957d2\") "
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.695292 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a49c6c4-0644-451a-b63c-eb97a20957d2-internal-tls-certs\") pod \"4a49c6c4-0644-451a-b63c-eb97a20957d2\" (UID: \"4a49c6c4-0644-451a-b63c-eb97a20957d2\") "
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.695331 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a49c6c4-0644-451a-b63c-eb97a20957d2-public-tls-certs\") pod \"4a49c6c4-0644-451a-b63c-eb97a20957d2\" (UID: \"4a49c6c4-0644-451a-b63c-eb97a20957d2\") "
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.695372 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfkh2\" (UniqueName: \"kubernetes.io/projected/4a49c6c4-0644-451a-b63c-eb97a20957d2-kube-api-access-vfkh2\") pod \"4a49c6c4-0644-451a-b63c-eb97a20957d2\" (UID: \"4a49c6c4-0644-451a-b63c-eb97a20957d2\") "
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.695512 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a49c6c4-0644-451a-b63c-eb97a20957d2-logs" (OuterVolumeSpecName: "logs") pod "4a49c6c4-0644-451a-b63c-eb97a20957d2" (UID: "4a49c6c4-0644-451a-b63c-eb97a20957d2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.695903 5016 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a49c6c4-0644-451a-b63c-eb97a20957d2-logs\") on node \"crc\" DevicePath \"\""
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.703090 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a49c6c4-0644-451a-b63c-eb97a20957d2-kube-api-access-vfkh2" (OuterVolumeSpecName: "kube-api-access-vfkh2") pod "4a49c6c4-0644-451a-b63c-eb97a20957d2" (UID: "4a49c6c4-0644-451a-b63c-eb97a20957d2"). InnerVolumeSpecName "kube-api-access-vfkh2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.722540 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a49c6c4-0644-451a-b63c-eb97a20957d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a49c6c4-0644-451a-b63c-eb97a20957d2" (UID: "4a49c6c4-0644-451a-b63c-eb97a20957d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.733154 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a49c6c4-0644-451a-b63c-eb97a20957d2-config-data" (OuterVolumeSpecName: "config-data") pod "4a49c6c4-0644-451a-b63c-eb97a20957d2" (UID: "4a49c6c4-0644-451a-b63c-eb97a20957d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 07:59:08 crc kubenswrapper[5016]: E1011 07:59:08.748895 5016 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a49c6c4-0644-451a-b63c-eb97a20957d2-public-tls-certs podName:4a49c6c4-0644-451a-b63c-eb97a20957d2 nodeName:}" failed. No retries permitted until 2025-10-11 07:59:09.248858335 +0000 UTC m=+1137.149314281 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "public-tls-certs" (UniqueName: "kubernetes.io/secret/4a49c6c4-0644-451a-b63c-eb97a20957d2-public-tls-certs") pod "4a49c6c4-0644-451a-b63c-eb97a20957d2" (UID: "4a49c6c4-0644-451a-b63c-eb97a20957d2") : error deleting /var/lib/kubelet/pods/4a49c6c4-0644-451a-b63c-eb97a20957d2/volume-subpaths: remove /var/lib/kubelet/pods/4a49c6c4-0644-451a-b63c-eb97a20957d2/volume-subpaths: no such file or directory
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.751030 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a49c6c4-0644-451a-b63c-eb97a20957d2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4a49c6c4-0644-451a-b63c-eb97a20957d2" (UID: "4a49c6c4-0644-451a-b63c-eb97a20957d2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 07:59:08 crc kubenswrapper[5016]: E1011 07:59:08.752807 5016 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2b09fdd15dcda80291be0051053bb2ae668f0bfbf4d5d57e914de8ea8b4b8654" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Oct 11 07:59:08 crc kubenswrapper[5016]: E1011 07:59:08.760526 5016 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2b09fdd15dcda80291be0051053bb2ae668f0bfbf4d5d57e914de8ea8b4b8654" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Oct 11 07:59:08 crc kubenswrapper[5016]: E1011 07:59:08.762377 5016 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2b09fdd15dcda80291be0051053bb2ae668f0bfbf4d5d57e914de8ea8b4b8654" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Oct 11 07:59:08 crc kubenswrapper[5016]: E1011 07:59:08.762447 5016 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="ca94bc65-ebb1-4ccb-bff6-7645d39255fd" containerName="nova-scheduler-scheduler"
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.797099 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a49c6c4-0644-451a-b63c-eb97a20957d2-config-data\") on node \"crc\" DevicePath \"\""
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.797130 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a49c6c4-0644-451a-b63c-eb97a20957d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.797141 5016 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a49c6c4-0644-451a-b63c-eb97a20957d2-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.797150 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfkh2\" (UniqueName: \"kubernetes.io/projected/4a49c6c4-0644-451a-b63c-eb97a20957d2-kube-api-access-vfkh2\") on node \"crc\" DevicePath \"\""
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.859408 5016 generic.go:334] "Generic (PLEG): container finished" podID="862c465e-619c-4fed-adf2-fe7d93b46937" containerID="6c294810a2daf32d4ea8d5d2c1ea85b7380614122ea19e5a6147d70b9f55aa08" exitCode=143
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.859486 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"862c465e-619c-4fed-adf2-fe7d93b46937","Type":"ContainerDied","Data":"6c294810a2daf32d4ea8d5d2c1ea85b7380614122ea19e5a6147d70b9f55aa08"}
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.861533 5016 generic.go:334] "Generic (PLEG): container finished" podID="4a49c6c4-0644-451a-b63c-eb97a20957d2" containerID="e72a23cdb139cbc1be54ec86fe0390f8b409f59c0fc24941882f96dbc4dde65b" exitCode=0
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.861562 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.861571 5016 generic.go:334] "Generic (PLEG): container finished" podID="4a49c6c4-0644-451a-b63c-eb97a20957d2" containerID="6f769b2f9efdee438201c07dda6b85f94e6e6b96df20ae463dc1d71b639dce6f" exitCode=143
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.861612 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4a49c6c4-0644-451a-b63c-eb97a20957d2","Type":"ContainerDied","Data":"e72a23cdb139cbc1be54ec86fe0390f8b409f59c0fc24941882f96dbc4dde65b"}
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.861717 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4a49c6c4-0644-451a-b63c-eb97a20957d2","Type":"ContainerDied","Data":"6f769b2f9efdee438201c07dda6b85f94e6e6b96df20ae463dc1d71b639dce6f"}
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.861731 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4a49c6c4-0644-451a-b63c-eb97a20957d2","Type":"ContainerDied","Data":"64e56e5458436c2995935b540935c95dad9f0cf3d4e48a9c0b28fbf06a40ddb3"}
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.861749 5016 scope.go:117] "RemoveContainer" containerID="e72a23cdb139cbc1be54ec86fe0390f8b409f59c0fc24941882f96dbc4dde65b"
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.895841 5016 scope.go:117] "RemoveContainer" containerID="6f769b2f9efdee438201c07dda6b85f94e6e6b96df20ae463dc1d71b639dce6f"
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.933573 5016 scope.go:117] "RemoveContainer" containerID="e72a23cdb139cbc1be54ec86fe0390f8b409f59c0fc24941882f96dbc4dde65b"
Oct 11 07:59:08 crc kubenswrapper[5016]: E1011 07:59:08.933975 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e72a23cdb139cbc1be54ec86fe0390f8b409f59c0fc24941882f96dbc4dde65b\": container with ID starting with e72a23cdb139cbc1be54ec86fe0390f8b409f59c0fc24941882f96dbc4dde65b not found: ID does not exist" containerID="e72a23cdb139cbc1be54ec86fe0390f8b409f59c0fc24941882f96dbc4dde65b"
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.934011 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e72a23cdb139cbc1be54ec86fe0390f8b409f59c0fc24941882f96dbc4dde65b"} err="failed to get container status \"e72a23cdb139cbc1be54ec86fe0390f8b409f59c0fc24941882f96dbc4dde65b\": rpc error: code = NotFound desc = could not find container \"e72a23cdb139cbc1be54ec86fe0390f8b409f59c0fc24941882f96dbc4dde65b\": container with ID starting with e72a23cdb139cbc1be54ec86fe0390f8b409f59c0fc24941882f96dbc4dde65b not found: ID does not exist"
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.934034 5016 scope.go:117] "RemoveContainer" containerID="6f769b2f9efdee438201c07dda6b85f94e6e6b96df20ae463dc1d71b639dce6f"
Oct 11 07:59:08 crc kubenswrapper[5016]: E1011 07:59:08.934406 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f769b2f9efdee438201c07dda6b85f94e6e6b96df20ae463dc1d71b639dce6f\": container with ID starting with 6f769b2f9efdee438201c07dda6b85f94e6e6b96df20ae463dc1d71b639dce6f not found: ID does not exist" containerID="6f769b2f9efdee438201c07dda6b85f94e6e6b96df20ae463dc1d71b639dce6f"
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.934432 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f769b2f9efdee438201c07dda6b85f94e6e6b96df20ae463dc1d71b639dce6f"} err="failed to get container status \"6f769b2f9efdee438201c07dda6b85f94e6e6b96df20ae463dc1d71b639dce6f\": rpc error: code = NotFound desc = could not find container \"6f769b2f9efdee438201c07dda6b85f94e6e6b96df20ae463dc1d71b639dce6f\": container with ID starting with 6f769b2f9efdee438201c07dda6b85f94e6e6b96df20ae463dc1d71b639dce6f not found: ID does not exist"
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.934465 5016 scope.go:117] "RemoveContainer" containerID="e72a23cdb139cbc1be54ec86fe0390f8b409f59c0fc24941882f96dbc4dde65b"
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.935162 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e72a23cdb139cbc1be54ec86fe0390f8b409f59c0fc24941882f96dbc4dde65b"} err="failed to get container status \"e72a23cdb139cbc1be54ec86fe0390f8b409f59c0fc24941882f96dbc4dde65b\": rpc error: code = NotFound desc = could not find container \"e72a23cdb139cbc1be54ec86fe0390f8b409f59c0fc24941882f96dbc4dde65b\": container with ID starting with e72a23cdb139cbc1be54ec86fe0390f8b409f59c0fc24941882f96dbc4dde65b not found: ID does not exist"
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.935189 5016 scope.go:117] "RemoveContainer" containerID="6f769b2f9efdee438201c07dda6b85f94e6e6b96df20ae463dc1d71b639dce6f"
Oct 11 07:59:08 crc kubenswrapper[5016]: I1011 07:59:08.935428 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f769b2f9efdee438201c07dda6b85f94e6e6b96df20ae463dc1d71b639dce6f"} err="failed to get container status \"6f769b2f9efdee438201c07dda6b85f94e6e6b96df20ae463dc1d71b639dce6f\": rpc error: code = NotFound desc = could not find container \"6f769b2f9efdee438201c07dda6b85f94e6e6b96df20ae463dc1d71b639dce6f\": container with ID starting with 6f769b2f9efdee438201c07dda6b85f94e6e6b96df20ae463dc1d71b639dce6f not found: ID does not exist"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.306369 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a49c6c4-0644-451a-b63c-eb97a20957d2-public-tls-certs\") pod \"4a49c6c4-0644-451a-b63c-eb97a20957d2\" (UID: \"4a49c6c4-0644-451a-b63c-eb97a20957d2\") "
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.314055 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a49c6c4-0644-451a-b63c-eb97a20957d2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4a49c6c4-0644-451a-b63c-eb97a20957d2" (UID: "4a49c6c4-0644-451a-b63c-eb97a20957d2"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.409297 5016 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a49c6c4-0644-451a-b63c-eb97a20957d2-public-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.504064 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.513791 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.526357 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Oct 11 07:59:09 crc kubenswrapper[5016]: E1011 07:59:09.526898 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91e60a92-017b-4e6f-99c2-4afce0c72bbc" containerName="dnsmasq-dns"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.526925 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e60a92-017b-4e6f-99c2-4afce0c72bbc" containerName="dnsmasq-dns"
Oct 11 07:59:09 crc kubenswrapper[5016]: E1011 07:59:09.526971 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee09e0c-61a1-446f-b2d8-d74cd60e3152" containerName="nova-manage"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.526982 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee09e0c-61a1-446f-b2d8-d74cd60e3152" containerName="nova-manage"
Oct 11 07:59:09 crc kubenswrapper[5016]: E1011 07:59:09.527003 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a49c6c4-0644-451a-b63c-eb97a20957d2" containerName="nova-api-api"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.527014 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a49c6c4-0644-451a-b63c-eb97a20957d2" containerName="nova-api-api"
Oct 11 07:59:09 crc kubenswrapper[5016]: E1011 07:59:09.527029 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91e60a92-017b-4e6f-99c2-4afce0c72bbc" containerName="init"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.527039 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e60a92-017b-4e6f-99c2-4afce0c72bbc" containerName="init"
Oct 11 07:59:09 crc kubenswrapper[5016]: E1011 07:59:09.527059 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a49c6c4-0644-451a-b63c-eb97a20957d2" containerName="nova-api-log"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.527070 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a49c6c4-0644-451a-b63c-eb97a20957d2" containerName="nova-api-log"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.527347 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a49c6c4-0644-451a-b63c-eb97a20957d2" containerName="nova-api-api"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.527373 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a49c6c4-0644-451a-b63c-eb97a20957d2" containerName="nova-api-log"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.527393 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="91e60a92-017b-4e6f-99c2-4afce0c72bbc" containerName="dnsmasq-dns"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.527413 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ee09e0c-61a1-446f-b2d8-d74cd60e3152" containerName="nova-manage"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.528966 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.530975 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.531225 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.531637 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.546979 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.612147 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ace03b9-7f45-49ca-ac24-3401d9820d71-logs\") pod \"nova-api-0\" (UID: \"9ace03b9-7f45-49ca-ac24-3401d9820d71\") " pod="openstack/nova-api-0"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.612337 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ace03b9-7f45-49ca-ac24-3401d9820d71-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9ace03b9-7f45-49ca-ac24-3401d9820d71\") " pod="openstack/nova-api-0"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.612402 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ace03b9-7f45-49ca-ac24-3401d9820d71-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9ace03b9-7f45-49ca-ac24-3401d9820d71\") " pod="openstack/nova-api-0"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.612469 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ace03b9-7f45-49ca-ac24-3401d9820d71-public-tls-certs\") pod \"nova-api-0\" (UID: \"9ace03b9-7f45-49ca-ac24-3401d9820d71\") " pod="openstack/nova-api-0"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.612530 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ace03b9-7f45-49ca-ac24-3401d9820d71-config-data\") pod \"nova-api-0\" (UID: \"9ace03b9-7f45-49ca-ac24-3401d9820d71\") " pod="openstack/nova-api-0"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.612639 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn4pl\" (UniqueName: \"kubernetes.io/projected/9ace03b9-7f45-49ca-ac24-3401d9820d71-kube-api-access-vn4pl\") pod \"nova-api-0\" (UID: \"9ace03b9-7f45-49ca-ac24-3401d9820d71\") " pod="openstack/nova-api-0"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.714185 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ace03b9-7f45-49ca-ac24-3401d9820d71-config-data\") pod \"nova-api-0\" (UID: \"9ace03b9-7f45-49ca-ac24-3401d9820d71\") " pod="openstack/nova-api-0"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.714283 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn4pl\" (UniqueName: \"kubernetes.io/projected/9ace03b9-7f45-49ca-ac24-3401d9820d71-kube-api-access-vn4pl\") pod \"nova-api-0\" (UID: \"9ace03b9-7f45-49ca-ac24-3401d9820d71\") " pod="openstack/nova-api-0"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.714328 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ace03b9-7f45-49ca-ac24-3401d9820d71-logs\") pod \"nova-api-0\" (UID: \"9ace03b9-7f45-49ca-ac24-3401d9820d71\") " pod="openstack/nova-api-0"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.714384 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ace03b9-7f45-49ca-ac24-3401d9820d71-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9ace03b9-7f45-49ca-ac24-3401d9820d71\") " pod="openstack/nova-api-0"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.714404 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ace03b9-7f45-49ca-ac24-3401d9820d71-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9ace03b9-7f45-49ca-ac24-3401d9820d71\") " pod="openstack/nova-api-0"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.714429 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ace03b9-7f45-49ca-ac24-3401d9820d71-public-tls-certs\") pod \"nova-api-0\" (UID: \"9ace03b9-7f45-49ca-ac24-3401d9820d71\") " pod="openstack/nova-api-0"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.715045 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ace03b9-7f45-49ca-ac24-3401d9820d71-logs\") pod \"nova-api-0\" (UID: \"9ace03b9-7f45-49ca-ac24-3401d9820d71\") " pod="openstack/nova-api-0"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.718123 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ace03b9-7f45-49ca-ac24-3401d9820d71-public-tls-certs\") pod \"nova-api-0\" (UID: \"9ace03b9-7f45-49ca-ac24-3401d9820d71\") " pod="openstack/nova-api-0"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.718331 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ace03b9-7f45-49ca-ac24-3401d9820d71-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9ace03b9-7f45-49ca-ac24-3401d9820d71\") " pod="openstack/nova-api-0"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.718628 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ace03b9-7f45-49ca-ac24-3401d9820d71-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9ace03b9-7f45-49ca-ac24-3401d9820d71\") " pod="openstack/nova-api-0"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.726598 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ace03b9-7f45-49ca-ac24-3401d9820d71-config-data\") pod \"nova-api-0\" (UID: \"9ace03b9-7f45-49ca-ac24-3401d9820d71\") " pod="openstack/nova-api-0"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.732175 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn4pl\" (UniqueName: \"kubernetes.io/projected/9ace03b9-7f45-49ca-ac24-3401d9820d71-kube-api-access-vn4pl\") pod \"nova-api-0\" (UID: \"9ace03b9-7f45-49ca-ac24-3401d9820d71\") " pod="openstack/nova-api-0"
Oct 11 07:59:09 crc kubenswrapper[5016]: I1011 07:59:09.856872 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 11 07:59:10 crc kubenswrapper[5016]: I1011 07:59:10.329924 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 11 07:59:10 crc kubenswrapper[5016]: I1011 07:59:10.882953 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ace03b9-7f45-49ca-ac24-3401d9820d71","Type":"ContainerStarted","Data":"e43052fe9a1ee892ca974c05057118c50daee392f5ba6342d2c9d467269349a1"}
Oct 11 07:59:10 crc kubenswrapper[5016]: I1011 07:59:10.883335 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ace03b9-7f45-49ca-ac24-3401d9820d71","Type":"ContainerStarted","Data":"e65bd739c8a20c7e34a2b7af5b0cf26836ccf13d4644d3b8b52e3ce2485521b5"}
Oct 11 07:59:10 crc kubenswrapper[5016]: I1011 07:59:10.883355 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ace03b9-7f45-49ca-ac24-3401d9820d71","Type":"ContainerStarted","Data":"56c982cf9ecd32c3ceb40b2d0eb47bf81205c24b3d3ce3907d2ca4beb1427175"}
Oct 11 07:59:10 crc kubenswrapper[5016]: I1011 07:59:10.907537 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.907506929 podStartE2EDuration="1.907506929s" podCreationTimestamp="2025-10-11 07:59:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:59:10.902645902 +0000 UTC m=+1138.803101888" watchObservedRunningTime="2025-10-11 07:59:10.907506929 +0000 UTC m=+1138.807962915"
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.171912 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a49c6c4-0644-451a-b63c-eb97a20957d2" path="/var/lib/kubelet/pods/4a49c6c4-0644-451a-b63c-eb97a20957d2/volumes"
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.187817 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="862c465e-619c-4fed-adf2-fe7d93b46937" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.182:8775/\": read tcp 10.217.0.2:35788->10.217.0.182:8775: read: connection reset by peer"
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.187847 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="862c465e-619c-4fed-adf2-fe7d93b46937" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.182:8775/\": read tcp 10.217.0.2:35790->10.217.0.182:8775: read: connection reset by peer"
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.567710 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.665367 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862c465e-619c-4fed-adf2-fe7d93b46937-combined-ca-bundle\") pod \"862c465e-619c-4fed-adf2-fe7d93b46937\" (UID: \"862c465e-619c-4fed-adf2-fe7d93b46937\") "
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.665539 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/862c465e-619c-4fed-adf2-fe7d93b46937-logs\") pod \"862c465e-619c-4fed-adf2-fe7d93b46937\" (UID: \"862c465e-619c-4fed-adf2-fe7d93b46937\") "
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.665606 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/862c465e-619c-4fed-adf2-fe7d93b46937-nova-metadata-tls-certs\") pod \"862c465e-619c-4fed-adf2-fe7d93b46937\" (UID: \"862c465e-619c-4fed-adf2-fe7d93b46937\") "
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.665686 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/862c465e-619c-4fed-adf2-fe7d93b46937-config-data\") pod \"862c465e-619c-4fed-adf2-fe7d93b46937\" (UID: \"862c465e-619c-4fed-adf2-fe7d93b46937\") "
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.665731 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgfnv\" (UniqueName: \"kubernetes.io/projected/862c465e-619c-4fed-adf2-fe7d93b46937-kube-api-access-qgfnv\") pod \"862c465e-619c-4fed-adf2-fe7d93b46937\" (UID: \"862c465e-619c-4fed-adf2-fe7d93b46937\") "
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.667641 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/862c465e-619c-4fed-adf2-fe7d93b46937-logs" (OuterVolumeSpecName: "logs") pod "862c465e-619c-4fed-adf2-fe7d93b46937" (UID: "862c465e-619c-4fed-adf2-fe7d93b46937"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.675763 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/862c465e-619c-4fed-adf2-fe7d93b46937-kube-api-access-qgfnv" (OuterVolumeSpecName: "kube-api-access-qgfnv") pod "862c465e-619c-4fed-adf2-fe7d93b46937" (UID: "862c465e-619c-4fed-adf2-fe7d93b46937"). InnerVolumeSpecName "kube-api-access-qgfnv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.696543 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/862c465e-619c-4fed-adf2-fe7d93b46937-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "862c465e-619c-4fed-adf2-fe7d93b46937" (UID: "862c465e-619c-4fed-adf2-fe7d93b46937"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.722826 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/862c465e-619c-4fed-adf2-fe7d93b46937-config-data" (OuterVolumeSpecName: "config-data") pod "862c465e-619c-4fed-adf2-fe7d93b46937" (UID: "862c465e-619c-4fed-adf2-fe7d93b46937"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.763009 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/862c465e-619c-4fed-adf2-fe7d93b46937-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "862c465e-619c-4fed-adf2-fe7d93b46937" (UID: "862c465e-619c-4fed-adf2-fe7d93b46937"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.768531 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862c465e-619c-4fed-adf2-fe7d93b46937-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.768885 5016 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/862c465e-619c-4fed-adf2-fe7d93b46937-logs\") on node \"crc\" DevicePath \"\""
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.769012 5016 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/862c465e-619c-4fed-adf2-fe7d93b46937-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.769128 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/862c465e-619c-4fed-adf2-fe7d93b46937-config-data\") on node \"crc\" DevicePath \"\""
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.769254 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgfnv\" (UniqueName: \"kubernetes.io/projected/862c465e-619c-4fed-adf2-fe7d93b46937-kube-api-access-qgfnv\") on node \"crc\" DevicePath \"\""
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.896584 5016 generic.go:334] "Generic (PLEG): container finished" podID="862c465e-619c-4fed-adf2-fe7d93b46937" containerID="4c718ea73e842115ba89de24d48806b05945b3373ed56d28edbea49bf3e068be" exitCode=0
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.896711 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.896776 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"862c465e-619c-4fed-adf2-fe7d93b46937","Type":"ContainerDied","Data":"4c718ea73e842115ba89de24d48806b05945b3373ed56d28edbea49bf3e068be"}
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.896840 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"862c465e-619c-4fed-adf2-fe7d93b46937","Type":"ContainerDied","Data":"f7217f8fcad0f3f4a6f5b136ae736ecc5815c31784ad12a0319db1c2becf22fc"}
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.896868 5016 scope.go:117] "RemoveContainer" containerID="4c718ea73e842115ba89de24d48806b05945b3373ed56d28edbea49bf3e068be"
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.926154 5016 scope.go:117] "RemoveContainer" containerID="6c294810a2daf32d4ea8d5d2c1ea85b7380614122ea19e5a6147d70b9f55aa08"
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.936018 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.949049 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.951800 5016 scope.go:117] "RemoveContainer" containerID="4c718ea73e842115ba89de24d48806b05945b3373ed56d28edbea49bf3e068be"
Oct 11 07:59:11 crc kubenswrapper[5016]: E1011 07:59:11.952359 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c718ea73e842115ba89de24d48806b05945b3373ed56d28edbea49bf3e068be\": container with ID starting with 4c718ea73e842115ba89de24d48806b05945b3373ed56d28edbea49bf3e068be not found: ID does not exist" containerID="4c718ea73e842115ba89de24d48806b05945b3373ed56d28edbea49bf3e068be"
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.952472 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c718ea73e842115ba89de24d48806b05945b3373ed56d28edbea49bf3e068be"} err="failed to get container status \"4c718ea73e842115ba89de24d48806b05945b3373ed56d28edbea49bf3e068be\": rpc error: code = NotFound desc = could not find container \"4c718ea73e842115ba89de24d48806b05945b3373ed56d28edbea49bf3e068be\": container with ID starting with 4c718ea73e842115ba89de24d48806b05945b3373ed56d28edbea49bf3e068be not found: ID does not exist"
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.952830 5016 scope.go:117] "RemoveContainer" containerID="6c294810a2daf32d4ea8d5d2c1ea85b7380614122ea19e5a6147d70b9f55aa08"
Oct 11 07:59:11 crc kubenswrapper[5016]: E1011 07:59:11.953231 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c294810a2daf32d4ea8d5d2c1ea85b7380614122ea19e5a6147d70b9f55aa08\": container with ID starting with 6c294810a2daf32d4ea8d5d2c1ea85b7380614122ea19e5a6147d70b9f55aa08 not found: ID does not exist" containerID="6c294810a2daf32d4ea8d5d2c1ea85b7380614122ea19e5a6147d70b9f55aa08"
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.953258 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c294810a2daf32d4ea8d5d2c1ea85b7380614122ea19e5a6147d70b9f55aa08"} err="failed to get container status \"6c294810a2daf32d4ea8d5d2c1ea85b7380614122ea19e5a6147d70b9f55aa08\": rpc error: code = NotFound desc = could not find container \"6c294810a2daf32d4ea8d5d2c1ea85b7380614122ea19e5a6147d70b9f55aa08\": container with ID starting with 6c294810a2daf32d4ea8d5d2c1ea85b7380614122ea19e5a6147d70b9f55aa08 not found: ID does not exist"
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.967806 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Oct 11 07:59:11 crc kubenswrapper[5016]: E1011 07:59:11.968167 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="862c465e-619c-4fed-adf2-fe7d93b46937" containerName="nova-metadata-log"
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.968182 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="862c465e-619c-4fed-adf2-fe7d93b46937" containerName="nova-metadata-log"
Oct 11 07:59:11 crc kubenswrapper[5016]: E1011 07:59:11.968211 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="862c465e-619c-4fed-adf2-fe7d93b46937" containerName="nova-metadata-metadata"
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.968218 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="862c465e-619c-4fed-adf2-fe7d93b46937" containerName="nova-metadata-metadata"
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.968368 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="862c465e-619c-4fed-adf2-fe7d93b46937" containerName="nova-metadata-log"
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.968385 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="862c465e-619c-4fed-adf2-fe7d93b46937" containerName="nova-metadata-metadata"
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.969288 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.971352 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.971441 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Oct 11 07:59:11 crc kubenswrapper[5016]: I1011 07:59:11.976006 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Oct 11 07:59:12 crc kubenswrapper[5016]: I1011 07:59:12.078678 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e182b619-d220-435a-80ed-74611b49f193-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e182b619-d220-435a-80ed-74611b49f193\") " pod="openstack/nova-metadata-0"
Oct 11 07:59:12 crc kubenswrapper[5016]: I1011 07:59:12.078729 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e182b619-d220-435a-80ed-74611b49f193-config-data\") pod \"nova-metadata-0\" (UID: \"e182b619-d220-435a-80ed-74611b49f193\") " pod="openstack/nova-metadata-0"
Oct 11 07:59:12 crc kubenswrapper[5016]: I1011 07:59:12.078790 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e182b619-d220-435a-80ed-74611b49f193-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e182b619-d220-435a-80ed-74611b49f193\") " pod="openstack/nova-metadata-0"
Oct 11 07:59:12 crc kubenswrapper[5016]: I1011 07:59:12.078833 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e182b619-d220-435a-80ed-74611b49f193-logs\") pod \"nova-metadata-0\" (UID: \"e182b619-d220-435a-80ed-74611b49f193\") " pod="openstack/nova-metadata-0"
Oct 11 07:59:12 crc kubenswrapper[5016]: I1011 07:59:12.078881 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58pgm\" (UniqueName: \"kubernetes.io/projected/e182b619-d220-435a-80ed-74611b49f193-kube-api-access-58pgm\") pod \"nova-metadata-0\" (UID: \"e182b619-d220-435a-80ed-74611b49f193\") " pod="openstack/nova-metadata-0"
Oct 11 07:59:12 crc kubenswrapper[5016]: I1011 07:59:12.179947 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e182b619-d220-435a-80ed-74611b49f193-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e182b619-d220-435a-80ed-74611b49f193\") " pod="openstack/nova-metadata-0"
Oct 11 07:59:12 crc kubenswrapper[5016]: I1011 07:59:12.180306 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e182b619-d220-435a-80ed-74611b49f193-config-data\") pod \"nova-metadata-0\" (UID: \"e182b619-d220-435a-80ed-74611b49f193\") " pod="openstack/nova-metadata-0"
Oct 11 07:59:12 crc kubenswrapper[5016]: I1011 07:59:12.180379 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e182b619-d220-435a-80ed-74611b49f193-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e182b619-d220-435a-80ed-74611b49f193\") " pod="openstack/nova-metadata-0"
Oct 11 07:59:12 crc kubenswrapper[5016]: I1011 07:59:12.180427 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e182b619-d220-435a-80ed-74611b49f193-logs\") pod \"nova-metadata-0\" (UID: \"e182b619-d220-435a-80ed-74611b49f193\") " pod="openstack/nova-metadata-0"
Oct 11 07:59:12 crc kubenswrapper[5016]: I1011 07:59:12.180498 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58pgm\" (UniqueName: \"kubernetes.io/projected/e182b619-d220-435a-80ed-74611b49f193-kube-api-access-58pgm\") pod \"nova-metadata-0\" (UID: \"e182b619-d220-435a-80ed-74611b49f193\") " pod="openstack/nova-metadata-0"
Oct 11 07:59:12 crc kubenswrapper[5016]: I1011 07:59:12.181208 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e182b619-d220-435a-80ed-74611b49f193-logs\") pod \"nova-metadata-0\" (UID: \"e182b619-d220-435a-80ed-74611b49f193\") " pod="openstack/nova-metadata-0"
Oct 11 07:59:12 crc kubenswrapper[5016]: I1011 07:59:12.185595 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e182b619-d220-435a-80ed-74611b49f193-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e182b619-d220-435a-80ed-74611b49f193\") " pod="openstack/nova-metadata-0"
Oct 11 07:59:12 crc kubenswrapper[5016]: I1011 07:59:12.185802 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e182b619-d220-435a-80ed-74611b49f193-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e182b619-d220-435a-80ed-74611b49f193\") " pod="openstack/nova-metadata-0"
Oct
Oct 11 07:59:12 crc kubenswrapper[5016]: I1011 07:59:12.190396 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e182b619-d220-435a-80ed-74611b49f193-config-data\") pod \"nova-metadata-0\" (UID: \"e182b619-d220-435a-80ed-74611b49f193\") " pod="openstack/nova-metadata-0"
Oct 11 07:59:12 crc kubenswrapper[5016]: I1011 07:59:12.200935 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58pgm\" (UniqueName: \"kubernetes.io/projected/e182b619-d220-435a-80ed-74611b49f193-kube-api-access-58pgm\") pod \"nova-metadata-0\" (UID: \"e182b619-d220-435a-80ed-74611b49f193\") " pod="openstack/nova-metadata-0"
Oct 11 07:59:12 crc kubenswrapper[5016]: I1011 07:59:12.290560 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 11 07:59:12 crc kubenswrapper[5016]: I1011 07:59:12.804847 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Oct 11 07:59:12 crc kubenswrapper[5016]: I1011 07:59:12.879256 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 11 07:59:12 crc kubenswrapper[5016]: I1011 07:59:12.924777 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e182b619-d220-435a-80ed-74611b49f193","Type":"ContainerStarted","Data":"485ab2d28ac2e22ad9d77e99d848362a005a04fbdcc600205631c4899bc2bb3d"}
Oct 11 07:59:12 crc kubenswrapper[5016]: I1011 07:59:12.926722 5016 generic.go:334] "Generic (PLEG): container finished" podID="ca94bc65-ebb1-4ccb-bff6-7645d39255fd" containerID="2b09fdd15dcda80291be0051053bb2ae668f0bfbf4d5d57e914de8ea8b4b8654" exitCode=0
Oct 11 07:59:12 crc kubenswrapper[5016]: I1011 07:59:12.926761 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ca94bc65-ebb1-4ccb-bff6-7645d39255fd","Type":"ContainerDied","Data":"2b09fdd15dcda80291be0051053bb2ae668f0bfbf4d5d57e914de8ea8b4b8654"}
Oct 11 07:59:12 crc kubenswrapper[5016]: I1011 07:59:12.926792 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 11 07:59:12 crc kubenswrapper[5016]: I1011 07:59:12.926815 5016 scope.go:117] "RemoveContainer" containerID="2b09fdd15dcda80291be0051053bb2ae668f0bfbf4d5d57e914de8ea8b4b8654"
Oct 11 07:59:12 crc kubenswrapper[5016]: I1011 07:59:12.926800 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ca94bc65-ebb1-4ccb-bff6-7645d39255fd","Type":"ContainerDied","Data":"9f0b8cd9899e31a4802c9f4d91143c47fda16cbae84e782b9168eed22fb46e01"}
Oct 11 07:59:12 crc kubenswrapper[5016]: I1011 07:59:12.945226 5016 scope.go:117] "RemoveContainer" containerID="2b09fdd15dcda80291be0051053bb2ae668f0bfbf4d5d57e914de8ea8b4b8654"
Oct 11 07:59:12 crc kubenswrapper[5016]: E1011 07:59:12.945700 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b09fdd15dcda80291be0051053bb2ae668f0bfbf4d5d57e914de8ea8b4b8654\": container with ID starting with 2b09fdd15dcda80291be0051053bb2ae668f0bfbf4d5d57e914de8ea8b4b8654 not found: ID does not exist" containerID="2b09fdd15dcda80291be0051053bb2ae668f0bfbf4d5d57e914de8ea8b4b8654"
Oct 11 07:59:12 crc kubenswrapper[5016]: I1011 07:59:12.945725 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b09fdd15dcda80291be0051053bb2ae668f0bfbf4d5d57e914de8ea8b4b8654"} err="failed to get container status \"2b09fdd15dcda80291be0051053bb2ae668f0bfbf4d5d57e914de8ea8b4b8654\": rpc error: code = NotFound desc = could not find container \"2b09fdd15dcda80291be0051053bb2ae668f0bfbf4d5d57e914de8ea8b4b8654\": container with ID starting with 2b09fdd15dcda80291be0051053bb2ae668f0bfbf4d5d57e914de8ea8b4b8654 not found: ID does not exist"
Oct 11 07:59:12 crc kubenswrapper[5016]: I1011 07:59:12.997093 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca94bc65-ebb1-4ccb-bff6-7645d39255fd-combined-ca-bundle\") pod \"ca94bc65-ebb1-4ccb-bff6-7645d39255fd\" (UID: \"ca94bc65-ebb1-4ccb-bff6-7645d39255fd\") "
Oct 11 07:59:12 crc kubenswrapper[5016]: I1011 07:59:12.997299 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca94bc65-ebb1-4ccb-bff6-7645d39255fd-config-data\") pod \"ca94bc65-ebb1-4ccb-bff6-7645d39255fd\" (UID: \"ca94bc65-ebb1-4ccb-bff6-7645d39255fd\") "
Oct 11 07:59:12 crc kubenswrapper[5016]: I1011 07:59:12.997480 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54tjp\" (UniqueName: \"kubernetes.io/projected/ca94bc65-ebb1-4ccb-bff6-7645d39255fd-kube-api-access-54tjp\") pod \"ca94bc65-ebb1-4ccb-bff6-7645d39255fd\" (UID: \"ca94bc65-ebb1-4ccb-bff6-7645d39255fd\") "
Oct 11 07:59:13 crc kubenswrapper[5016]: I1011 07:59:13.001339 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca94bc65-ebb1-4ccb-bff6-7645d39255fd-kube-api-access-54tjp" (OuterVolumeSpecName: "kube-api-access-54tjp") pod "ca94bc65-ebb1-4ccb-bff6-7645d39255fd" (UID: "ca94bc65-ebb1-4ccb-bff6-7645d39255fd"). InnerVolumeSpecName "kube-api-access-54tjp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 07:59:13 crc kubenswrapper[5016]: I1011 07:59:13.027950 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca94bc65-ebb1-4ccb-bff6-7645d39255fd-config-data" (OuterVolumeSpecName: "config-data") pod "ca94bc65-ebb1-4ccb-bff6-7645d39255fd" (UID: "ca94bc65-ebb1-4ccb-bff6-7645d39255fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 07:59:13 crc kubenswrapper[5016]: I1011 07:59:13.035970 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca94bc65-ebb1-4ccb-bff6-7645d39255fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca94bc65-ebb1-4ccb-bff6-7645d39255fd" (UID: "ca94bc65-ebb1-4ccb-bff6-7645d39255fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 07:59:13 crc kubenswrapper[5016]: I1011 07:59:13.099541 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54tjp\" (UniqueName: \"kubernetes.io/projected/ca94bc65-ebb1-4ccb-bff6-7645d39255fd-kube-api-access-54tjp\") on node \"crc\" DevicePath \"\""
Oct 11 07:59:13 crc kubenswrapper[5016]: I1011 07:59:13.099599 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca94bc65-ebb1-4ccb-bff6-7645d39255fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 11 07:59:13 crc kubenswrapper[5016]: I1011 07:59:13.099612 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca94bc65-ebb1-4ccb-bff6-7645d39255fd-config-data\") on node \"crc\" DevicePath \"\""
Oct 11 07:59:13 crc kubenswrapper[5016]: I1011 07:59:13.145675 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="862c465e-619c-4fed-adf2-fe7d93b46937" path="/var/lib/kubelet/pods/862c465e-619c-4fed-adf2-fe7d93b46937/volumes"
Oct 11 07:59:13 crc kubenswrapper[5016]: I1011 07:59:13.265579 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 11 07:59:13 crc kubenswrapper[5016]: I1011 07:59:13.271857 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 11 07:59:13 crc kubenswrapper[5016]: I1011 07:59:13.283435 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Oct 11 07:59:13 crc kubenswrapper[5016]: E1011 07:59:13.283884 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca94bc65-ebb1-4ccb-bff6-7645d39255fd" containerName="nova-scheduler-scheduler"
Oct 11 07:59:13 crc kubenswrapper[5016]: I1011 07:59:13.283905 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca94bc65-ebb1-4ccb-bff6-7645d39255fd" containerName="nova-scheduler-scheduler"
Oct 11 07:59:13 crc kubenswrapper[5016]: I1011 07:59:13.284094 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca94bc65-ebb1-4ccb-bff6-7645d39255fd" containerName="nova-scheduler-scheduler"
Oct 11 07:59:13 crc kubenswrapper[5016]: I1011 07:59:13.284705 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 11 07:59:13 crc kubenswrapper[5016]: I1011 07:59:13.287977 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Oct 11 07:59:13 crc kubenswrapper[5016]: I1011 07:59:13.326052 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 11 07:59:13 crc kubenswrapper[5016]: I1011 07:59:13.406385 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghvbg\" (UniqueName: \"kubernetes.io/projected/b11896aa-37c5-4e47-9d73-73ca143b75b1-kube-api-access-ghvbg\") pod \"nova-scheduler-0\" (UID: \"b11896aa-37c5-4e47-9d73-73ca143b75b1\") " pod="openstack/nova-scheduler-0"
Oct 11 07:59:13 crc kubenswrapper[5016]: I1011 07:59:13.406556 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b11896aa-37c5-4e47-9d73-73ca143b75b1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b11896aa-37c5-4e47-9d73-73ca143b75b1\") " pod="openstack/nova-scheduler-0"
Oct 11 07:59:13 crc kubenswrapper[5016]: I1011 07:59:13.406782 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b11896aa-37c5-4e47-9d73-73ca143b75b1-config-data\") pod \"nova-scheduler-0\" (UID: \"b11896aa-37c5-4e47-9d73-73ca143b75b1\") " pod="openstack/nova-scheduler-0"
Oct 11 07:59:13 crc kubenswrapper[5016]: I1011 07:59:13.509077 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b11896aa-37c5-4e47-9d73-73ca143b75b1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b11896aa-37c5-4e47-9d73-73ca143b75b1\") " pod="openstack/nova-scheduler-0"
Oct 11 07:59:13 crc kubenswrapper[5016]: I1011 07:59:13.509211 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b11896aa-37c5-4e47-9d73-73ca143b75b1-config-data\") pod \"nova-scheduler-0\" (UID: \"b11896aa-37c5-4e47-9d73-73ca143b75b1\") " pod="openstack/nova-scheduler-0"
Oct 11 07:59:13 crc kubenswrapper[5016]: I1011 07:59:13.509336 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghvbg\" (UniqueName: \"kubernetes.io/projected/b11896aa-37c5-4e47-9d73-73ca143b75b1-kube-api-access-ghvbg\") pod \"nova-scheduler-0\" (UID: \"b11896aa-37c5-4e47-9d73-73ca143b75b1\") " pod="openstack/nova-scheduler-0"
Oct 11 07:59:13 crc kubenswrapper[5016]: I1011 07:59:13.512713 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b11896aa-37c5-4e47-9d73-73ca143b75b1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b11896aa-37c5-4e47-9d73-73ca143b75b1\") " pod="openstack/nova-scheduler-0"
Oct 11 07:59:13 crc kubenswrapper[5016]: I1011 07:59:13.513245 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b11896aa-37c5-4e47-9d73-73ca143b75b1-config-data\") pod \"nova-scheduler-0\" (UID: \"b11896aa-37c5-4e47-9d73-73ca143b75b1\") " pod="openstack/nova-scheduler-0"
Oct 11 07:59:13 crc kubenswrapper[5016]: I1011 07:59:13.528988 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghvbg\" (UniqueName: \"kubernetes.io/projected/b11896aa-37c5-4e47-9d73-73ca143b75b1-kube-api-access-ghvbg\") pod \"nova-scheduler-0\" (UID: \"b11896aa-37c5-4e47-9d73-73ca143b75b1\") " pod="openstack/nova-scheduler-0"
\"kubernetes.io/projected/b11896aa-37c5-4e47-9d73-73ca143b75b1-kube-api-access-ghvbg\") pod \"nova-scheduler-0\" (UID: \"b11896aa-37c5-4e47-9d73-73ca143b75b1\") " pod="openstack/nova-scheduler-0" Oct 11 07:59:13 crc kubenswrapper[5016]: I1011 07:59:13.603078 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 11 07:59:13 crc kubenswrapper[5016]: I1011 07:59:13.937496 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e182b619-d220-435a-80ed-74611b49f193","Type":"ContainerStarted","Data":"4799dbde12a47df4100f26bb446892bc1e9ce60bbbe722629e8e9f261d8a9708"} Oct 11 07:59:13 crc kubenswrapper[5016]: I1011 07:59:13.937924 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e182b619-d220-435a-80ed-74611b49f193","Type":"ContainerStarted","Data":"e74f097414cf240c21d01fd9819086b70a46e99470bb748ee108d3a6550a801a"} Oct 11 07:59:13 crc kubenswrapper[5016]: I1011 07:59:13.963323 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.9632977990000002 podStartE2EDuration="2.963297799s" podCreationTimestamp="2025-10-11 07:59:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:59:13.959858179 +0000 UTC m=+1141.860314125" watchObservedRunningTime="2025-10-11 07:59:13.963297799 +0000 UTC m=+1141.863753755" Oct 11 07:59:14 crc kubenswrapper[5016]: I1011 07:59:14.044948 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 11 07:59:14 crc kubenswrapper[5016]: W1011 07:59:14.058306 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb11896aa_37c5_4e47_9d73_73ca143b75b1.slice/crio-655de681ffd7508a3fe7534edabe73be41f97e167440ab0ea71d131d220be21a WatchSource:0}: Error finding container 655de681ffd7508a3fe7534edabe73be41f97e167440ab0ea71d131d220be21a: Status 404 returned error can't find the container with id 655de681ffd7508a3fe7534edabe73be41f97e167440ab0ea71d131d220be21a Oct 11 07:59:14 crc kubenswrapper[5016]: I1011 07:59:14.957559 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b11896aa-37c5-4e47-9d73-73ca143b75b1","Type":"ContainerStarted","Data":"b7eed50862ab4a1b1d6c97d79b6e1d4e73179daaa950fed83dcf7f4e8f337d06"} Oct 11 07:59:14 crc kubenswrapper[5016]: I1011 07:59:14.958064 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b11896aa-37c5-4e47-9d73-73ca143b75b1","Type":"ContainerStarted","Data":"655de681ffd7508a3fe7534edabe73be41f97e167440ab0ea71d131d220be21a"} Oct 11 07:59:14 crc kubenswrapper[5016]: I1011 07:59:14.994847 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.994820833 podStartE2EDuration="1.994820833s" podCreationTimestamp="2025-10-11 07:59:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:59:14.981947457 +0000 UTC m=+1142.882403443" watchObservedRunningTime="2025-10-11 07:59:14.994820833 +0000 UTC m=+1142.895276819" Oct 11 07:59:15 crc kubenswrapper[5016]: I1011 07:59:15.153823 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ca94bc65-ebb1-4ccb-bff6-7645d39255fd" path="/var/lib/kubelet/pods/ca94bc65-ebb1-4ccb-bff6-7645d39255fd/volumes" Oct 11 07:59:17 crc kubenswrapper[5016]: I1011 07:59:17.291051 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 11 07:59:17 crc kubenswrapper[5016]: I1011 07:59:17.291468 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 11 07:59:18 crc kubenswrapper[5016]: I1011 07:59:18.603804 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 11 07:59:19 crc kubenswrapper[5016]: I1011 07:59:19.857760 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 11 07:59:19 crc kubenswrapper[5016]: I1011 07:59:19.858170 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 11 07:59:20 crc kubenswrapper[5016]: I1011 07:59:20.873920 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9ace03b9-7f45-49ca-ac24-3401d9820d71" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 07:59:20 crc kubenswrapper[5016]: I1011 07:59:20.873940 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9ace03b9-7f45-49ca-ac24-3401d9820d71" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 07:59:22 crc kubenswrapper[5016]: I1011 07:59:22.291352 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 11 07:59:22 crc kubenswrapper[5016]: I1011 07:59:22.292339 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 11 07:59:23 crc kubenswrapper[5016]: I1011 07:59:23.309934 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e182b619-d220-435a-80ed-74611b49f193" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 07:59:23 crc kubenswrapper[5016]: I1011 07:59:23.309934 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e182b619-d220-435a-80ed-74611b49f193" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 07:59:23 crc kubenswrapper[5016]: I1011 07:59:23.603631 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 11 07:59:23 crc kubenswrapper[5016]: I1011 07:59:23.647450 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 11 07:59:24 crc kubenswrapper[5016]: I1011 07:59:24.080382 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 11 07:59:27 crc kubenswrapper[5016]: I1011 07:59:27.109218 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 11 07:59:29 crc kubenswrapper[5016]: I1011 07:59:29.871253 5016 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-api-0" Oct 11 07:59:29 crc kubenswrapper[5016]: I1011 07:59:29.874013 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 11 07:59:29 crc kubenswrapper[5016]: I1011 07:59:29.883142 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 11 07:59:29 crc kubenswrapper[5016]: I1011 07:59:29.883967 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 11 07:59:30 crc kubenswrapper[5016]: I1011 07:59:30.114290 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 11 07:59:30 crc kubenswrapper[5016]: I1011 07:59:30.119838 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 11 07:59:32 crc kubenswrapper[5016]: I1011 07:59:32.297334 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 11 07:59:32 crc kubenswrapper[5016]: I1011 07:59:32.297930 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 11 07:59:32 crc kubenswrapper[5016]: I1011 07:59:32.308570 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 11 07:59:32 crc kubenswrapper[5016]: I1011 07:59:32.310857 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 11 07:59:40 crc kubenswrapper[5016]: I1011 07:59:40.576642 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 11 07:59:41 crc kubenswrapper[5016]: I1011 07:59:41.517344 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 11 07:59:45 crc kubenswrapper[5016]: I1011 07:59:45.193401 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="67a018eb-911e-4491-9dae-a1dfb3172e05" containerName="rabbitmq" containerID="cri-o://9c57af0c537256e2f384b930add4101c402778d3b22c083bc3c4b2987bb658f1" gracePeriod=604796 Oct 11 07:59:45 crc kubenswrapper[5016]: I1011 07:59:45.634344 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="bae29196-1d16-4563-9e7d-0981a96a352f" containerName="rabbitmq" containerID="cri-o://68beb7a34c7e2a08f1f40aef07d8f2f2992ec09be2feee39a5b0b5817aaf9ab1" gracePeriod=604796 Oct 11 07:59:47 crc kubenswrapper[5016]: I1011 07:59:47.272422 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="67a018eb-911e-4491-9dae-a1dfb3172e05" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Oct 11 07:59:47 crc kubenswrapper[5016]: I1011 07:59:47.555219 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="bae29196-1d16-4563-9e7d-0981a96a352f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Oct 11 07:59:51 crc kubenswrapper[5016]: I1011 07:59:51.990824 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.103719 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9tmz\" (UniqueName: \"kubernetes.io/projected/67a018eb-911e-4491-9dae-a1dfb3172e05-kube-api-access-c9tmz\") pod \"67a018eb-911e-4491-9dae-a1dfb3172e05\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.104174 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67a018eb-911e-4491-9dae-a1dfb3172e05-rabbitmq-confd\") pod \"67a018eb-911e-4491-9dae-a1dfb3172e05\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.105039 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67a018eb-911e-4491-9dae-a1dfb3172e05-server-conf\") pod \"67a018eb-911e-4491-9dae-a1dfb3172e05\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.106646 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67a018eb-911e-4491-9dae-a1dfb3172e05-rabbitmq-tls\") pod \"67a018eb-911e-4491-9dae-a1dfb3172e05\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.106761 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67a018eb-911e-4491-9dae-a1dfb3172e05-plugins-conf\") pod \"67a018eb-911e-4491-9dae-a1dfb3172e05\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.106780 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67a018eb-911e-4491-9dae-a1dfb3172e05-pod-info\") pod \"67a018eb-911e-4491-9dae-a1dfb3172e05\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.106867 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67a018eb-911e-4491-9dae-a1dfb3172e05-erlang-cookie-secret\") pod \"67a018eb-911e-4491-9dae-a1dfb3172e05\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.106899 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67a018eb-911e-4491-9dae-a1dfb3172e05-rabbitmq-plugins\") pod \"67a018eb-911e-4491-9dae-a1dfb3172e05\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.106927 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67a018eb-911e-4491-9dae-a1dfb3172e05-rabbitmq-erlang-cookie\") pod \"67a018eb-911e-4491-9dae-a1dfb3172e05\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.106950 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"67a018eb-911e-4491-9dae-a1dfb3172e05\" (UID: 
\"67a018eb-911e-4491-9dae-a1dfb3172e05\") " Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.106966 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67a018eb-911e-4491-9dae-a1dfb3172e05-config-data\") pod \"67a018eb-911e-4491-9dae-a1dfb3172e05\" (UID: \"67a018eb-911e-4491-9dae-a1dfb3172e05\") " Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.110881 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67a018eb-911e-4491-9dae-a1dfb3172e05-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "67a018eb-911e-4491-9dae-a1dfb3172e05" (UID: "67a018eb-911e-4491-9dae-a1dfb3172e05"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.111382 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67a018eb-911e-4491-9dae-a1dfb3172e05-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "67a018eb-911e-4491-9dae-a1dfb3172e05" (UID: "67a018eb-911e-4491-9dae-a1dfb3172e05"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.111416 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67a018eb-911e-4491-9dae-a1dfb3172e05-kube-api-access-c9tmz" (OuterVolumeSpecName: "kube-api-access-c9tmz") pod "67a018eb-911e-4491-9dae-a1dfb3172e05" (UID: "67a018eb-911e-4491-9dae-a1dfb3172e05"). InnerVolumeSpecName "kube-api-access-c9tmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.111871 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67a018eb-911e-4491-9dae-a1dfb3172e05-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "67a018eb-911e-4491-9dae-a1dfb3172e05" (UID: "67a018eb-911e-4491-9dae-a1dfb3172e05"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.120012 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a018eb-911e-4491-9dae-a1dfb3172e05-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "67a018eb-911e-4491-9dae-a1dfb3172e05" (UID: "67a018eb-911e-4491-9dae-a1dfb3172e05"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.122560 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/67a018eb-911e-4491-9dae-a1dfb3172e05-pod-info" (OuterVolumeSpecName: "pod-info") pod "67a018eb-911e-4491-9dae-a1dfb3172e05" (UID: "67a018eb-911e-4491-9dae-a1dfb3172e05"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.126782 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "67a018eb-911e-4491-9dae-a1dfb3172e05" (UID: "67a018eb-911e-4491-9dae-a1dfb3172e05"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.132894 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67a018eb-911e-4491-9dae-a1dfb3172e05-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "67a018eb-911e-4491-9dae-a1dfb3172e05" (UID: "67a018eb-911e-4491-9dae-a1dfb3172e05"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.157993 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67a018eb-911e-4491-9dae-a1dfb3172e05-config-data" (OuterVolumeSpecName: "config-data") pod "67a018eb-911e-4491-9dae-a1dfb3172e05" (UID: "67a018eb-911e-4491-9dae-a1dfb3172e05"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.184765 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67a018eb-911e-4491-9dae-a1dfb3172e05-server-conf" (OuterVolumeSpecName: "server-conf") pod "67a018eb-911e-4491-9dae-a1dfb3172e05" (UID: "67a018eb-911e-4491-9dae-a1dfb3172e05"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.200666 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.209232 5016 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67a018eb-911e-4491-9dae-a1dfb3172e05-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.209257 5016 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67a018eb-911e-4491-9dae-a1dfb3172e05-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.209266 5016 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67a018eb-911e-4491-9dae-a1dfb3172e05-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.209284 5016 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.209294 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67a018eb-911e-4491-9dae-a1dfb3172e05-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.209302 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9tmz\" (UniqueName: \"kubernetes.io/projected/67a018eb-911e-4491-9dae-a1dfb3172e05-kube-api-access-c9tmz\") on node \"crc\" DevicePath \"\"" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.209310 5016 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67a018eb-911e-4491-9dae-a1dfb3172e05-server-conf\") on node \"crc\" DevicePath \"\"" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.209318 5016 reconciler_common.go:293] "Volume detached for 
volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67a018eb-911e-4491-9dae-a1dfb3172e05-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.209326 5016 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67a018eb-911e-4491-9dae-a1dfb3172e05-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.209333 5016 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67a018eb-911e-4491-9dae-a1dfb3172e05-pod-info\") on node \"crc\" DevicePath \"\"" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.247173 5016 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.271973 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67a018eb-911e-4491-9dae-a1dfb3172e05-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "67a018eb-911e-4491-9dae-a1dfb3172e05" (UID: "67a018eb-911e-4491-9dae-a1dfb3172e05"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.310282 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bae29196-1d16-4563-9e7d-0981a96a352f-plugins-conf\") pod \"bae29196-1d16-4563-9e7d-0981a96a352f\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.310344 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bae29196-1d16-4563-9e7d-0981a96a352f-rabbitmq-plugins\") pod \"bae29196-1d16-4563-9e7d-0981a96a352f\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.310397 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bae29196-1d16-4563-9e7d-0981a96a352f-server-conf\") pod \"bae29196-1d16-4563-9e7d-0981a96a352f\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.310419 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bae29196-1d16-4563-9e7d-0981a96a352f-rabbitmq-confd\") pod \"bae29196-1d16-4563-9e7d-0981a96a352f\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.310495 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bae29196-1d16-4563-9e7d-0981a96a352f-pod-info\") pod \"bae29196-1d16-4563-9e7d-0981a96a352f\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.310558 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2fkv\" (UniqueName: \"kubernetes.io/projected/bae29196-1d16-4563-9e7d-0981a96a352f-kube-api-access-x2fkv\") pod \"bae29196-1d16-4563-9e7d-0981a96a352f\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.310599 5016 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"bae29196-1d16-4563-9e7d-0981a96a352f\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.310629 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bae29196-1d16-4563-9e7d-0981a96a352f-erlang-cookie-secret\") pod \"bae29196-1d16-4563-9e7d-0981a96a352f\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.310689 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bae29196-1d16-4563-9e7d-0981a96a352f-rabbitmq-erlang-cookie\") pod \"bae29196-1d16-4563-9e7d-0981a96a352f\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.310722 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bae29196-1d16-4563-9e7d-0981a96a352f-rabbitmq-tls\") pod \"bae29196-1d16-4563-9e7d-0981a96a352f\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.310739 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bae29196-1d16-4563-9e7d-0981a96a352f-config-data\") pod \"bae29196-1d16-4563-9e7d-0981a96a352f\" (UID: \"bae29196-1d16-4563-9e7d-0981a96a352f\") " Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.311143 5016 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67a018eb-911e-4491-9dae-a1dfb3172e05-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.311167 5016 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.311942 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bae29196-1d16-4563-9e7d-0981a96a352f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "bae29196-1d16-4563-9e7d-0981a96a352f" (UID: "bae29196-1d16-4563-9e7d-0981a96a352f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.311977 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae29196-1d16-4563-9e7d-0981a96a352f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "bae29196-1d16-4563-9e7d-0981a96a352f" (UID: "bae29196-1d16-4563-9e7d-0981a96a352f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.314680 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bae29196-1d16-4563-9e7d-0981a96a352f-kube-api-access-x2fkv" (OuterVolumeSpecName: "kube-api-access-x2fkv") pod "bae29196-1d16-4563-9e7d-0981a96a352f" (UID: "bae29196-1d16-4563-9e7d-0981a96a352f"). InnerVolumeSpecName "kube-api-access-x2fkv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.315060 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bae29196-1d16-4563-9e7d-0981a96a352f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "bae29196-1d16-4563-9e7d-0981a96a352f" (UID: "bae29196-1d16-4563-9e7d-0981a96a352f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.316894 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/bae29196-1d16-4563-9e7d-0981a96a352f-pod-info" (OuterVolumeSpecName: "pod-info") pod "bae29196-1d16-4563-9e7d-0981a96a352f" (UID: "bae29196-1d16-4563-9e7d-0981a96a352f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.317316 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bae29196-1d16-4563-9e7d-0981a96a352f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "bae29196-1d16-4563-9e7d-0981a96a352f" (UID: "bae29196-1d16-4563-9e7d-0981a96a352f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.322559 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "bae29196-1d16-4563-9e7d-0981a96a352f" (UID: "bae29196-1d16-4563-9e7d-0981a96a352f"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.339085 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bae29196-1d16-4563-9e7d-0981a96a352f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "bae29196-1d16-4563-9e7d-0981a96a352f" (UID: "bae29196-1d16-4563-9e7d-0981a96a352f"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.339429 5016 generic.go:334] "Generic (PLEG): container finished" podID="bae29196-1d16-4563-9e7d-0981a96a352f" containerID="68beb7a34c7e2a08f1f40aef07d8f2f2992ec09be2feee39a5b0b5817aaf9ab1" exitCode=0 Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.339512 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.339509 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bae29196-1d16-4563-9e7d-0981a96a352f","Type":"ContainerDied","Data":"68beb7a34c7e2a08f1f40aef07d8f2f2992ec09be2feee39a5b0b5817aaf9ab1"} Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.339606 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"bae29196-1d16-4563-9e7d-0981a96a352f","Type":"ContainerDied","Data":"4e578620178d7cef30bbb0915f3be000b0cb7383e788ab9f16312fe5e07264a2"} Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.339628 5016 scope.go:117] "RemoveContainer" containerID="68beb7a34c7e2a08f1f40aef07d8f2f2992ec09be2feee39a5b0b5817aaf9ab1" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.342913 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae29196-1d16-4563-9e7d-0981a96a352f-config-data" (OuterVolumeSpecName: "config-data") pod "bae29196-1d16-4563-9e7d-0981a96a352f" (UID: "bae29196-1d16-4563-9e7d-0981a96a352f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.342955 5016 generic.go:334] "Generic (PLEG): container finished" podID="67a018eb-911e-4491-9dae-a1dfb3172e05" containerID="9c57af0c537256e2f384b930add4101c402778d3b22c083bc3c4b2987bb658f1" exitCode=0 Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.343093 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"67a018eb-911e-4491-9dae-a1dfb3172e05","Type":"ContainerDied","Data":"9c57af0c537256e2f384b930add4101c402778d3b22c083bc3c4b2987bb658f1"} Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.343256 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.343361 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"67a018eb-911e-4491-9dae-a1dfb3172e05","Type":"ContainerDied","Data":"5e68c267711d3241cfc717117769c9604f357e37beae35151d523acfa8635879"} Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.366730 5016 scope.go:117] "RemoveContainer" containerID="3d68943f406a1d1d8566b5da25fdde5e8390a80f69134b7a73a0a0027cfd3e5c" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.370586 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae29196-1d16-4563-9e7d-0981a96a352f-server-conf" (OuterVolumeSpecName: "server-conf") pod "bae29196-1d16-4563-9e7d-0981a96a352f" (UID: "bae29196-1d16-4563-9e7d-0981a96a352f"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.400608 5016 scope.go:117] "RemoveContainer" containerID="68beb7a34c7e2a08f1f40aef07d8f2f2992ec09be2feee39a5b0b5817aaf9ab1" Oct 11 07:59:52 crc kubenswrapper[5016]: E1011 07:59:52.401049 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68beb7a34c7e2a08f1f40aef07d8f2f2992ec09be2feee39a5b0b5817aaf9ab1\": container with ID starting with 68beb7a34c7e2a08f1f40aef07d8f2f2992ec09be2feee39a5b0b5817aaf9ab1 not found: ID does not exist" containerID="68beb7a34c7e2a08f1f40aef07d8f2f2992ec09be2feee39a5b0b5817aaf9ab1" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.401079 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68beb7a34c7e2a08f1f40aef07d8f2f2992ec09be2feee39a5b0b5817aaf9ab1"} err="failed to get container status \"68beb7a34c7e2a08f1f40aef07d8f2f2992ec09be2feee39a5b0b5817aaf9ab1\": rpc error: code = NotFound desc = could not find container \"68beb7a34c7e2a08f1f40aef07d8f2f2992ec09be2feee39a5b0b5817aaf9ab1\": container with ID starting with 68beb7a34c7e2a08f1f40aef07d8f2f2992ec09be2feee39a5b0b5817aaf9ab1 not found: ID does not exist" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.401098 5016 scope.go:117] "RemoveContainer" containerID="3d68943f406a1d1d8566b5da25fdde5e8390a80f69134b7a73a0a0027cfd3e5c" Oct 11 07:59:52 crc kubenswrapper[5016]: E1011 07:59:52.401364 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d68943f406a1d1d8566b5da25fdde5e8390a80f69134b7a73a0a0027cfd3e5c\": container with ID starting with 3d68943f406a1d1d8566b5da25fdde5e8390a80f69134b7a73a0a0027cfd3e5c not found: ID does not exist" containerID="3d68943f406a1d1d8566b5da25fdde5e8390a80f69134b7a73a0a0027cfd3e5c" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.401387 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d68943f406a1d1d8566b5da25fdde5e8390a80f69134b7a73a0a0027cfd3e5c"} err="failed to get container status \"3d68943f406a1d1d8566b5da25fdde5e8390a80f69134b7a73a0a0027cfd3e5c\": rpc error: code = NotFound desc = could not find container \"3d68943f406a1d1d8566b5da25fdde5e8390a80f69134b7a73a0a0027cfd3e5c\": container with ID starting with 3d68943f406a1d1d8566b5da25fdde5e8390a80f69134b7a73a0a0027cfd3e5c not found: ID does not exist" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.401399 5016 scope.go:117] "RemoveContainer" containerID="9c57af0c537256e2f384b930add4101c402778d3b22c083bc3c4b2987bb658f1" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.404360 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.412318 5016 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bae29196-1d16-4563-9e7d-0981a96a352f-pod-info\") on node \"crc\" DevicePath \"\"" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.412358 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2fkv\" (UniqueName: \"kubernetes.io/projected/bae29196-1d16-4563-9e7d-0981a96a352f-kube-api-access-x2fkv\") on node \"crc\" DevicePath \"\"" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.412379 5016 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume 
\"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.412390 5016 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bae29196-1d16-4563-9e7d-0981a96a352f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.412402 5016 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bae29196-1d16-4563-9e7d-0981a96a352f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.412414 5016 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bae29196-1d16-4563-9e7d-0981a96a352f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.412426 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bae29196-1d16-4563-9e7d-0981a96a352f-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.412434 5016 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bae29196-1d16-4563-9e7d-0981a96a352f-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.412442 5016 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bae29196-1d16-4563-9e7d-0981a96a352f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.412460 5016 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bae29196-1d16-4563-9e7d-0981a96a352f-server-conf\") on node \"crc\" DevicePath \"\"" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.430182 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.440244 5016 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.442375 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 11 07:59:52 crc kubenswrapper[5016]: E1011 07:59:52.442794 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bae29196-1d16-4563-9e7d-0981a96a352f" containerName="rabbitmq" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.442808 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="bae29196-1d16-4563-9e7d-0981a96a352f" containerName="rabbitmq" Oct 11 07:59:52 crc kubenswrapper[5016]: E1011 07:59:52.442828 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67a018eb-911e-4491-9dae-a1dfb3172e05" containerName="rabbitmq" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.442835 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="67a018eb-911e-4491-9dae-a1dfb3172e05" containerName="rabbitmq" Oct 11 07:59:52 crc kubenswrapper[5016]: E1011 07:59:52.442856 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bae29196-1d16-4563-9e7d-0981a96a352f" containerName="setup-container" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 
07:59:52.442863 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="bae29196-1d16-4563-9e7d-0981a96a352f" containerName="setup-container" Oct 11 07:59:52 crc kubenswrapper[5016]: E1011 07:59:52.442874 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67a018eb-911e-4491-9dae-a1dfb3172e05" containerName="setup-container" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.442879 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="67a018eb-911e-4491-9dae-a1dfb3172e05" containerName="setup-container" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.443039 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="67a018eb-911e-4491-9dae-a1dfb3172e05" containerName="rabbitmq" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.443052 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="bae29196-1d16-4563-9e7d-0981a96a352f" containerName="rabbitmq" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.444667 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.449219 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-665lw" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.449391 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.456011 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.456910 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.457116 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.457253 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.457291 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.457373 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.460322 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bae29196-1d16-4563-9e7d-0981a96a352f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "bae29196-1d16-4563-9e7d-0981a96a352f" (UID: "bae29196-1d16-4563-9e7d-0981a96a352f"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.513834 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1d694fc7-1470-43de-a417-fe670e0bace9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.513883 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1d694fc7-1470-43de-a417-fe670e0bace9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.513922 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nwlb\" (UniqueName: \"kubernetes.io/projected/1d694fc7-1470-43de-a417-fe670e0bace9-kube-api-access-8nwlb\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.513970 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1d694fc7-1470-43de-a417-fe670e0bace9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.514132 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1d694fc7-1470-43de-a417-fe670e0bace9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.514156 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1d694fc7-1470-43de-a417-fe670e0bace9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.514195 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d694fc7-1470-43de-a417-fe670e0bace9-config-data\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.514210 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.514246 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1d694fc7-1470-43de-a417-fe670e0bace9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc 
kubenswrapper[5016]: I1011 07:59:52.514264 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1d694fc7-1470-43de-a417-fe670e0bace9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.514279 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1d694fc7-1470-43de-a417-fe670e0bace9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.514322 5016 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.514333 5016 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bae29196-1d16-4563-9e7d-0981a96a352f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.537020 5016 scope.go:117] "RemoveContainer" containerID="eb076986b74562535eb3c3836b33202cc4ddaa78a14b45090fb5fe3aaa857fad" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.574866 5016 scope.go:117] "RemoveContainer" containerID="9c57af0c537256e2f384b930add4101c402778d3b22c083bc3c4b2987bb658f1" Oct 11 07:59:52 crc kubenswrapper[5016]: E1011 07:59:52.577548 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c57af0c537256e2f384b930add4101c402778d3b22c083bc3c4b2987bb658f1\": container with ID starting with 9c57af0c537256e2f384b930add4101c402778d3b22c083bc3c4b2987bb658f1 not found: ID does not exist" containerID="9c57af0c537256e2f384b930add4101c402778d3b22c083bc3c4b2987bb658f1" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.577599 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c57af0c537256e2f384b930add4101c402778d3b22c083bc3c4b2987bb658f1"} err="failed to get container status \"9c57af0c537256e2f384b930add4101c402778d3b22c083bc3c4b2987bb658f1\": rpc error: code = NotFound desc = could not find container \"9c57af0c537256e2f384b930add4101c402778d3b22c083bc3c4b2987bb658f1\": container with ID starting with 9c57af0c537256e2f384b930add4101c402778d3b22c083bc3c4b2987bb658f1 not found: ID does not exist" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.577634 5016 scope.go:117] "RemoveContainer" containerID="eb076986b74562535eb3c3836b33202cc4ddaa78a14b45090fb5fe3aaa857fad" Oct 11 07:59:52 crc kubenswrapper[5016]: E1011 07:59:52.578207 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb076986b74562535eb3c3836b33202cc4ddaa78a14b45090fb5fe3aaa857fad\": container with ID starting with eb076986b74562535eb3c3836b33202cc4ddaa78a14b45090fb5fe3aaa857fad not found: ID does not exist" containerID="eb076986b74562535eb3c3836b33202cc4ddaa78a14b45090fb5fe3aaa857fad" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.578249 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb076986b74562535eb3c3836b33202cc4ddaa78a14b45090fb5fe3aaa857fad"} 
err="failed to get container status \"eb076986b74562535eb3c3836b33202cc4ddaa78a14b45090fb5fe3aaa857fad\": rpc error: code = NotFound desc = could not find container \"eb076986b74562535eb3c3836b33202cc4ddaa78a14b45090fb5fe3aaa857fad\": container with ID starting with eb076986b74562535eb3c3836b33202cc4ddaa78a14b45090fb5fe3aaa857fad not found: ID does not exist" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.615782 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1d694fc7-1470-43de-a417-fe670e0bace9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.615870 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1d694fc7-1470-43de-a417-fe670e0bace9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.615906 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1d694fc7-1470-43de-a417-fe670e0bace9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.615946 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nwlb\" (UniqueName: \"kubernetes.io/projected/1d694fc7-1470-43de-a417-fe670e0bace9-kube-api-access-8nwlb\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.615996 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1d694fc7-1470-43de-a417-fe670e0bace9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.616066 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1d694fc7-1470-43de-a417-fe670e0bace9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.616087 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1d694fc7-1470-43de-a417-fe670e0bace9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.616110 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d694fc7-1470-43de-a417-fe670e0bace9-config-data\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.616130 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod 
\"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.616178 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1d694fc7-1470-43de-a417-fe670e0bace9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.616199 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1d694fc7-1470-43de-a417-fe670e0bace9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.616759 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1d694fc7-1470-43de-a417-fe670e0bace9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.617324 5016 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.617772 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1d694fc7-1470-43de-a417-fe670e0bace9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.617938 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d694fc7-1470-43de-a417-fe670e0bace9-config-data\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.618065 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1d694fc7-1470-43de-a417-fe670e0bace9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.618364 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1d694fc7-1470-43de-a417-fe670e0bace9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.622359 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1d694fc7-1470-43de-a417-fe670e0bace9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.628007 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/1d694fc7-1470-43de-a417-fe670e0bace9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.631808 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1d694fc7-1470-43de-a417-fe670e0bace9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.635185 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nwlb\" (UniqueName: \"kubernetes.io/projected/1d694fc7-1470-43de-a417-fe670e0bace9-kube-api-access-8nwlb\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.635226 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1d694fc7-1470-43de-a417-fe670e0bace9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.656882 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"1d694fc7-1470-43de-a417-fe670e0bace9\") " pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.748238 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.758254 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.771333 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.773412 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.775814 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.776072 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.776134 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.776219 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.776321 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-5mm85" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.776353 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.776592 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.787421 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.837834 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.923048 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8ab2c11e-c631-4f54-8f27-51c6fed6f548-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.923144 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8ab2c11e-c631-4f54-8f27-51c6fed6f548-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.923176 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8ab2c11e-c631-4f54-8f27-51c6fed6f548-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.923269 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.923351 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8ab2c11e-c631-4f54-8f27-51c6fed6f548-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " 
pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.923392 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8ab2c11e-c631-4f54-8f27-51c6fed6f548-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.923431 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8ab2c11e-c631-4f54-8f27-51c6fed6f548-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.923506 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8ab2c11e-c631-4f54-8f27-51c6fed6f548-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.923538 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8ab2c11e-c631-4f54-8f27-51c6fed6f548-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.923568 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8ab2c11e-c631-4f54-8f27-51c6fed6f548-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:52 crc kubenswrapper[5016]: I1011 07:59:52.923678 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wh2z\" (UniqueName: \"kubernetes.io/projected/8ab2c11e-c631-4f54-8f27-51c6fed6f548-kube-api-access-2wh2z\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:53 crc kubenswrapper[5016]: I1011 07:59:53.026323 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8ab2c11e-c631-4f54-8f27-51c6fed6f548-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:53 crc kubenswrapper[5016]: I1011 07:59:53.026848 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8ab2c11e-c631-4f54-8f27-51c6fed6f548-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:53 crc kubenswrapper[5016]: I1011 07:59:53.026869 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8ab2c11e-c631-4f54-8f27-51c6fed6f548-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " 
pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:53 crc kubenswrapper[5016]: I1011 07:59:53.026935 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8ab2c11e-c631-4f54-8f27-51c6fed6f548-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:53 crc kubenswrapper[5016]: I1011 07:59:53.026964 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8ab2c11e-c631-4f54-8f27-51c6fed6f548-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:53 crc kubenswrapper[5016]: I1011 07:59:53.026995 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8ab2c11e-c631-4f54-8f27-51c6fed6f548-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:53 crc kubenswrapper[5016]: I1011 07:59:53.027032 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wh2z\" (UniqueName: \"kubernetes.io/projected/8ab2c11e-c631-4f54-8f27-51c6fed6f548-kube-api-access-2wh2z\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:53 crc kubenswrapper[5016]: I1011 07:59:53.027064 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8ab2c11e-c631-4f54-8f27-51c6fed6f548-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:53 crc kubenswrapper[5016]: I1011 07:59:53.027150 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8ab2c11e-c631-4f54-8f27-51c6fed6f548-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:53 crc kubenswrapper[5016]: I1011 07:59:53.027180 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8ab2c11e-c631-4f54-8f27-51c6fed6f548-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:53 crc kubenswrapper[5016]: I1011 07:59:53.027217 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:53 crc kubenswrapper[5016]: I1011 07:59:53.027973 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8ab2c11e-c631-4f54-8f27-51c6fed6f548-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:53 crc kubenswrapper[5016]: I1011 07:59:53.028302 5016 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8ab2c11e-c631-4f54-8f27-51c6fed6f548-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:53 crc kubenswrapper[5016]: I1011 07:59:53.028588 5016 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:53 crc kubenswrapper[5016]: I1011 07:59:53.028623 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8ab2c11e-c631-4f54-8f27-51c6fed6f548-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:53 crc kubenswrapper[5016]: I1011 07:59:53.029027 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8ab2c11e-c631-4f54-8f27-51c6fed6f548-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:53 crc kubenswrapper[5016]: I1011 07:59:53.029638 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8ab2c11e-c631-4f54-8f27-51c6fed6f548-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:53 crc kubenswrapper[5016]: I1011 07:59:53.030244 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8ab2c11e-c631-4f54-8f27-51c6fed6f548-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:53 crc kubenswrapper[5016]: I1011 07:59:53.031001 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8ab2c11e-c631-4f54-8f27-51c6fed6f548-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:53 crc kubenswrapper[5016]: I1011 07:59:53.031977 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8ab2c11e-c631-4f54-8f27-51c6fed6f548-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:53 crc kubenswrapper[5016]: I1011 07:59:53.032825 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8ab2c11e-c631-4f54-8f27-51c6fed6f548-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:53 crc kubenswrapper[5016]: I1011 07:59:53.054559 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wh2z\" (UniqueName: \"kubernetes.io/projected/8ab2c11e-c631-4f54-8f27-51c6fed6f548-kube-api-access-2wh2z\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:53 crc kubenswrapper[5016]: I1011 07:59:53.070628 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8ab2c11e-c631-4f54-8f27-51c6fed6f548\") " pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:53 crc kubenswrapper[5016]: I1011 07:59:53.104870 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 11 07:59:53 crc kubenswrapper[5016]: W1011 07:59:53.115947 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d694fc7_1470_43de_a417_fe670e0bace9.slice/crio-c70120b7dfb8e4351f073b3c1f5cfdf0aef9fa6ba5ec48d9535467567ffd5ec9 WatchSource:0}: Error finding container c70120b7dfb8e4351f073b3c1f5cfdf0aef9fa6ba5ec48d9535467567ffd5ec9: Status 404 returned error can't find the container with id c70120b7dfb8e4351f073b3c1f5cfdf0aef9fa6ba5ec48d9535467567ffd5ec9 Oct 11 07:59:53 crc kubenswrapper[5016]: I1011 07:59:53.137185 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 11 07:59:53 crc kubenswrapper[5016]: I1011 07:59:53.166012 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67a018eb-911e-4491-9dae-a1dfb3172e05" path="/var/lib/kubelet/pods/67a018eb-911e-4491-9dae-a1dfb3172e05/volumes" Oct 11 07:59:53 crc kubenswrapper[5016]: I1011 07:59:53.169998 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bae29196-1d16-4563-9e7d-0981a96a352f" path="/var/lib/kubelet/pods/bae29196-1d16-4563-9e7d-0981a96a352f/volumes" Oct 11 07:59:53 crc kubenswrapper[5016]: I1011 07:59:53.364677 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1d694fc7-1470-43de-a417-fe670e0bace9","Type":"ContainerStarted","Data":"c70120b7dfb8e4351f073b3c1f5cfdf0aef9fa6ba5ec48d9535467567ffd5ec9"} Oct 11 07:59:53 crc kubenswrapper[5016]: I1011 07:59:53.422736 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 11 07:59:54 crc kubenswrapper[5016]: I1011 07:59:54.378870 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8ab2c11e-c631-4f54-8f27-51c6fed6f548","Type":"ContainerStarted","Data":"c245bdfb85bf84e40c19bf3937f550c72cfd485ec534dccdd4287521d1feea0a"} Oct 11 07:59:55 crc kubenswrapper[5016]: I1011 07:59:55.390997 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8ab2c11e-c631-4f54-8f27-51c6fed6f548","Type":"ContainerStarted","Data":"591ce38caf27b877456b9c43315807c9dfc5ad2383207e2198fdc105c2b7d6b5"} Oct 11 07:59:55 crc kubenswrapper[5016]: I1011 07:59:55.393194 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1d694fc7-1470-43de-a417-fe670e0bace9","Type":"ContainerStarted","Data":"bd35c1af0aad80e9aa00071c5f5fd2f27bed560d7c86b97879f1a78c299a61ae"} Oct 11 07:59:55 crc kubenswrapper[5016]: I1011 07:59:55.693248 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5745cbd8d7-9ggwc"] Oct 11 07:59:55 crc kubenswrapper[5016]: I1011 07:59:55.695014 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" Oct 11 07:59:55 crc kubenswrapper[5016]: I1011 07:59:55.697320 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Oct 11 07:59:55 crc kubenswrapper[5016]: I1011 07:59:55.721273 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5745cbd8d7-9ggwc"] Oct 11 07:59:55 crc kubenswrapper[5016]: I1011 07:59:55.783439 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-ovsdbserver-sb\") pod \"dnsmasq-dns-5745cbd8d7-9ggwc\" (UID: \"87fb508d-c0d7-41e0-aa7d-f9a13bced652\") " pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" Oct 11 07:59:55 crc kubenswrapper[5016]: I1011 07:59:55.783496 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-dns-svc\") pod \"dnsmasq-dns-5745cbd8d7-9ggwc\" (UID: \"87fb508d-c0d7-41e0-aa7d-f9a13bced652\") " pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" Oct 11 07:59:55 crc kubenswrapper[5016]: I1011 07:59:55.783592 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qdxh\" (UniqueName: \"kubernetes.io/projected/87fb508d-c0d7-41e0-aa7d-f9a13bced652-kube-api-access-5qdxh\") pod \"dnsmasq-dns-5745cbd8d7-9ggwc\" (UID: \"87fb508d-c0d7-41e0-aa7d-f9a13bced652\") " pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" Oct 11 07:59:55 crc kubenswrapper[5016]: I1011 07:59:55.783648 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-config\") pod \"dnsmasq-dns-5745cbd8d7-9ggwc\" (UID: \"87fb508d-c0d7-41e0-aa7d-f9a13bced652\") " pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" Oct 11 07:59:55 crc kubenswrapper[5016]: I1011 07:59:55.783781 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-ovsdbserver-nb\") pod \"dnsmasq-dns-5745cbd8d7-9ggwc\" (UID: \"87fb508d-c0d7-41e0-aa7d-f9a13bced652\") " pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" Oct 11 07:59:55 crc kubenswrapper[5016]: I1011 07:59:55.783805 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-openstack-edpm-ipam\") pod \"dnsmasq-dns-5745cbd8d7-9ggwc\" (UID: \"87fb508d-c0d7-41e0-aa7d-f9a13bced652\") " pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" Oct 11 07:59:55 crc kubenswrapper[5016]: I1011 07:59:55.885551 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qdxh\" (UniqueName: \"kubernetes.io/projected/87fb508d-c0d7-41e0-aa7d-f9a13bced652-kube-api-access-5qdxh\") pod \"dnsmasq-dns-5745cbd8d7-9ggwc\" (UID: \"87fb508d-c0d7-41e0-aa7d-f9a13bced652\") " pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" Oct 11 07:59:55 crc kubenswrapper[5016]: I1011 07:59:55.885646 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-config\") pod \"dnsmasq-dns-5745cbd8d7-9ggwc\" (UID: 
\"87fb508d-c0d7-41e0-aa7d-f9a13bced652\") " pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" Oct 11 07:59:55 crc kubenswrapper[5016]: I1011 07:59:55.885758 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-ovsdbserver-nb\") pod \"dnsmasq-dns-5745cbd8d7-9ggwc\" (UID: \"87fb508d-c0d7-41e0-aa7d-f9a13bced652\") " pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" Oct 11 07:59:55 crc kubenswrapper[5016]: I1011 07:59:55.885795 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-openstack-edpm-ipam\") pod \"dnsmasq-dns-5745cbd8d7-9ggwc\" (UID: \"87fb508d-c0d7-41e0-aa7d-f9a13bced652\") " pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" Oct 11 07:59:55 crc kubenswrapper[5016]: I1011 07:59:55.885831 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-ovsdbserver-sb\") pod \"dnsmasq-dns-5745cbd8d7-9ggwc\" (UID: \"87fb508d-c0d7-41e0-aa7d-f9a13bced652\") " pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" Oct 11 07:59:55 crc kubenswrapper[5016]: I1011 07:59:55.885868 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-dns-svc\") pod \"dnsmasq-dns-5745cbd8d7-9ggwc\" (UID: \"87fb508d-c0d7-41e0-aa7d-f9a13bced652\") " pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" Oct 11 07:59:55 crc kubenswrapper[5016]: I1011 07:59:55.886571 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-config\") pod \"dnsmasq-dns-5745cbd8d7-9ggwc\" (UID: \"87fb508d-c0d7-41e0-aa7d-f9a13bced652\") " pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" Oct 11 07:59:55 crc kubenswrapper[5016]: I1011 07:59:55.886655 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-dns-svc\") pod \"dnsmasq-dns-5745cbd8d7-9ggwc\" (UID: \"87fb508d-c0d7-41e0-aa7d-f9a13bced652\") " pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" Oct 11 07:59:55 crc kubenswrapper[5016]: I1011 07:59:55.887146 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-ovsdbserver-sb\") pod \"dnsmasq-dns-5745cbd8d7-9ggwc\" (UID: \"87fb508d-c0d7-41e0-aa7d-f9a13bced652\") " pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" Oct 11 07:59:55 crc kubenswrapper[5016]: I1011 07:59:55.887316 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-ovsdbserver-nb\") pod \"dnsmasq-dns-5745cbd8d7-9ggwc\" (UID: \"87fb508d-c0d7-41e0-aa7d-f9a13bced652\") " pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" Oct 11 07:59:55 crc kubenswrapper[5016]: I1011 07:59:55.887620 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-openstack-edpm-ipam\") pod \"dnsmasq-dns-5745cbd8d7-9ggwc\" (UID: \"87fb508d-c0d7-41e0-aa7d-f9a13bced652\") " pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" Oct 11 07:59:55 crc 
kubenswrapper[5016]: I1011 07:59:55.908869 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qdxh\" (UniqueName: \"kubernetes.io/projected/87fb508d-c0d7-41e0-aa7d-f9a13bced652-kube-api-access-5qdxh\") pod \"dnsmasq-dns-5745cbd8d7-9ggwc\" (UID: \"87fb508d-c0d7-41e0-aa7d-f9a13bced652\") " pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" Oct 11 07:59:56 crc kubenswrapper[5016]: I1011 07:59:56.036966 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" Oct 11 07:59:56 crc kubenswrapper[5016]: I1011 07:59:56.291753 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5745cbd8d7-9ggwc"] Oct 11 07:59:56 crc kubenswrapper[5016]: I1011 07:59:56.408727 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" event={"ID":"87fb508d-c0d7-41e0-aa7d-f9a13bced652","Type":"ContainerStarted","Data":"4d7b3fe6d7b2c2b811d9b992767d93e4fefc62d46f832b76b761a9e8acbed377"} Oct 11 07:59:57 crc kubenswrapper[5016]: I1011 07:59:57.420859 5016 generic.go:334] "Generic (PLEG): container finished" podID="87fb508d-c0d7-41e0-aa7d-f9a13bced652" containerID="d21a1acdfc9a97cd4a3ad637d8842f44a0fc19b8b3f093b902523867d7c85fb7" exitCode=0 Oct 11 07:59:57 crc kubenswrapper[5016]: I1011 07:59:57.420932 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" event={"ID":"87fb508d-c0d7-41e0-aa7d-f9a13bced652","Type":"ContainerDied","Data":"d21a1acdfc9a97cd4a3ad637d8842f44a0fc19b8b3f093b902523867d7c85fb7"} Oct 11 07:59:58 crc kubenswrapper[5016]: I1011 07:59:58.430972 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" event={"ID":"87fb508d-c0d7-41e0-aa7d-f9a13bced652","Type":"ContainerStarted","Data":"5f7da840a851c702c2db8345855ed9a85c059b8923f4281efce1b7a15dd21206"} Oct 11 07:59:58 crc kubenswrapper[5016]: I1011 07:59:58.431946 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" Oct 11 07:59:58 crc kubenswrapper[5016]: I1011 07:59:58.464583 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" podStartSLOduration=3.464550005 podStartE2EDuration="3.464550005s" podCreationTimestamp="2025-10-11 07:59:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 07:59:58.451632303 +0000 UTC m=+1186.352088269" watchObservedRunningTime="2025-10-11 07:59:58.464550005 +0000 UTC m=+1186.365006001" Oct 11 08:00:00 crc kubenswrapper[5016]: I1011 08:00:00.159956 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336160-9fjnx"] Oct 11 08:00:00 crc kubenswrapper[5016]: I1011 08:00:00.161773 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336160-9fjnx" Oct 11 08:00:00 crc kubenswrapper[5016]: I1011 08:00:00.164794 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 11 08:00:00 crc kubenswrapper[5016]: I1011 08:00:00.165219 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 11 08:00:00 crc kubenswrapper[5016]: I1011 08:00:00.176403 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336160-9fjnx"] Oct 11 08:00:00 crc kubenswrapper[5016]: I1011 08:00:00.267115 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05d96a07-ce5d-47d7-aad4-30553dd060ad-config-volume\") pod \"collect-profiles-29336160-9fjnx\" (UID: \"05d96a07-ce5d-47d7-aad4-30553dd060ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336160-9fjnx" Oct 11 08:00:00 crc kubenswrapper[5016]: I1011 08:00:00.267168 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/05d96a07-ce5d-47d7-aad4-30553dd060ad-secret-volume\") pod \"collect-profiles-29336160-9fjnx\" (UID: \"05d96a07-ce5d-47d7-aad4-30553dd060ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336160-9fjnx" Oct 11 08:00:00 crc kubenswrapper[5016]: I1011 08:00:00.267199 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g75mt\" (UniqueName: \"kubernetes.io/projected/05d96a07-ce5d-47d7-aad4-30553dd060ad-kube-api-access-g75mt\") pod \"collect-profiles-29336160-9fjnx\" (UID: \"05d96a07-ce5d-47d7-aad4-30553dd060ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336160-9fjnx" Oct 11 08:00:00 crc kubenswrapper[5016]: I1011 08:00:00.369085 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05d96a07-ce5d-47d7-aad4-30553dd060ad-config-volume\") pod \"collect-profiles-29336160-9fjnx\" (UID: \"05d96a07-ce5d-47d7-aad4-30553dd060ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336160-9fjnx" Oct 11 08:00:00 crc kubenswrapper[5016]: I1011 08:00:00.369146 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/05d96a07-ce5d-47d7-aad4-30553dd060ad-secret-volume\") pod \"collect-profiles-29336160-9fjnx\" (UID: \"05d96a07-ce5d-47d7-aad4-30553dd060ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336160-9fjnx" Oct 11 08:00:00 crc kubenswrapper[5016]: I1011 08:00:00.369176 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g75mt\" (UniqueName: \"kubernetes.io/projected/05d96a07-ce5d-47d7-aad4-30553dd060ad-kube-api-access-g75mt\") pod \"collect-profiles-29336160-9fjnx\" (UID: \"05d96a07-ce5d-47d7-aad4-30553dd060ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336160-9fjnx" Oct 11 08:00:00 crc kubenswrapper[5016]: I1011 08:00:00.370447 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05d96a07-ce5d-47d7-aad4-30553dd060ad-config-volume\") pod 
\"collect-profiles-29336160-9fjnx\" (UID: \"05d96a07-ce5d-47d7-aad4-30553dd060ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336160-9fjnx" Oct 11 08:00:00 crc kubenswrapper[5016]: I1011 08:00:00.383761 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/05d96a07-ce5d-47d7-aad4-30553dd060ad-secret-volume\") pod \"collect-profiles-29336160-9fjnx\" (UID: \"05d96a07-ce5d-47d7-aad4-30553dd060ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336160-9fjnx" Oct 11 08:00:00 crc kubenswrapper[5016]: I1011 08:00:00.386633 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g75mt\" (UniqueName: \"kubernetes.io/projected/05d96a07-ce5d-47d7-aad4-30553dd060ad-kube-api-access-g75mt\") pod \"collect-profiles-29336160-9fjnx\" (UID: \"05d96a07-ce5d-47d7-aad4-30553dd060ad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336160-9fjnx" Oct 11 08:00:00 crc kubenswrapper[5016]: I1011 08:00:00.481568 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336160-9fjnx" Oct 11 08:00:00 crc kubenswrapper[5016]: I1011 08:00:00.913716 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336160-9fjnx"] Oct 11 08:00:00 crc kubenswrapper[5016]: W1011 08:00:00.925911 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05d96a07_ce5d_47d7_aad4_30553dd060ad.slice/crio-d0d1d25030812cd680ee9973c37bb7dbf210d3f05b73417e7a3a88e7f00c40a5 WatchSource:0}: Error finding container d0d1d25030812cd680ee9973c37bb7dbf210d3f05b73417e7a3a88e7f00c40a5: Status 404 returned error can't find the container with id d0d1d25030812cd680ee9973c37bb7dbf210d3f05b73417e7a3a88e7f00c40a5 Oct 11 08:00:01 crc kubenswrapper[5016]: I1011 08:00:01.461523 5016 generic.go:334] "Generic (PLEG): container finished" podID="05d96a07-ce5d-47d7-aad4-30553dd060ad" containerID="33cb1358fc7c65916c32a00259dc4c9fff7d10e89e2a1a3ff80cf94b877fa57a" exitCode=0 Oct 11 08:00:01 crc kubenswrapper[5016]: I1011 08:00:01.461619 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336160-9fjnx" event={"ID":"05d96a07-ce5d-47d7-aad4-30553dd060ad","Type":"ContainerDied","Data":"33cb1358fc7c65916c32a00259dc4c9fff7d10e89e2a1a3ff80cf94b877fa57a"} Oct 11 08:00:01 crc kubenswrapper[5016]: I1011 08:00:01.461876 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336160-9fjnx" event={"ID":"05d96a07-ce5d-47d7-aad4-30553dd060ad","Type":"ContainerStarted","Data":"d0d1d25030812cd680ee9973c37bb7dbf210d3f05b73417e7a3a88e7f00c40a5"} Oct 11 08:00:02 crc kubenswrapper[5016]: I1011 08:00:02.775578 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336160-9fjnx" Oct 11 08:00:02 crc kubenswrapper[5016]: I1011 08:00:02.807615 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g75mt\" (UniqueName: \"kubernetes.io/projected/05d96a07-ce5d-47d7-aad4-30553dd060ad-kube-api-access-g75mt\") pod \"05d96a07-ce5d-47d7-aad4-30553dd060ad\" (UID: \"05d96a07-ce5d-47d7-aad4-30553dd060ad\") " Oct 11 08:00:02 crc kubenswrapper[5016]: I1011 08:00:02.807997 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/05d96a07-ce5d-47d7-aad4-30553dd060ad-secret-volume\") pod \"05d96a07-ce5d-47d7-aad4-30553dd060ad\" (UID: \"05d96a07-ce5d-47d7-aad4-30553dd060ad\") " Oct 11 08:00:02 crc kubenswrapper[5016]: I1011 08:00:02.808054 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05d96a07-ce5d-47d7-aad4-30553dd060ad-config-volume\") pod \"05d96a07-ce5d-47d7-aad4-30553dd060ad\" (UID: \"05d96a07-ce5d-47d7-aad4-30553dd060ad\") " Oct 11 08:00:02 crc kubenswrapper[5016]: I1011 08:00:02.809294 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05d96a07-ce5d-47d7-aad4-30553dd060ad-config-volume" (OuterVolumeSpecName: "config-volume") pod "05d96a07-ce5d-47d7-aad4-30553dd060ad" (UID: "05d96a07-ce5d-47d7-aad4-30553dd060ad"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 08:00:02 crc kubenswrapper[5016]: I1011 08:00:02.813929 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05d96a07-ce5d-47d7-aad4-30553dd060ad-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "05d96a07-ce5d-47d7-aad4-30553dd060ad" (UID: "05d96a07-ce5d-47d7-aad4-30553dd060ad"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:00:02 crc kubenswrapper[5016]: I1011 08:00:02.814016 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05d96a07-ce5d-47d7-aad4-30553dd060ad-kube-api-access-g75mt" (OuterVolumeSpecName: "kube-api-access-g75mt") pod "05d96a07-ce5d-47d7-aad4-30553dd060ad" (UID: "05d96a07-ce5d-47d7-aad4-30553dd060ad"). InnerVolumeSpecName "kube-api-access-g75mt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:00:02 crc kubenswrapper[5016]: I1011 08:00:02.909422 5016 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/05d96a07-ce5d-47d7-aad4-30553dd060ad-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 11 08:00:02 crc kubenswrapper[5016]: I1011 08:00:02.909464 5016 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05d96a07-ce5d-47d7-aad4-30553dd060ad-config-volume\") on node \"crc\" DevicePath \"\"" Oct 11 08:00:02 crc kubenswrapper[5016]: I1011 08:00:02.909477 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g75mt\" (UniqueName: \"kubernetes.io/projected/05d96a07-ce5d-47d7-aad4-30553dd060ad-kube-api-access-g75mt\") on node \"crc\" DevicePath \"\"" Oct 11 08:00:03 crc kubenswrapper[5016]: I1011 08:00:03.483539 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336160-9fjnx" event={"ID":"05d96a07-ce5d-47d7-aad4-30553dd060ad","Type":"ContainerDied","Data":"d0d1d25030812cd680ee9973c37bb7dbf210d3f05b73417e7a3a88e7f00c40a5"} Oct 11 08:00:03 crc kubenswrapper[5016]: I1011 08:00:03.484061 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0d1d25030812cd680ee9973c37bb7dbf210d3f05b73417e7a3a88e7f00c40a5" Oct 11 08:00:03 crc kubenswrapper[5016]: I1011 08:00:03.483631 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336160-9fjnx" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.038939 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.108911 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869677f947-82f6z"] Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.109118 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-869677f947-82f6z" podUID="18843252-f80a-450d-905c-f07e2bddddb0" containerName="dnsmasq-dns" containerID="cri-o://8d71fc3b80b4d818a14c1c21d00213bd170548b8b6b62a92c85a27a70859e79e" gracePeriod=10 Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.248038 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f5d87575-clqzw"] Oct 11 08:00:06 crc kubenswrapper[5016]: E1011 08:00:06.248361 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05d96a07-ce5d-47d7-aad4-30553dd060ad" containerName="collect-profiles" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.248372 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="05d96a07-ce5d-47d7-aad4-30553dd060ad" containerName="collect-profiles" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.248549 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="05d96a07-ce5d-47d7-aad4-30553dd060ad" containerName="collect-profiles" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.249545 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f5d87575-clqzw" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.275162 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f5d87575-clqzw"] Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.376786 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-config\") pod \"dnsmasq-dns-5f5d87575-clqzw\" (UID: \"45b51e7a-9892-4e64-ba8e-13e58364666b\") " pod="openstack/dnsmasq-dns-5f5d87575-clqzw" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.377210 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jszk4\" (UniqueName: \"kubernetes.io/projected/45b51e7a-9892-4e64-ba8e-13e58364666b-kube-api-access-jszk4\") pod \"dnsmasq-dns-5f5d87575-clqzw\" (UID: \"45b51e7a-9892-4e64-ba8e-13e58364666b\") " pod="openstack/dnsmasq-dns-5f5d87575-clqzw" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.378039 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-ovsdbserver-nb\") pod \"dnsmasq-dns-5f5d87575-clqzw\" (UID: \"45b51e7a-9892-4e64-ba8e-13e58364666b\") " pod="openstack/dnsmasq-dns-5f5d87575-clqzw" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.378113 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-openstack-edpm-ipam\") pod \"dnsmasq-dns-5f5d87575-clqzw\" (UID: \"45b51e7a-9892-4e64-ba8e-13e58364666b\") " pod="openstack/dnsmasq-dns-5f5d87575-clqzw" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.378165 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-ovsdbserver-sb\") pod \"dnsmasq-dns-5f5d87575-clqzw\" (UID: \"45b51e7a-9892-4e64-ba8e-13e58364666b\") " pod="openstack/dnsmasq-dns-5f5d87575-clqzw" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.378210 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-dns-svc\") pod \"dnsmasq-dns-5f5d87575-clqzw\" (UID: \"45b51e7a-9892-4e64-ba8e-13e58364666b\") " pod="openstack/dnsmasq-dns-5f5d87575-clqzw" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.479460 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jszk4\" (UniqueName: \"kubernetes.io/projected/45b51e7a-9892-4e64-ba8e-13e58364666b-kube-api-access-jszk4\") pod \"dnsmasq-dns-5f5d87575-clqzw\" (UID: \"45b51e7a-9892-4e64-ba8e-13e58364666b\") " pod="openstack/dnsmasq-dns-5f5d87575-clqzw" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.479596 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-ovsdbserver-nb\") pod \"dnsmasq-dns-5f5d87575-clqzw\" (UID: \"45b51e7a-9892-4e64-ba8e-13e58364666b\") " pod="openstack/dnsmasq-dns-5f5d87575-clqzw" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.479675 5016 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-openstack-edpm-ipam\") pod \"dnsmasq-dns-5f5d87575-clqzw\" (UID: \"45b51e7a-9892-4e64-ba8e-13e58364666b\") " pod="openstack/dnsmasq-dns-5f5d87575-clqzw" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.479715 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-ovsdbserver-sb\") pod \"dnsmasq-dns-5f5d87575-clqzw\" (UID: \"45b51e7a-9892-4e64-ba8e-13e58364666b\") " pod="openstack/dnsmasq-dns-5f5d87575-clqzw" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.479760 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-dns-svc\") pod \"dnsmasq-dns-5f5d87575-clqzw\" (UID: \"45b51e7a-9892-4e64-ba8e-13e58364666b\") " pod="openstack/dnsmasq-dns-5f5d87575-clqzw" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.479811 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-config\") pod \"dnsmasq-dns-5f5d87575-clqzw\" (UID: \"45b51e7a-9892-4e64-ba8e-13e58364666b\") " pod="openstack/dnsmasq-dns-5f5d87575-clqzw" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.481939 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-config\") pod \"dnsmasq-dns-5f5d87575-clqzw\" (UID: \"45b51e7a-9892-4e64-ba8e-13e58364666b\") " pod="openstack/dnsmasq-dns-5f5d87575-clqzw" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.482594 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-ovsdbserver-sb\") pod \"dnsmasq-dns-5f5d87575-clqzw\" (UID: \"45b51e7a-9892-4e64-ba8e-13e58364666b\") " pod="openstack/dnsmasq-dns-5f5d87575-clqzw" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.482753 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-ovsdbserver-nb\") pod \"dnsmasq-dns-5f5d87575-clqzw\" (UID: \"45b51e7a-9892-4e64-ba8e-13e58364666b\") " pod="openstack/dnsmasq-dns-5f5d87575-clqzw" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.484267 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-openstack-edpm-ipam\") pod \"dnsmasq-dns-5f5d87575-clqzw\" (UID: \"45b51e7a-9892-4e64-ba8e-13e58364666b\") " pod="openstack/dnsmasq-dns-5f5d87575-clqzw" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.485394 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-dns-svc\") pod \"dnsmasq-dns-5f5d87575-clqzw\" (UID: \"45b51e7a-9892-4e64-ba8e-13e58364666b\") " pod="openstack/dnsmasq-dns-5f5d87575-clqzw" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.500550 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jszk4\" (UniqueName: 
\"kubernetes.io/projected/45b51e7a-9892-4e64-ba8e-13e58364666b-kube-api-access-jszk4\") pod \"dnsmasq-dns-5f5d87575-clqzw\" (UID: \"45b51e7a-9892-4e64-ba8e-13e58364666b\") " pod="openstack/dnsmasq-dns-5f5d87575-clqzw" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.538490 5016 generic.go:334] "Generic (PLEG): container finished" podID="18843252-f80a-450d-905c-f07e2bddddb0" containerID="8d71fc3b80b4d818a14c1c21d00213bd170548b8b6b62a92c85a27a70859e79e" exitCode=0 Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.538550 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869677f947-82f6z" event={"ID":"18843252-f80a-450d-905c-f07e2bddddb0","Type":"ContainerDied","Data":"8d71fc3b80b4d818a14c1c21d00213bd170548b8b6b62a92c85a27a70859e79e"} Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.591418 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f5d87575-clqzw" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.685830 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869677f947-82f6z" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.793483 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18843252-f80a-450d-905c-f07e2bddddb0-ovsdbserver-nb\") pod \"18843252-f80a-450d-905c-f07e2bddddb0\" (UID: \"18843252-f80a-450d-905c-f07e2bddddb0\") " Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.793538 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18843252-f80a-450d-905c-f07e2bddddb0-dns-svc\") pod \"18843252-f80a-450d-905c-f07e2bddddb0\" (UID: \"18843252-f80a-450d-905c-f07e2bddddb0\") " Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.793567 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k28h8\" (UniqueName: \"kubernetes.io/projected/18843252-f80a-450d-905c-f07e2bddddb0-kube-api-access-k28h8\") pod \"18843252-f80a-450d-905c-f07e2bddddb0\" (UID: \"18843252-f80a-450d-905c-f07e2bddddb0\") " Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.793631 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18843252-f80a-450d-905c-f07e2bddddb0-ovsdbserver-sb\") pod \"18843252-f80a-450d-905c-f07e2bddddb0\" (UID: \"18843252-f80a-450d-905c-f07e2bddddb0\") " Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.794219 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18843252-f80a-450d-905c-f07e2bddddb0-config\") pod \"18843252-f80a-450d-905c-f07e2bddddb0\" (UID: \"18843252-f80a-450d-905c-f07e2bddddb0\") " Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.801950 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18843252-f80a-450d-905c-f07e2bddddb0-kube-api-access-k28h8" (OuterVolumeSpecName: "kube-api-access-k28h8") pod "18843252-f80a-450d-905c-f07e2bddddb0" (UID: "18843252-f80a-450d-905c-f07e2bddddb0"). InnerVolumeSpecName "kube-api-access-k28h8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.845043 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18843252-f80a-450d-905c-f07e2bddddb0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "18843252-f80a-450d-905c-f07e2bddddb0" (UID: "18843252-f80a-450d-905c-f07e2bddddb0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.849247 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f5d87575-clqzw"] Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.849687 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18843252-f80a-450d-905c-f07e2bddddb0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "18843252-f80a-450d-905c-f07e2bddddb0" (UID: "18843252-f80a-450d-905c-f07e2bddddb0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.859029 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18843252-f80a-450d-905c-f07e2bddddb0-config" (OuterVolumeSpecName: "config") pod "18843252-f80a-450d-905c-f07e2bddddb0" (UID: "18843252-f80a-450d-905c-f07e2bddddb0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.860093 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18843252-f80a-450d-905c-f07e2bddddb0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "18843252-f80a-450d-905c-f07e2bddddb0" (UID: "18843252-f80a-450d-905c-f07e2bddddb0"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.896587 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k28h8\" (UniqueName: \"kubernetes.io/projected/18843252-f80a-450d-905c-f07e2bddddb0-kube-api-access-k28h8\") on node \"crc\" DevicePath \"\"" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.896926 5016 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/18843252-f80a-450d-905c-f07e2bddddb0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.897020 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18843252-f80a-450d-905c-f07e2bddddb0-config\") on node \"crc\" DevicePath \"\"" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.897103 5016 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/18843252-f80a-450d-905c-f07e2bddddb0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 11 08:00:06 crc kubenswrapper[5016]: I1011 08:00:06.897182 5016 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/18843252-f80a-450d-905c-f07e2bddddb0-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 11 08:00:07 crc kubenswrapper[5016]: I1011 08:00:07.549206 5016 generic.go:334] "Generic (PLEG): container finished" podID="45b51e7a-9892-4e64-ba8e-13e58364666b" containerID="89c38a0433a54425aa3d68e595eb80532309286ba30f24e1de9b53bef9ab6692" exitCode=0 Oct 11 08:00:07 crc kubenswrapper[5016]: I1011 08:00:07.549313 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f5d87575-clqzw" event={"ID":"45b51e7a-9892-4e64-ba8e-13e58364666b","Type":"ContainerDied","Data":"89c38a0433a54425aa3d68e595eb80532309286ba30f24e1de9b53bef9ab6692"} Oct 11 08:00:07 crc kubenswrapper[5016]: I1011 08:00:07.551882 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f5d87575-clqzw" event={"ID":"45b51e7a-9892-4e64-ba8e-13e58364666b","Type":"ContainerStarted","Data":"fc235fba927daac06ea270f72880d79285bf1d80afccc5b83a22a7664cb88b16"} Oct 11 08:00:07 crc kubenswrapper[5016]: I1011 08:00:07.554763 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869677f947-82f6z" event={"ID":"18843252-f80a-450d-905c-f07e2bddddb0","Type":"ContainerDied","Data":"4f55afe6003b9fa0dd1f7e58aada853c9bea811365bdbc41ff9b42e37dbf6557"} Oct 11 08:00:07 crc kubenswrapper[5016]: I1011 08:00:07.554809 5016 scope.go:117] "RemoveContainer" containerID="8d71fc3b80b4d818a14c1c21d00213bd170548b8b6b62a92c85a27a70859e79e" Oct 11 08:00:07 crc kubenswrapper[5016]: I1011 08:00:07.554954 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-869677f947-82f6z" Oct 11 08:00:07 crc kubenswrapper[5016]: I1011 08:00:07.675400 5016 scope.go:117] "RemoveContainer" containerID="a1f1439715e312d96b0cb9f472c53244e0d0f462b2b973300031d9c3fe171640" Oct 11 08:00:07 crc kubenswrapper[5016]: I1011 08:00:07.722142 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869677f947-82f6z"] Oct 11 08:00:07 crc kubenswrapper[5016]: I1011 08:00:07.738175 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-869677f947-82f6z"] Oct 11 08:00:08 crc kubenswrapper[5016]: I1011 08:00:08.568692 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f5d87575-clqzw" event={"ID":"45b51e7a-9892-4e64-ba8e-13e58364666b","Type":"ContainerStarted","Data":"6b236323e7f7bbcd9e8ffec90d8e40b6cf0e43fd9152aef1cf6e626c4aa0c639"} Oct 11 08:00:08 crc kubenswrapper[5016]: I1011 08:00:08.568996 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f5d87575-clqzw" Oct 11 08:00:08 crc kubenswrapper[5016]: I1011 08:00:08.600627 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f5d87575-clqzw" podStartSLOduration=2.600606885 podStartE2EDuration="2.600606885s" podCreationTimestamp="2025-10-11 08:00:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 08:00:08.595829748 +0000 UTC m=+1196.496285704" watchObservedRunningTime="2025-10-11 08:00:08.600606885 +0000 UTC m=+1196.501062831" Oct 11 08:00:09 crc kubenswrapper[5016]: I1011 08:00:09.153191 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18843252-f80a-450d-905c-f07e2bddddb0" path="/var/lib/kubelet/pods/18843252-f80a-450d-905c-f07e2bddddb0/volumes" Oct 11 08:00:16 crc kubenswrapper[5016]: I1011 08:00:16.593341 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f5d87575-clqzw" Oct 11 08:00:16 crc kubenswrapper[5016]: I1011 08:00:16.685913 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5745cbd8d7-9ggwc"] Oct 11 08:00:16 crc kubenswrapper[5016]: I1011 08:00:16.686265 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" podUID="87fb508d-c0d7-41e0-aa7d-f9a13bced652" containerName="dnsmasq-dns" containerID="cri-o://5f7da840a851c702c2db8345855ed9a85c059b8923f4281efce1b7a15dd21206" gracePeriod=10 Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.192241 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.298917 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-config\") pod \"87fb508d-c0d7-41e0-aa7d-f9a13bced652\" (UID: \"87fb508d-c0d7-41e0-aa7d-f9a13bced652\") " Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.299052 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-ovsdbserver-nb\") pod \"87fb508d-c0d7-41e0-aa7d-f9a13bced652\" (UID: \"87fb508d-c0d7-41e0-aa7d-f9a13bced652\") " Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.299130 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-dns-svc\") pod \"87fb508d-c0d7-41e0-aa7d-f9a13bced652\" (UID: \"87fb508d-c0d7-41e0-aa7d-f9a13bced652\") " Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.299150 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qdxh\" (UniqueName: \"kubernetes.io/projected/87fb508d-c0d7-41e0-aa7d-f9a13bced652-kube-api-access-5qdxh\") pod \"87fb508d-c0d7-41e0-aa7d-f9a13bced652\" (UID: \"87fb508d-c0d7-41e0-aa7d-f9a13bced652\") " Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.299180 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-ovsdbserver-sb\") pod \"87fb508d-c0d7-41e0-aa7d-f9a13bced652\" (UID: \"87fb508d-c0d7-41e0-aa7d-f9a13bced652\") " Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.300222 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-openstack-edpm-ipam\") pod \"87fb508d-c0d7-41e0-aa7d-f9a13bced652\" (UID: \"87fb508d-c0d7-41e0-aa7d-f9a13bced652\") " Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.306444 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87fb508d-c0d7-41e0-aa7d-f9a13bced652-kube-api-access-5qdxh" (OuterVolumeSpecName: "kube-api-access-5qdxh") pod "87fb508d-c0d7-41e0-aa7d-f9a13bced652" (UID: "87fb508d-c0d7-41e0-aa7d-f9a13bced652"). InnerVolumeSpecName "kube-api-access-5qdxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.348904 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "87fb508d-c0d7-41e0-aa7d-f9a13bced652" (UID: "87fb508d-c0d7-41e0-aa7d-f9a13bced652"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.353875 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "87fb508d-c0d7-41e0-aa7d-f9a13bced652" (UID: "87fb508d-c0d7-41e0-aa7d-f9a13bced652"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.358267 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "87fb508d-c0d7-41e0-aa7d-f9a13bced652" (UID: "87fb508d-c0d7-41e0-aa7d-f9a13bced652"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.359134 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "87fb508d-c0d7-41e0-aa7d-f9a13bced652" (UID: "87fb508d-c0d7-41e0-aa7d-f9a13bced652"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.361182 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-config" (OuterVolumeSpecName: "config") pod "87fb508d-c0d7-41e0-aa7d-f9a13bced652" (UID: "87fb508d-c0d7-41e0-aa7d-f9a13bced652"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.404670 5016 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.404706 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-config\") on node \"crc\" DevicePath \"\"" Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.404716 5016 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.404724 5016 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.404733 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qdxh\" (UniqueName: \"kubernetes.io/projected/87fb508d-c0d7-41e0-aa7d-f9a13bced652-kube-api-access-5qdxh\") on node \"crc\" DevicePath \"\"" Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.404743 5016 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87fb508d-c0d7-41e0-aa7d-f9a13bced652-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.693641 5016 generic.go:334] "Generic (PLEG): container finished" podID="87fb508d-c0d7-41e0-aa7d-f9a13bced652" containerID="5f7da840a851c702c2db8345855ed9a85c059b8923f4281efce1b7a15dd21206" exitCode=0 Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.693726 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.693722 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" event={"ID":"87fb508d-c0d7-41e0-aa7d-f9a13bced652","Type":"ContainerDied","Data":"5f7da840a851c702c2db8345855ed9a85c059b8923f4281efce1b7a15dd21206"} Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.693780 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5745cbd8d7-9ggwc" event={"ID":"87fb508d-c0d7-41e0-aa7d-f9a13bced652","Type":"ContainerDied","Data":"4d7b3fe6d7b2c2b811d9b992767d93e4fefc62d46f832b76b761a9e8acbed377"} Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.693804 5016 scope.go:117] "RemoveContainer" containerID="5f7da840a851c702c2db8345855ed9a85c059b8923f4281efce1b7a15dd21206" Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.730471 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5745cbd8d7-9ggwc"] Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.730566 5016 scope.go:117] "RemoveContainer" containerID="d21a1acdfc9a97cd4a3ad637d8842f44a0fc19b8b3f093b902523867d7c85fb7" Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.738694 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5745cbd8d7-9ggwc"] Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.749382 5016 scope.go:117] "RemoveContainer" containerID="5f7da840a851c702c2db8345855ed9a85c059b8923f4281efce1b7a15dd21206" Oct 11 08:00:17 crc kubenswrapper[5016]: E1011 08:00:17.749908 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f7da840a851c702c2db8345855ed9a85c059b8923f4281efce1b7a15dd21206\": container with ID starting with 5f7da840a851c702c2db8345855ed9a85c059b8923f4281efce1b7a15dd21206 not found: ID does not exist" containerID="5f7da840a851c702c2db8345855ed9a85c059b8923f4281efce1b7a15dd21206" Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.749995 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f7da840a851c702c2db8345855ed9a85c059b8923f4281efce1b7a15dd21206"} err="failed to get container status \"5f7da840a851c702c2db8345855ed9a85c059b8923f4281efce1b7a15dd21206\": rpc error: code = NotFound desc = could not find container \"5f7da840a851c702c2db8345855ed9a85c059b8923f4281efce1b7a15dd21206\": container with ID starting with 5f7da840a851c702c2db8345855ed9a85c059b8923f4281efce1b7a15dd21206 not found: ID does not exist" Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.750047 5016 scope.go:117] "RemoveContainer" containerID="d21a1acdfc9a97cd4a3ad637d8842f44a0fc19b8b3f093b902523867d7c85fb7" Oct 11 08:00:17 crc kubenswrapper[5016]: E1011 08:00:17.750372 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d21a1acdfc9a97cd4a3ad637d8842f44a0fc19b8b3f093b902523867d7c85fb7\": container with ID starting with d21a1acdfc9a97cd4a3ad637d8842f44a0fc19b8b3f093b902523867d7c85fb7 not found: ID does not exist" containerID="d21a1acdfc9a97cd4a3ad637d8842f44a0fc19b8b3f093b902523867d7c85fb7" Oct 11 08:00:17 crc kubenswrapper[5016]: I1011 08:00:17.750393 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d21a1acdfc9a97cd4a3ad637d8842f44a0fc19b8b3f093b902523867d7c85fb7"} err="failed to get container status 
\"d21a1acdfc9a97cd4a3ad637d8842f44a0fc19b8b3f093b902523867d7c85fb7\": rpc error: code = NotFound desc = could not find container \"d21a1acdfc9a97cd4a3ad637d8842f44a0fc19b8b3f093b902523867d7c85fb7\": container with ID starting with d21a1acdfc9a97cd4a3ad637d8842f44a0fc19b8b3f093b902523867d7c85fb7 not found: ID does not exist" Oct 11 08:00:19 crc kubenswrapper[5016]: I1011 08:00:19.147111 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87fb508d-c0d7-41e0-aa7d-f9a13bced652" path="/var/lib/kubelet/pods/87fb508d-c0d7-41e0-aa7d-f9a13bced652/volumes" Oct 11 08:00:26 crc kubenswrapper[5016]: I1011 08:00:26.848600 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr"] Oct 11 08:00:26 crc kubenswrapper[5016]: E1011 08:00:26.849280 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87fb508d-c0d7-41e0-aa7d-f9a13bced652" containerName="init" Oct 11 08:00:26 crc kubenswrapper[5016]: I1011 08:00:26.849293 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="87fb508d-c0d7-41e0-aa7d-f9a13bced652" containerName="init" Oct 11 08:00:26 crc kubenswrapper[5016]: E1011 08:00:26.849310 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18843252-f80a-450d-905c-f07e2bddddb0" containerName="dnsmasq-dns" Oct 11 08:00:26 crc kubenswrapper[5016]: I1011 08:00:26.849316 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="18843252-f80a-450d-905c-f07e2bddddb0" containerName="dnsmasq-dns" Oct 11 08:00:26 crc kubenswrapper[5016]: E1011 08:00:26.849327 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87fb508d-c0d7-41e0-aa7d-f9a13bced652" containerName="dnsmasq-dns" Oct 11 08:00:26 crc kubenswrapper[5016]: I1011 08:00:26.849335 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="87fb508d-c0d7-41e0-aa7d-f9a13bced652" containerName="dnsmasq-dns" Oct 11 08:00:26 crc kubenswrapper[5016]: E1011 08:00:26.849350 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18843252-f80a-450d-905c-f07e2bddddb0" containerName="init" Oct 11 08:00:26 crc kubenswrapper[5016]: I1011 08:00:26.849356 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="18843252-f80a-450d-905c-f07e2bddddb0" containerName="init" Oct 11 08:00:26 crc kubenswrapper[5016]: I1011 08:00:26.849517 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="18843252-f80a-450d-905c-f07e2bddddb0" containerName="dnsmasq-dns" Oct 11 08:00:26 crc kubenswrapper[5016]: I1011 08:00:26.849542 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="87fb508d-c0d7-41e0-aa7d-f9a13bced652" containerName="dnsmasq-dns" Oct 11 08:00:26 crc kubenswrapper[5016]: I1011 08:00:26.850095 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr" Oct 11 08:00:26 crc kubenswrapper[5016]: I1011 08:00:26.855241 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 08:00:26 crc kubenswrapper[5016]: I1011 08:00:26.855633 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 08:00:26 crc kubenswrapper[5016]: I1011 08:00:26.856007 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l8l9k" Oct 11 08:00:26 crc kubenswrapper[5016]: I1011 08:00:26.856108 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 11 08:00:26 crc kubenswrapper[5016]: I1011 08:00:26.863782 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr"] Oct 11 08:00:26 crc kubenswrapper[5016]: I1011 08:00:26.990024 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13561672-ecec-49ba-8618-8f7a3fcddb7e-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr\" (UID: \"13561672-ecec-49ba-8618-8f7a3fcddb7e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr" Oct 11 08:00:26 crc kubenswrapper[5016]: I1011 08:00:26.990373 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2vl4\" (UniqueName: \"kubernetes.io/projected/13561672-ecec-49ba-8618-8f7a3fcddb7e-kube-api-access-g2vl4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr\" (UID: \"13561672-ecec-49ba-8618-8f7a3fcddb7e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr" Oct 11 08:00:26 crc kubenswrapper[5016]: I1011 08:00:26.990413 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13561672-ecec-49ba-8618-8f7a3fcddb7e-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr\" (UID: \"13561672-ecec-49ba-8618-8f7a3fcddb7e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr" Oct 11 08:00:26 crc kubenswrapper[5016]: I1011 08:00:26.990565 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13561672-ecec-49ba-8618-8f7a3fcddb7e-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr\" (UID: \"13561672-ecec-49ba-8618-8f7a3fcddb7e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr" Oct 11 08:00:27 crc kubenswrapper[5016]: I1011 08:00:27.092389 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13561672-ecec-49ba-8618-8f7a3fcddb7e-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr\" (UID: \"13561672-ecec-49ba-8618-8f7a3fcddb7e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr" Oct 11 08:00:27 crc kubenswrapper[5016]: I1011 08:00:27.092514 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13561672-ecec-49ba-8618-8f7a3fcddb7e-inventory\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr\" (UID: \"13561672-ecec-49ba-8618-8f7a3fcddb7e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr" Oct 11 08:00:27 crc kubenswrapper[5016]: I1011 08:00:27.092549 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2vl4\" (UniqueName: \"kubernetes.io/projected/13561672-ecec-49ba-8618-8f7a3fcddb7e-kube-api-access-g2vl4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr\" (UID: \"13561672-ecec-49ba-8618-8f7a3fcddb7e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr" Oct 11 08:00:27 crc kubenswrapper[5016]: I1011 08:00:27.092588 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13561672-ecec-49ba-8618-8f7a3fcddb7e-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr\" (UID: \"13561672-ecec-49ba-8618-8f7a3fcddb7e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr" Oct 11 08:00:27 crc kubenswrapper[5016]: I1011 08:00:27.098001 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13561672-ecec-49ba-8618-8f7a3fcddb7e-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr\" (UID: \"13561672-ecec-49ba-8618-8f7a3fcddb7e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr" Oct 11 08:00:27 crc kubenswrapper[5016]: I1011 08:00:27.098204 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13561672-ecec-49ba-8618-8f7a3fcddb7e-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr\" (UID: \"13561672-ecec-49ba-8618-8f7a3fcddb7e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr" Oct 11 08:00:27 crc kubenswrapper[5016]: I1011 08:00:27.098738 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13561672-ecec-49ba-8618-8f7a3fcddb7e-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr\" (UID: \"13561672-ecec-49ba-8618-8f7a3fcddb7e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr" Oct 11 08:00:27 crc kubenswrapper[5016]: I1011 08:00:27.110859 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2vl4\" (UniqueName: \"kubernetes.io/projected/13561672-ecec-49ba-8618-8f7a3fcddb7e-kube-api-access-g2vl4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr\" (UID: \"13561672-ecec-49ba-8618-8f7a3fcddb7e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr" Oct 11 08:00:27 crc kubenswrapper[5016]: I1011 08:00:27.204541 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr" Oct 11 08:00:27 crc kubenswrapper[5016]: I1011 08:00:27.737034 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr"] Oct 11 08:00:27 crc kubenswrapper[5016]: W1011 08:00:27.741917 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13561672_ecec_49ba_8618_8f7a3fcddb7e.slice/crio-a0b101dedbf0c5305cd8d2eb06fce43ed7c4905ea855497355746eac045a7b36 WatchSource:0}: Error finding container a0b101dedbf0c5305cd8d2eb06fce43ed7c4905ea855497355746eac045a7b36: Status 404 returned error can't find the container with id a0b101dedbf0c5305cd8d2eb06fce43ed7c4905ea855497355746eac045a7b36 Oct 11 08:00:27 crc kubenswrapper[5016]: I1011 08:00:27.744666 5016 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 08:00:27 crc kubenswrapper[5016]: I1011 08:00:27.789820 5016 generic.go:334] "Generic (PLEG): container finished" podID="8ab2c11e-c631-4f54-8f27-51c6fed6f548" containerID="591ce38caf27b877456b9c43315807c9dfc5ad2383207e2198fdc105c2b7d6b5" exitCode=0 Oct 11 08:00:27 crc kubenswrapper[5016]: I1011 08:00:27.789891 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8ab2c11e-c631-4f54-8f27-51c6fed6f548","Type":"ContainerDied","Data":"591ce38caf27b877456b9c43315807c9dfc5ad2383207e2198fdc105c2b7d6b5"} Oct 11 08:00:27 crc kubenswrapper[5016]: I1011 08:00:27.792030 5016 generic.go:334] "Generic (PLEG): container finished" podID="1d694fc7-1470-43de-a417-fe670e0bace9" containerID="bd35c1af0aad80e9aa00071c5f5fd2f27bed560d7c86b97879f1a78c299a61ae" exitCode=0 Oct 11 08:00:27 crc kubenswrapper[5016]: I1011 08:00:27.792100 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1d694fc7-1470-43de-a417-fe670e0bace9","Type":"ContainerDied","Data":"bd35c1af0aad80e9aa00071c5f5fd2f27bed560d7c86b97879f1a78c299a61ae"} Oct 11 08:00:27 crc kubenswrapper[5016]: I1011 08:00:27.794052 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr" event={"ID":"13561672-ecec-49ba-8618-8f7a3fcddb7e","Type":"ContainerStarted","Data":"a0b101dedbf0c5305cd8d2eb06fce43ed7c4905ea855497355746eac045a7b36"} Oct 11 08:00:28 crc kubenswrapper[5016]: I1011 08:00:28.807104 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8ab2c11e-c631-4f54-8f27-51c6fed6f548","Type":"ContainerStarted","Data":"9496b4cfaf9bc921e53912ebadc2ce771cd0934142492f5d10735bca4ba19f6c"} Oct 11 08:00:28 crc kubenswrapper[5016]: I1011 08:00:28.808615 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 11 08:00:28 crc kubenswrapper[5016]: I1011 08:00:28.810514 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1d694fc7-1470-43de-a417-fe670e0bace9","Type":"ContainerStarted","Data":"c4c887b86f2289ce98cd3e38874c546ca137b28ab5e0abd2961397e6814fe2a8"} Oct 11 08:00:28 crc kubenswrapper[5016]: I1011 08:00:28.810778 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 11 08:00:28 crc kubenswrapper[5016]: I1011 08:00:28.833019 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.833002456 podStartE2EDuration="36.833002456s" podCreationTimestamp="2025-10-11 07:59:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 08:00:28.830984973 +0000 UTC m=+1216.731440909" watchObservedRunningTime="2025-10-11 08:00:28.833002456 +0000 UTC m=+1216.733458402" Oct 11 08:00:28 crc kubenswrapper[5016]: I1011 08:00:28.871957 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.871938876 podStartE2EDuration="36.871938876s" podCreationTimestamp="2025-10-11 07:59:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 08:00:28.86528764 +0000 UTC m=+1216.765743586" watchObservedRunningTime="2025-10-11 08:00:28.871938876 +0000 UTC m=+1216.772394822" Oct 11 08:00:37 crc kubenswrapper[5016]: I1011 08:00:37.121982 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 08:00:37 crc kubenswrapper[5016]: I1011 08:00:37.122292 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 08:00:37 crc kubenswrapper[5016]: I1011 08:00:37.900341 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr" event={"ID":"13561672-ecec-49ba-8618-8f7a3fcddb7e","Type":"ContainerStarted","Data":"2120ad966597ebb2bb54c41d8407a44dd89fb77c469e4d21ab4019e5dadf05ff"} Oct 11 08:00:37 crc kubenswrapper[5016]: I1011 08:00:37.928457 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr" podStartSLOduration=2.55912723 podStartE2EDuration="11.928429363s" podCreationTimestamp="2025-10-11 08:00:26 +0000 UTC" firstStartedPulling="2025-10-11 08:00:27.744385474 +0000 UTC m=+1215.644841420" lastFinishedPulling="2025-10-11 08:00:37.113687597 +0000 UTC m=+1225.014143553" observedRunningTime="2025-10-11 08:00:37.918341816 +0000 UTC m=+1225.818797803" watchObservedRunningTime="2025-10-11 08:00:37.928429363 +0000 UTC m=+1225.828885329" Oct 11 08:00:42 crc kubenswrapper[5016]: I1011 08:00:42.841030 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 11 08:00:43 crc kubenswrapper[5016]: I1011 08:00:43.147035 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 11 08:00:49 crc kubenswrapper[5016]: I1011 08:00:49.010204 5016 generic.go:334] "Generic (PLEG): container finished" podID="13561672-ecec-49ba-8618-8f7a3fcddb7e" containerID="2120ad966597ebb2bb54c41d8407a44dd89fb77c469e4d21ab4019e5dadf05ff" exitCode=0 Oct 11 08:00:49 crc kubenswrapper[5016]: I1011 08:00:49.010243 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr" 
event={"ID":"13561672-ecec-49ba-8618-8f7a3fcddb7e","Type":"ContainerDied","Data":"2120ad966597ebb2bb54c41d8407a44dd89fb77c469e4d21ab4019e5dadf05ff"} Oct 11 08:00:50 crc kubenswrapper[5016]: I1011 08:00:50.427147 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr" Oct 11 08:00:50 crc kubenswrapper[5016]: I1011 08:00:50.548731 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13561672-ecec-49ba-8618-8f7a3fcddb7e-repo-setup-combined-ca-bundle\") pod \"13561672-ecec-49ba-8618-8f7a3fcddb7e\" (UID: \"13561672-ecec-49ba-8618-8f7a3fcddb7e\") " Oct 11 08:00:50 crc kubenswrapper[5016]: I1011 08:00:50.549162 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13561672-ecec-49ba-8618-8f7a3fcddb7e-ssh-key\") pod \"13561672-ecec-49ba-8618-8f7a3fcddb7e\" (UID: \"13561672-ecec-49ba-8618-8f7a3fcddb7e\") " Oct 11 08:00:50 crc kubenswrapper[5016]: I1011 08:00:50.549233 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13561672-ecec-49ba-8618-8f7a3fcddb7e-inventory\") pod \"13561672-ecec-49ba-8618-8f7a3fcddb7e\" (UID: \"13561672-ecec-49ba-8618-8f7a3fcddb7e\") " Oct 11 08:00:50 crc kubenswrapper[5016]: I1011 08:00:50.549269 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2vl4\" (UniqueName: \"kubernetes.io/projected/13561672-ecec-49ba-8618-8f7a3fcddb7e-kube-api-access-g2vl4\") pod \"13561672-ecec-49ba-8618-8f7a3fcddb7e\" (UID: \"13561672-ecec-49ba-8618-8f7a3fcddb7e\") " Oct 11 08:00:50 crc kubenswrapper[5016]: I1011 08:00:50.554861 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13561672-ecec-49ba-8618-8f7a3fcddb7e-kube-api-access-g2vl4" (OuterVolumeSpecName: "kube-api-access-g2vl4") pod "13561672-ecec-49ba-8618-8f7a3fcddb7e" (UID: "13561672-ecec-49ba-8618-8f7a3fcddb7e"). InnerVolumeSpecName "kube-api-access-g2vl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:00:50 crc kubenswrapper[5016]: I1011 08:00:50.555930 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13561672-ecec-49ba-8618-8f7a3fcddb7e-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "13561672-ecec-49ba-8618-8f7a3fcddb7e" (UID: "13561672-ecec-49ba-8618-8f7a3fcddb7e"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:00:50 crc kubenswrapper[5016]: I1011 08:00:50.582911 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13561672-ecec-49ba-8618-8f7a3fcddb7e-inventory" (OuterVolumeSpecName: "inventory") pod "13561672-ecec-49ba-8618-8f7a3fcddb7e" (UID: "13561672-ecec-49ba-8618-8f7a3fcddb7e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:00:50 crc kubenswrapper[5016]: I1011 08:00:50.588285 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13561672-ecec-49ba-8618-8f7a3fcddb7e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "13561672-ecec-49ba-8618-8f7a3fcddb7e" (UID: "13561672-ecec-49ba-8618-8f7a3fcddb7e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:00:50 crc kubenswrapper[5016]: I1011 08:00:50.651212 5016 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13561672-ecec-49ba-8618-8f7a3fcddb7e-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 08:00:50 crc kubenswrapper[5016]: I1011 08:00:50.651243 5016 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13561672-ecec-49ba-8618-8f7a3fcddb7e-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 11 08:00:50 crc kubenswrapper[5016]: I1011 08:00:50.651255 5016 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13561672-ecec-49ba-8618-8f7a3fcddb7e-inventory\") on node \"crc\" DevicePath \"\"" Oct 11 08:00:50 crc kubenswrapper[5016]: I1011 08:00:50.651263 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2vl4\" (UniqueName: \"kubernetes.io/projected/13561672-ecec-49ba-8618-8f7a3fcddb7e-kube-api-access-g2vl4\") on node \"crc\" DevicePath \"\"" Oct 11 08:00:51 crc kubenswrapper[5016]: I1011 08:00:51.034843 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr" event={"ID":"13561672-ecec-49ba-8618-8f7a3fcddb7e","Type":"ContainerDied","Data":"a0b101dedbf0c5305cd8d2eb06fce43ed7c4905ea855497355746eac045a7b36"} Oct 11 08:00:51 crc kubenswrapper[5016]: I1011 08:00:51.034892 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0b101dedbf0c5305cd8d2eb06fce43ed7c4905ea855497355746eac045a7b36" Oct 11 08:00:51 crc kubenswrapper[5016]: I1011 08:00:51.034904 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr" Oct 11 08:00:51 crc kubenswrapper[5016]: I1011 08:00:51.156815 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj"] Oct 11 08:00:51 crc kubenswrapper[5016]: E1011 08:00:51.158711 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13561672-ecec-49ba-8618-8f7a3fcddb7e" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 11 08:00:51 crc kubenswrapper[5016]: I1011 08:00:51.172107 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="13561672-ecec-49ba-8618-8f7a3fcddb7e" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 11 08:00:51 crc kubenswrapper[5016]: I1011 08:00:51.172589 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="13561672-ecec-49ba-8618-8f7a3fcddb7e" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 11 08:00:51 crc kubenswrapper[5016]: I1011 08:00:51.173353 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj"] Oct 11 08:00:51 crc kubenswrapper[5016]: I1011 08:00:51.173814 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj" Oct 11 08:00:51 crc kubenswrapper[5016]: I1011 08:00:51.175350 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l8l9k" Oct 11 08:00:51 crc kubenswrapper[5016]: I1011 08:00:51.175565 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 08:00:51 crc kubenswrapper[5016]: I1011 08:00:51.175605 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 11 08:00:51 crc kubenswrapper[5016]: I1011 08:00:51.175918 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 08:00:51 crc kubenswrapper[5016]: E1011 08:00:51.257921 5016 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13561672_ecec_49ba_8618_8f7a3fcddb7e.slice\": RecentStats: unable to find data in memory cache]" Oct 11 08:00:51 crc kubenswrapper[5016]: I1011 08:00:51.266140 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da5ee152-38b6-41b3-8c8c-c1051b5621f5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj\" (UID: \"da5ee152-38b6-41b3-8c8c-c1051b5621f5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj" Oct 11 08:00:51 crc kubenswrapper[5016]: I1011 08:00:51.266182 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da5ee152-38b6-41b3-8c8c-c1051b5621f5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj\" (UID: \"da5ee152-38b6-41b3-8c8c-c1051b5621f5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj" Oct 11 08:00:51 crc kubenswrapper[5016]: I1011 08:00:51.267283 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/da5ee152-38b6-41b3-8c8c-c1051b5621f5-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj\" (UID: \"da5ee152-38b6-41b3-8c8c-c1051b5621f5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj" Oct 11 08:00:51 crc kubenswrapper[5016]: I1011 08:00:51.267326 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kxtv\" (UniqueName: \"kubernetes.io/projected/da5ee152-38b6-41b3-8c8c-c1051b5621f5-kube-api-access-2kxtv\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj\" (UID: \"da5ee152-38b6-41b3-8c8c-c1051b5621f5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj" Oct 11 08:00:51 crc kubenswrapper[5016]: I1011 08:00:51.370049 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da5ee152-38b6-41b3-8c8c-c1051b5621f5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj\" (UID: \"da5ee152-38b6-41b3-8c8c-c1051b5621f5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj" Oct 11 08:00:51 crc kubenswrapper[5016]: I1011 08:00:51.370440 5016 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da5ee152-38b6-41b3-8c8c-c1051b5621f5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj\" (UID: \"da5ee152-38b6-41b3-8c8c-c1051b5621f5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj" Oct 11 08:00:51 crc kubenswrapper[5016]: I1011 08:00:51.370508 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/da5ee152-38b6-41b3-8c8c-c1051b5621f5-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj\" (UID: \"da5ee152-38b6-41b3-8c8c-c1051b5621f5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj" Oct 11 08:00:51 crc kubenswrapper[5016]: I1011 08:00:51.370528 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kxtv\" (UniqueName: \"kubernetes.io/projected/da5ee152-38b6-41b3-8c8c-c1051b5621f5-kube-api-access-2kxtv\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj\" (UID: \"da5ee152-38b6-41b3-8c8c-c1051b5621f5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj" Oct 11 08:00:51 crc kubenswrapper[5016]: I1011 08:00:51.374351 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da5ee152-38b6-41b3-8c8c-c1051b5621f5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj\" (UID: \"da5ee152-38b6-41b3-8c8c-c1051b5621f5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj" Oct 11 08:00:51 crc kubenswrapper[5016]: I1011 08:00:51.375769 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/da5ee152-38b6-41b3-8c8c-c1051b5621f5-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj\" (UID: \"da5ee152-38b6-41b3-8c8c-c1051b5621f5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj" Oct 11 08:00:51 crc kubenswrapper[5016]: I1011 08:00:51.377170 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da5ee152-38b6-41b3-8c8c-c1051b5621f5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj\" (UID: \"da5ee152-38b6-41b3-8c8c-c1051b5621f5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj" Oct 11 08:00:51 crc kubenswrapper[5016]: I1011 08:00:51.387994 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kxtv\" (UniqueName: \"kubernetes.io/projected/da5ee152-38b6-41b3-8c8c-c1051b5621f5-kube-api-access-2kxtv\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj\" (UID: \"da5ee152-38b6-41b3-8c8c-c1051b5621f5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj" Oct 11 08:00:51 crc kubenswrapper[5016]: I1011 08:00:51.494225 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj" Oct 11 08:00:52 crc kubenswrapper[5016]: I1011 08:00:52.056615 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj"] Oct 11 08:00:52 crc kubenswrapper[5016]: W1011 08:00:52.066018 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda5ee152_38b6_41b3_8c8c_c1051b5621f5.slice/crio-1fdf0e7fa91546d2c32b8921b05093af92fbf5e36cda0277f804bfbeb140ea0d WatchSource:0}: Error finding container 1fdf0e7fa91546d2c32b8921b05093af92fbf5e36cda0277f804bfbeb140ea0d: Status 404 returned error can't find the container with id 1fdf0e7fa91546d2c32b8921b05093af92fbf5e36cda0277f804bfbeb140ea0d Oct 11 08:00:53 crc kubenswrapper[5016]: I1011 08:00:53.054729 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj" event={"ID":"da5ee152-38b6-41b3-8c8c-c1051b5621f5","Type":"ContainerStarted","Data":"06df628d62c66d31a62667935d049e242739dee18059b3a6dd107b701600be1d"} Oct 11 08:00:53 crc kubenswrapper[5016]: I1011 08:00:53.055106 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj" event={"ID":"da5ee152-38b6-41b3-8c8c-c1051b5621f5","Type":"ContainerStarted","Data":"1fdf0e7fa91546d2c32b8921b05093af92fbf5e36cda0277f804bfbeb140ea0d"} Oct 11 08:00:53 crc kubenswrapper[5016]: I1011 08:00:53.080320 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj" podStartSLOduration=1.5911172649999998 podStartE2EDuration="2.080296712s" podCreationTimestamp="2025-10-11 08:00:51 +0000 UTC" firstStartedPulling="2025-10-11 08:00:52.070763932 +0000 UTC m=+1239.971219878" lastFinishedPulling="2025-10-11 08:00:52.559943379 +0000 UTC m=+1240.460399325" observedRunningTime="2025-10-11 08:00:53.070745178 +0000 UTC m=+1240.971201134" watchObservedRunningTime="2025-10-11 08:00:53.080296712 +0000 UTC m=+1240.980752678" Oct 11 08:01:00 crc kubenswrapper[5016]: I1011 08:01:00.146177 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29336161-ldq74"] Oct 11 08:01:00 crc kubenswrapper[5016]: I1011 08:01:00.148565 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29336161-ldq74" Oct 11 08:01:00 crc kubenswrapper[5016]: I1011 08:01:00.162830 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29336161-ldq74"] Oct 11 08:01:00 crc kubenswrapper[5016]: I1011 08:01:00.237739 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26b2ff87-f55d-42c6-8bfc-19b73cfe7582-config-data\") pod \"keystone-cron-29336161-ldq74\" (UID: \"26b2ff87-f55d-42c6-8bfc-19b73cfe7582\") " pod="openstack/keystone-cron-29336161-ldq74" Oct 11 08:01:00 crc kubenswrapper[5016]: I1011 08:01:00.237806 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6h29\" (UniqueName: \"kubernetes.io/projected/26b2ff87-f55d-42c6-8bfc-19b73cfe7582-kube-api-access-j6h29\") pod \"keystone-cron-29336161-ldq74\" (UID: \"26b2ff87-f55d-42c6-8bfc-19b73cfe7582\") " pod="openstack/keystone-cron-29336161-ldq74" Oct 11 08:01:00 crc kubenswrapper[5016]: I1011 08:01:00.238091 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26b2ff87-f55d-42c6-8bfc-19b73cfe7582-fernet-keys\") pod \"keystone-cron-29336161-ldq74\" (UID: \"26b2ff87-f55d-42c6-8bfc-19b73cfe7582\") " pod="openstack/keystone-cron-29336161-ldq74" Oct 11 08:01:00 crc kubenswrapper[5016]: I1011 08:01:00.238228 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26b2ff87-f55d-42c6-8bfc-19b73cfe7582-combined-ca-bundle\") pod \"keystone-cron-29336161-ldq74\" (UID: \"26b2ff87-f55d-42c6-8bfc-19b73cfe7582\") " pod="openstack/keystone-cron-29336161-ldq74" Oct 11 08:01:00 crc kubenswrapper[5016]: I1011 08:01:00.340590 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26b2ff87-f55d-42c6-8bfc-19b73cfe7582-config-data\") pod \"keystone-cron-29336161-ldq74\" (UID: \"26b2ff87-f55d-42c6-8bfc-19b73cfe7582\") " pod="openstack/keystone-cron-29336161-ldq74" Oct 11 08:01:00 crc kubenswrapper[5016]: I1011 08:01:00.340752 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6h29\" (UniqueName: \"kubernetes.io/projected/26b2ff87-f55d-42c6-8bfc-19b73cfe7582-kube-api-access-j6h29\") pod \"keystone-cron-29336161-ldq74\" (UID: \"26b2ff87-f55d-42c6-8bfc-19b73cfe7582\") " pod="openstack/keystone-cron-29336161-ldq74" Oct 11 08:01:00 crc kubenswrapper[5016]: I1011 08:01:00.340887 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26b2ff87-f55d-42c6-8bfc-19b73cfe7582-fernet-keys\") pod \"keystone-cron-29336161-ldq74\" (UID: \"26b2ff87-f55d-42c6-8bfc-19b73cfe7582\") " pod="openstack/keystone-cron-29336161-ldq74" Oct 11 08:01:00 crc kubenswrapper[5016]: I1011 08:01:00.340954 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26b2ff87-f55d-42c6-8bfc-19b73cfe7582-combined-ca-bundle\") pod \"keystone-cron-29336161-ldq74\" (UID: \"26b2ff87-f55d-42c6-8bfc-19b73cfe7582\") " pod="openstack/keystone-cron-29336161-ldq74" Oct 11 08:01:00 crc kubenswrapper[5016]: I1011 08:01:00.353696 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26b2ff87-f55d-42c6-8bfc-19b73cfe7582-config-data\") pod \"keystone-cron-29336161-ldq74\" (UID: \"26b2ff87-f55d-42c6-8bfc-19b73cfe7582\") " pod="openstack/keystone-cron-29336161-ldq74" Oct 11 08:01:00 crc kubenswrapper[5016]: I1011 08:01:00.353738 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26b2ff87-f55d-42c6-8bfc-19b73cfe7582-combined-ca-bundle\") pod \"keystone-cron-29336161-ldq74\" (UID: \"26b2ff87-f55d-42c6-8bfc-19b73cfe7582\") " pod="openstack/keystone-cron-29336161-ldq74" Oct 11 08:01:00 crc kubenswrapper[5016]: I1011 08:01:00.357780 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26b2ff87-f55d-42c6-8bfc-19b73cfe7582-fernet-keys\") pod \"keystone-cron-29336161-ldq74\" (UID: \"26b2ff87-f55d-42c6-8bfc-19b73cfe7582\") " pod="openstack/keystone-cron-29336161-ldq74" Oct 11 08:01:00 crc kubenswrapper[5016]: I1011 08:01:00.360805 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6h29\" (UniqueName: \"kubernetes.io/projected/26b2ff87-f55d-42c6-8bfc-19b73cfe7582-kube-api-access-j6h29\") pod \"keystone-cron-29336161-ldq74\" (UID: \"26b2ff87-f55d-42c6-8bfc-19b73cfe7582\") " pod="openstack/keystone-cron-29336161-ldq74" Oct 11 08:01:00 crc kubenswrapper[5016]: I1011 08:01:00.468996 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29336161-ldq74" Oct 11 08:01:01 crc kubenswrapper[5016]: I1011 08:01:01.510766 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29336161-ldq74"] Oct 11 08:01:02 crc kubenswrapper[5016]: I1011 08:01:02.148534 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29336161-ldq74" event={"ID":"26b2ff87-f55d-42c6-8bfc-19b73cfe7582","Type":"ContainerStarted","Data":"ebabd360678849a2be45305ec8fca59a811ec3c1db2c3b7cf3c0a3d5a0f87457"} Oct 11 08:01:02 crc kubenswrapper[5016]: I1011 08:01:02.149093 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29336161-ldq74" event={"ID":"26b2ff87-f55d-42c6-8bfc-19b73cfe7582","Type":"ContainerStarted","Data":"202b24e83ab2bc58aac8a0a3816b1702639dbb2dd4fa9f68dd3f6a8e6a4aa1d2"} Oct 11 08:01:02 crc kubenswrapper[5016]: I1011 08:01:02.184395 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29336161-ldq74" podStartSLOduration=2.184377889 podStartE2EDuration="2.184377889s" podCreationTimestamp="2025-10-11 08:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 08:01:02.174834176 +0000 UTC m=+1250.075290132" watchObservedRunningTime="2025-10-11 08:01:02.184377889 +0000 UTC m=+1250.084833835" Oct 11 08:01:04 crc kubenswrapper[5016]: I1011 08:01:04.183975 5016 generic.go:334] "Generic (PLEG): container finished" podID="26b2ff87-f55d-42c6-8bfc-19b73cfe7582" containerID="ebabd360678849a2be45305ec8fca59a811ec3c1db2c3b7cf3c0a3d5a0f87457" exitCode=0 Oct 11 08:01:04 crc kubenswrapper[5016]: I1011 08:01:04.184119 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29336161-ldq74" event={"ID":"26b2ff87-f55d-42c6-8bfc-19b73cfe7582","Type":"ContainerDied","Data":"ebabd360678849a2be45305ec8fca59a811ec3c1db2c3b7cf3c0a3d5a0f87457"} Oct 11 08:01:05 crc kubenswrapper[5016]: 
I1011 08:01:05.512091 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29336161-ldq74" Oct 11 08:01:05 crc kubenswrapper[5016]: I1011 08:01:05.653045 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26b2ff87-f55d-42c6-8bfc-19b73cfe7582-config-data\") pod \"26b2ff87-f55d-42c6-8bfc-19b73cfe7582\" (UID: \"26b2ff87-f55d-42c6-8bfc-19b73cfe7582\") " Oct 11 08:01:05 crc kubenswrapper[5016]: I1011 08:01:05.653155 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26b2ff87-f55d-42c6-8bfc-19b73cfe7582-combined-ca-bundle\") pod \"26b2ff87-f55d-42c6-8bfc-19b73cfe7582\" (UID: \"26b2ff87-f55d-42c6-8bfc-19b73cfe7582\") " Oct 11 08:01:05 crc kubenswrapper[5016]: I1011 08:01:05.653186 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6h29\" (UniqueName: \"kubernetes.io/projected/26b2ff87-f55d-42c6-8bfc-19b73cfe7582-kube-api-access-j6h29\") pod \"26b2ff87-f55d-42c6-8bfc-19b73cfe7582\" (UID: \"26b2ff87-f55d-42c6-8bfc-19b73cfe7582\") " Oct 11 08:01:05 crc kubenswrapper[5016]: I1011 08:01:05.653294 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26b2ff87-f55d-42c6-8bfc-19b73cfe7582-fernet-keys\") pod \"26b2ff87-f55d-42c6-8bfc-19b73cfe7582\" (UID: \"26b2ff87-f55d-42c6-8bfc-19b73cfe7582\") " Oct 11 08:01:05 crc kubenswrapper[5016]: I1011 08:01:05.659500 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26b2ff87-f55d-42c6-8bfc-19b73cfe7582-kube-api-access-j6h29" (OuterVolumeSpecName: "kube-api-access-j6h29") pod "26b2ff87-f55d-42c6-8bfc-19b73cfe7582" (UID: "26b2ff87-f55d-42c6-8bfc-19b73cfe7582"). InnerVolumeSpecName "kube-api-access-j6h29". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:01:05 crc kubenswrapper[5016]: I1011 08:01:05.659952 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26b2ff87-f55d-42c6-8bfc-19b73cfe7582-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "26b2ff87-f55d-42c6-8bfc-19b73cfe7582" (UID: "26b2ff87-f55d-42c6-8bfc-19b73cfe7582"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:01:05 crc kubenswrapper[5016]: I1011 08:01:05.703988 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26b2ff87-f55d-42c6-8bfc-19b73cfe7582-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26b2ff87-f55d-42c6-8bfc-19b73cfe7582" (UID: "26b2ff87-f55d-42c6-8bfc-19b73cfe7582"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:01:05 crc kubenswrapper[5016]: I1011 08:01:05.755112 5016 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/26b2ff87-f55d-42c6-8bfc-19b73cfe7582-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 11 08:01:05 crc kubenswrapper[5016]: I1011 08:01:05.755156 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26b2ff87-f55d-42c6-8bfc-19b73cfe7582-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 08:01:05 crc kubenswrapper[5016]: I1011 08:01:05.755169 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6h29\" (UniqueName: \"kubernetes.io/projected/26b2ff87-f55d-42c6-8bfc-19b73cfe7582-kube-api-access-j6h29\") on node \"crc\" DevicePath \"\"" Oct 11 08:01:05 crc kubenswrapper[5016]: I1011 08:01:05.760865 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26b2ff87-f55d-42c6-8bfc-19b73cfe7582-config-data" (OuterVolumeSpecName: "config-data") pod "26b2ff87-f55d-42c6-8bfc-19b73cfe7582" (UID: "26b2ff87-f55d-42c6-8bfc-19b73cfe7582"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:01:05 crc kubenswrapper[5016]: I1011 08:01:05.857020 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26b2ff87-f55d-42c6-8bfc-19b73cfe7582-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 08:01:06 crc kubenswrapper[5016]: I1011 08:01:06.203340 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29336161-ldq74" event={"ID":"26b2ff87-f55d-42c6-8bfc-19b73cfe7582","Type":"ContainerDied","Data":"202b24e83ab2bc58aac8a0a3816b1702639dbb2dd4fa9f68dd3f6a8e6a4aa1d2"} Oct 11 08:01:06 crc kubenswrapper[5016]: I1011 08:01:06.203819 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="202b24e83ab2bc58aac8a0a3816b1702639dbb2dd4fa9f68dd3f6a8e6a4aa1d2" Oct 11 08:01:06 crc kubenswrapper[5016]: I1011 08:01:06.203427 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29336161-ldq74" Oct 11 08:01:07 crc kubenswrapper[5016]: I1011 08:01:07.122751 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 08:01:07 crc kubenswrapper[5016]: I1011 08:01:07.122848 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 08:01:37 crc kubenswrapper[5016]: I1011 08:01:37.122361 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 08:01:37 crc kubenswrapper[5016]: I1011 08:01:37.123019 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 08:01:37 crc kubenswrapper[5016]: I1011 08:01:37.123071 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 08:01:37 crc kubenswrapper[5016]: I1011 08:01:37.123912 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6e27921f9485ad7dd5682c9472508ee14b957ef603b8b328e13450c313534ce6"} pod="openshift-machine-config-operator/machine-config-daemon-49bvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 11 08:01:37 crc kubenswrapper[5016]: I1011 08:01:37.124000 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" containerID="cri-o://6e27921f9485ad7dd5682c9472508ee14b957ef603b8b328e13450c313534ce6" gracePeriod=600 Oct 11 08:01:37 crc kubenswrapper[5016]: I1011 08:01:37.142467 5016 scope.go:117] "RemoveContainer" containerID="5342842da9c803c984b029f42bd27bd9d76ed7f9e154d5bc6f083a318c90ddbe" Oct 11 08:01:37 crc kubenswrapper[5016]: I1011 08:01:37.192722 5016 scope.go:117] "RemoveContainer" containerID="645c74cd4d074f2068ec7e178fb1cf4d4773467650584d85382bef4d15b5a11b" Oct 11 08:01:37 crc kubenswrapper[5016]: I1011 08:01:37.214860 5016 scope.go:117] "RemoveContainer" containerID="e86c6cf4194f1a8aed5902496893af469e657fde65811a38c15d004bf1604745" Oct 11 08:01:37 crc kubenswrapper[5016]: I1011 08:01:37.530961 5016 generic.go:334] "Generic (PLEG): container finished" podID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerID="6e27921f9485ad7dd5682c9472508ee14b957ef603b8b328e13450c313534ce6" exitCode=0 Oct 11 08:01:37 crc kubenswrapper[5016]: I1011 08:01:37.531032 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerDied","Data":"6e27921f9485ad7dd5682c9472508ee14b957ef603b8b328e13450c313534ce6"} Oct 11 08:01:37 crc kubenswrapper[5016]: I1011 08:01:37.531441 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9"} Oct 11 08:01:37 crc kubenswrapper[5016]: I1011 08:01:37.531478 5016 scope.go:117] "RemoveContainer" containerID="e0beaf8f3888f3224e77b273d2e7d0fa1af0b12ba8a490fbd46da42f1ed82abe" Oct 11 08:02:37 crc kubenswrapper[5016]: I1011 08:02:37.282040 5016 scope.go:117] "RemoveContainer" containerID="ab81df700d429fccd65853a35c8ef3e51e5a83e1ea88d2a2880ed7c7c1be3b95" Oct 11 08:03:37 crc kubenswrapper[5016]: I1011 08:03:37.121767 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 08:03:37 crc kubenswrapper[5016]: I1011 08:03:37.122276 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 08:03:56 crc kubenswrapper[5016]: I1011 08:03:56.950002 5016 generic.go:334] "Generic (PLEG): container finished" podID="da5ee152-38b6-41b3-8c8c-c1051b5621f5" containerID="06df628d62c66d31a62667935d049e242739dee18059b3a6dd107b701600be1d" exitCode=0 Oct 11 08:03:56 crc kubenswrapper[5016]: I1011 08:03:56.950091 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj" event={"ID":"da5ee152-38b6-41b3-8c8c-c1051b5621f5","Type":"ContainerDied","Data":"06df628d62c66d31a62667935d049e242739dee18059b3a6dd107b701600be1d"} Oct 11 08:03:58 crc kubenswrapper[5016]: I1011 08:03:58.418574 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj" Oct 11 08:03:58 crc kubenswrapper[5016]: I1011 08:03:58.567847 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kxtv\" (UniqueName: \"kubernetes.io/projected/da5ee152-38b6-41b3-8c8c-c1051b5621f5-kube-api-access-2kxtv\") pod \"da5ee152-38b6-41b3-8c8c-c1051b5621f5\" (UID: \"da5ee152-38b6-41b3-8c8c-c1051b5621f5\") " Oct 11 08:03:58 crc kubenswrapper[5016]: I1011 08:03:58.568044 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da5ee152-38b6-41b3-8c8c-c1051b5621f5-bootstrap-combined-ca-bundle\") pod \"da5ee152-38b6-41b3-8c8c-c1051b5621f5\" (UID: \"da5ee152-38b6-41b3-8c8c-c1051b5621f5\") " Oct 11 08:03:58 crc kubenswrapper[5016]: I1011 08:03:58.568105 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/da5ee152-38b6-41b3-8c8c-c1051b5621f5-ssh-key\") pod \"da5ee152-38b6-41b3-8c8c-c1051b5621f5\" (UID: \"da5ee152-38b6-41b3-8c8c-c1051b5621f5\") " Oct 11 08:03:58 crc kubenswrapper[5016]: I1011 08:03:58.568179 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da5ee152-38b6-41b3-8c8c-c1051b5621f5-inventory\") pod \"da5ee152-38b6-41b3-8c8c-c1051b5621f5\" (UID: \"da5ee152-38b6-41b3-8c8c-c1051b5621f5\") " Oct 11 08:03:58 crc kubenswrapper[5016]: I1011 08:03:58.573896 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da5ee152-38b6-41b3-8c8c-c1051b5621f5-kube-api-access-2kxtv" (OuterVolumeSpecName: "kube-api-access-2kxtv") pod "da5ee152-38b6-41b3-8c8c-c1051b5621f5" (UID: "da5ee152-38b6-41b3-8c8c-c1051b5621f5"). InnerVolumeSpecName "kube-api-access-2kxtv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:03:58 crc kubenswrapper[5016]: I1011 08:03:58.574452 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da5ee152-38b6-41b3-8c8c-c1051b5621f5-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "da5ee152-38b6-41b3-8c8c-c1051b5621f5" (UID: "da5ee152-38b6-41b3-8c8c-c1051b5621f5"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:03:58 crc kubenswrapper[5016]: I1011 08:03:58.594785 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da5ee152-38b6-41b3-8c8c-c1051b5621f5-inventory" (OuterVolumeSpecName: "inventory") pod "da5ee152-38b6-41b3-8c8c-c1051b5621f5" (UID: "da5ee152-38b6-41b3-8c8c-c1051b5621f5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:03:58 crc kubenswrapper[5016]: I1011 08:03:58.596362 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da5ee152-38b6-41b3-8c8c-c1051b5621f5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "da5ee152-38b6-41b3-8c8c-c1051b5621f5" (UID: "da5ee152-38b6-41b3-8c8c-c1051b5621f5"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:03:58 crc kubenswrapper[5016]: I1011 08:03:58.672029 5016 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/da5ee152-38b6-41b3-8c8c-c1051b5621f5-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 11 08:03:58 crc kubenswrapper[5016]: I1011 08:03:58.672124 5016 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/da5ee152-38b6-41b3-8c8c-c1051b5621f5-inventory\") on node \"crc\" DevicePath \"\"" Oct 11 08:03:58 crc kubenswrapper[5016]: I1011 08:03:58.672159 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kxtv\" (UniqueName: \"kubernetes.io/projected/da5ee152-38b6-41b3-8c8c-c1051b5621f5-kube-api-access-2kxtv\") on node \"crc\" DevicePath \"\"" Oct 11 08:03:58 crc kubenswrapper[5016]: I1011 08:03:58.672193 5016 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da5ee152-38b6-41b3-8c8c-c1051b5621f5-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 08:03:58 crc kubenswrapper[5016]: I1011 08:03:58.977036 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj" event={"ID":"da5ee152-38b6-41b3-8c8c-c1051b5621f5","Type":"ContainerDied","Data":"1fdf0e7fa91546d2c32b8921b05093af92fbf5e36cda0277f804bfbeb140ea0d"} Oct 11 08:03:58 crc kubenswrapper[5016]: I1011 08:03:58.977084 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fdf0e7fa91546d2c32b8921b05093af92fbf5e36cda0277f804bfbeb140ea0d" Oct 11 08:03:58 crc kubenswrapper[5016]: I1011 08:03:58.977175 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj" Oct 11 08:03:59 crc kubenswrapper[5016]: I1011 08:03:59.069878 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-stzhz"] Oct 11 08:03:59 crc kubenswrapper[5016]: E1011 08:03:59.070272 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b2ff87-f55d-42c6-8bfc-19b73cfe7582" containerName="keystone-cron" Oct 11 08:03:59 crc kubenswrapper[5016]: I1011 08:03:59.070293 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b2ff87-f55d-42c6-8bfc-19b73cfe7582" containerName="keystone-cron" Oct 11 08:03:59 crc kubenswrapper[5016]: E1011 08:03:59.070316 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da5ee152-38b6-41b3-8c8c-c1051b5621f5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 11 08:03:59 crc kubenswrapper[5016]: I1011 08:03:59.070325 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="da5ee152-38b6-41b3-8c8c-c1051b5621f5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 11 08:03:59 crc kubenswrapper[5016]: I1011 08:03:59.070484 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="da5ee152-38b6-41b3-8c8c-c1051b5621f5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 11 08:03:59 crc kubenswrapper[5016]: I1011 08:03:59.070504 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="26b2ff87-f55d-42c6-8bfc-19b73cfe7582" containerName="keystone-cron" Oct 11 08:03:59 crc kubenswrapper[5016]: I1011 08:03:59.071127 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-stzhz" Oct 11 08:03:59 crc kubenswrapper[5016]: I1011 08:03:59.074075 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 08:03:59 crc kubenswrapper[5016]: I1011 08:03:59.074228 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l8l9k" Oct 11 08:03:59 crc kubenswrapper[5016]: I1011 08:03:59.074309 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 08:03:59 crc kubenswrapper[5016]: I1011 08:03:59.074379 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 11 08:03:59 crc kubenswrapper[5016]: I1011 08:03:59.086538 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-stzhz"] Oct 11 08:03:59 crc kubenswrapper[5016]: I1011 08:03:59.181969 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd5r4\" (UniqueName: \"kubernetes.io/projected/901099a9-9b33-4b7a-a393-25c97dff87b6-kube-api-access-dd5r4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-stzhz\" (UID: \"901099a9-9b33-4b7a-a393-25c97dff87b6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-stzhz" Oct 11 08:03:59 crc kubenswrapper[5016]: I1011 08:03:59.182031 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/901099a9-9b33-4b7a-a393-25c97dff87b6-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-stzhz\" (UID: \"901099a9-9b33-4b7a-a393-25c97dff87b6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-stzhz" Oct 11 08:03:59 crc kubenswrapper[5016]: I1011 08:03:59.182076 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/901099a9-9b33-4b7a-a393-25c97dff87b6-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-stzhz\" (UID: \"901099a9-9b33-4b7a-a393-25c97dff87b6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-stzhz" Oct 11 08:03:59 crc kubenswrapper[5016]: I1011 08:03:59.284207 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd5r4\" (UniqueName: \"kubernetes.io/projected/901099a9-9b33-4b7a-a393-25c97dff87b6-kube-api-access-dd5r4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-stzhz\" (UID: \"901099a9-9b33-4b7a-a393-25c97dff87b6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-stzhz" Oct 11 08:03:59 crc kubenswrapper[5016]: I1011 08:03:59.284302 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/901099a9-9b33-4b7a-a393-25c97dff87b6-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-stzhz\" (UID: \"901099a9-9b33-4b7a-a393-25c97dff87b6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-stzhz" Oct 11 08:03:59 crc kubenswrapper[5016]: I1011 08:03:59.284382 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/901099a9-9b33-4b7a-a393-25c97dff87b6-inventory\") 
pod \"configure-network-edpm-deployment-openstack-edpm-ipam-stzhz\" (UID: \"901099a9-9b33-4b7a-a393-25c97dff87b6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-stzhz" Oct 11 08:03:59 crc kubenswrapper[5016]: I1011 08:03:59.296368 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/901099a9-9b33-4b7a-a393-25c97dff87b6-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-stzhz\" (UID: \"901099a9-9b33-4b7a-a393-25c97dff87b6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-stzhz" Oct 11 08:03:59 crc kubenswrapper[5016]: I1011 08:03:59.298440 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/901099a9-9b33-4b7a-a393-25c97dff87b6-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-stzhz\" (UID: \"901099a9-9b33-4b7a-a393-25c97dff87b6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-stzhz" Oct 11 08:03:59 crc kubenswrapper[5016]: I1011 08:03:59.315026 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd5r4\" (UniqueName: \"kubernetes.io/projected/901099a9-9b33-4b7a-a393-25c97dff87b6-kube-api-access-dd5r4\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-stzhz\" (UID: \"901099a9-9b33-4b7a-a393-25c97dff87b6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-stzhz" Oct 11 08:03:59 crc kubenswrapper[5016]: I1011 08:03:59.386965 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-stzhz" Oct 11 08:03:59 crc kubenswrapper[5016]: I1011 08:03:59.935387 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-stzhz"] Oct 11 08:03:59 crc kubenswrapper[5016]: I1011 08:03:59.989425 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-stzhz" event={"ID":"901099a9-9b33-4b7a-a393-25c97dff87b6","Type":"ContainerStarted","Data":"4e3234fabafda291be1b8e711803503c4cff29b34d546af960050fc559189c90"} Oct 11 08:04:01 crc kubenswrapper[5016]: I1011 08:04:01.005547 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-stzhz" event={"ID":"901099a9-9b33-4b7a-a393-25c97dff87b6","Type":"ContainerStarted","Data":"b16f35fd4574311bece34fd97320002f83f216ff65a9373375a757a698e60bdb"} Oct 11 08:04:01 crc kubenswrapper[5016]: I1011 08:04:01.037871 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-stzhz" podStartSLOduration=1.517882983 podStartE2EDuration="2.037847263s" podCreationTimestamp="2025-10-11 08:03:59 +0000 UTC" firstStartedPulling="2025-10-11 08:03:59.948053745 +0000 UTC m=+1427.848509691" lastFinishedPulling="2025-10-11 08:04:00.468018025 +0000 UTC m=+1428.368473971" observedRunningTime="2025-10-11 08:04:01.030721685 +0000 UTC m=+1428.931177691" watchObservedRunningTime="2025-10-11 08:04:01.037847263 +0000 UTC m=+1428.938303219" Oct 11 08:04:07 crc kubenswrapper[5016]: I1011 08:04:07.122348 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
Oct 11 08:04:07 crc kubenswrapper[5016]: I1011 08:04:07.122348 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 11 08:04:07 crc kubenswrapper[5016]: I1011 08:04:07.123636 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 11 08:04:20 crc kubenswrapper[5016]: I1011 08:04:20.957714 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gbmgs"]
Oct 11 08:04:20 crc kubenswrapper[5016]: I1011 08:04:20.960483 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gbmgs"
Oct 11 08:04:20 crc kubenswrapper[5016]: I1011 08:04:20.974752 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gbmgs"]
Oct 11 08:04:21 crc kubenswrapper[5016]: I1011 08:04:21.057108 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xc8w\" (UniqueName: \"kubernetes.io/projected/bee6792c-24bd-4361-b534-7575a180f4f4-kube-api-access-2xc8w\") pod \"redhat-operators-gbmgs\" (UID: \"bee6792c-24bd-4361-b534-7575a180f4f4\") " pod="openshift-marketplace/redhat-operators-gbmgs"
Oct 11 08:04:21 crc kubenswrapper[5016]: I1011 08:04:21.057410 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bee6792c-24bd-4361-b534-7575a180f4f4-catalog-content\") pod \"redhat-operators-gbmgs\" (UID: \"bee6792c-24bd-4361-b534-7575a180f4f4\") " pod="openshift-marketplace/redhat-operators-gbmgs"
Oct 11 08:04:21 crc kubenswrapper[5016]: I1011 08:04:21.057476 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bee6792c-24bd-4361-b534-7575a180f4f4-utilities\") pod \"redhat-operators-gbmgs\" (UID: \"bee6792c-24bd-4361-b534-7575a180f4f4\") " pod="openshift-marketplace/redhat-operators-gbmgs"
Oct 11 08:04:21 crc kubenswrapper[5016]: I1011 08:04:21.158751 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bee6792c-24bd-4361-b534-7575a180f4f4-catalog-content\") pod \"redhat-operators-gbmgs\" (UID: \"bee6792c-24bd-4361-b534-7575a180f4f4\") " pod="openshift-marketplace/redhat-operators-gbmgs"
Oct 11 08:04:21 crc kubenswrapper[5016]: I1011 08:04:21.158834 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bee6792c-24bd-4361-b534-7575a180f4f4-utilities\") pod \"redhat-operators-gbmgs\" (UID: \"bee6792c-24bd-4361-b534-7575a180f4f4\") " pod="openshift-marketplace/redhat-operators-gbmgs"
Oct 11 08:04:21 crc kubenswrapper[5016]: I1011 08:04:21.158901 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xc8w\" (UniqueName: \"kubernetes.io/projected/bee6792c-24bd-4361-b534-7575a180f4f4-kube-api-access-2xc8w\") pod \"redhat-operators-gbmgs\" (UID: \"bee6792c-24bd-4361-b534-7575a180f4f4\") " pod="openshift-marketplace/redhat-operators-gbmgs"
Oct 11 08:04:21 crc kubenswrapper[5016]: I1011 08:04:21.159545 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bee6792c-24bd-4361-b534-7575a180f4f4-catalog-content\") pod \"redhat-operators-gbmgs\" (UID: \"bee6792c-24bd-4361-b534-7575a180f4f4\") " pod="openshift-marketplace/redhat-operators-gbmgs"
Oct 11 08:04:21 crc kubenswrapper[5016]: I1011 08:04:21.159589 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bee6792c-24bd-4361-b534-7575a180f4f4-utilities\") pod \"redhat-operators-gbmgs\" (UID: \"bee6792c-24bd-4361-b534-7575a180f4f4\") " pod="openshift-marketplace/redhat-operators-gbmgs"
Oct 11 08:04:21 crc kubenswrapper[5016]: I1011 08:04:21.184095 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xc8w\" (UniqueName: \"kubernetes.io/projected/bee6792c-24bd-4361-b534-7575a180f4f4-kube-api-access-2xc8w\") pod \"redhat-operators-gbmgs\" (UID: \"bee6792c-24bd-4361-b534-7575a180f4f4\") " pod="openshift-marketplace/redhat-operators-gbmgs"
Oct 11 08:04:21 crc kubenswrapper[5016]: I1011 08:04:21.288291 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gbmgs"
Oct 11 08:04:21 crc kubenswrapper[5016]: I1011 08:04:21.794731 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gbmgs"]
Oct 11 08:04:22 crc kubenswrapper[5016]: I1011 08:04:22.227631 5016 generic.go:334] "Generic (PLEG): container finished" podID="bee6792c-24bd-4361-b534-7575a180f4f4" containerID="a7bf308fa5d0e86f0c8b7ce5bd3892ef9ecc4951576eeba34d53ecd50846bdf3" exitCode=0
Oct 11 08:04:22 crc kubenswrapper[5016]: I1011 08:04:22.227839 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbmgs" event={"ID":"bee6792c-24bd-4361-b534-7575a180f4f4","Type":"ContainerDied","Data":"a7bf308fa5d0e86f0c8b7ce5bd3892ef9ecc4951576eeba34d53ecd50846bdf3"}
Oct 11 08:04:22 crc kubenswrapper[5016]: I1011 08:04:22.228194 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbmgs" event={"ID":"bee6792c-24bd-4361-b534-7575a180f4f4","Type":"ContainerStarted","Data":"56aafdcf930de06e2beecdb26726de54d46f413fe347e437a5b78f8722a3fc81"}
Oct 11 08:04:24 crc kubenswrapper[5016]: I1011 08:04:24.291860 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbmgs" event={"ID":"bee6792c-24bd-4361-b534-7575a180f4f4","Type":"ContainerStarted","Data":"fd046cb075b70cd3d099a0e4e950c17b0319792a98c8da5cfb7ef27475e0b420"}
Oct 11 08:04:25 crc kubenswrapper[5016]: I1011 08:04:25.303888 5016 generic.go:334] "Generic (PLEG): container finished" podID="bee6792c-24bd-4361-b534-7575a180f4f4" containerID="fd046cb075b70cd3d099a0e4e950c17b0319792a98c8da5cfb7ef27475e0b420" exitCode=0
Oct 11 08:04:25 crc kubenswrapper[5016]: I1011 08:04:25.303969 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbmgs" event={"ID":"bee6792c-24bd-4361-b534-7575a180f4f4","Type":"ContainerDied","Data":"fd046cb075b70cd3d099a0e4e950c17b0319792a98c8da5cfb7ef27475e0b420"}
Oct 11 08:04:26 crc kubenswrapper[5016]: I1011 08:04:26.316965 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbmgs" event={"ID":"bee6792c-24bd-4361-b534-7575a180f4f4","Type":"ContainerStarted","Data":"b278c199880c119c7111dbef703ce3c98d7ed37ffb33d08357d3619ba4ac7d59"}
Oct 11 08:04:26 crc kubenswrapper[5016]: I1011 08:04:26.350724 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gbmgs" podStartSLOduration=2.542370594 podStartE2EDuration="6.350705251s" podCreationTimestamp="2025-10-11 08:04:20 +0000 UTC" firstStartedPulling="2025-10-11 08:04:22.229937454 +0000 UTC m=+1450.130393410" lastFinishedPulling="2025-10-11 08:04:26.038272121 +0000 UTC m=+1453.938728067" observedRunningTime="2025-10-11 08:04:26.34911352 +0000 UTC m=+1454.249569476" watchObservedRunningTime="2025-10-11 08:04:26.350705251 +0000 UTC m=+1454.251161197"
Oct 11 08:04:31 crc kubenswrapper[5016]: I1011 08:04:31.288848 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gbmgs"
Oct 11 08:04:31 crc kubenswrapper[5016]: I1011 08:04:31.289402 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gbmgs"
Oct 11 08:04:31 crc kubenswrapper[5016]: I1011 08:04:31.340846 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gbmgs"
Oct 11 08:04:31 crc kubenswrapper[5016]: I1011 08:04:31.424158 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gbmgs"
Oct 11 08:04:31 crc kubenswrapper[5016]: I1011 08:04:31.583273 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gbmgs"]
Oct 11 08:04:33 crc kubenswrapper[5016]: I1011 08:04:33.394887 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gbmgs" podUID="bee6792c-24bd-4361-b534-7575a180f4f4" containerName="registry-server" containerID="cri-o://b278c199880c119c7111dbef703ce3c98d7ed37ffb33d08357d3619ba4ac7d59" gracePeriod=2
Oct 11 08:04:33 crc kubenswrapper[5016]: I1011 08:04:33.837389 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gbmgs"
Oct 11 08:04:33 crc kubenswrapper[5016]: I1011 08:04:33.920582 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bee6792c-24bd-4361-b534-7575a180f4f4-catalog-content\") pod \"bee6792c-24bd-4361-b534-7575a180f4f4\" (UID: \"bee6792c-24bd-4361-b534-7575a180f4f4\") "
Oct 11 08:04:33 crc kubenswrapper[5016]: I1011 08:04:33.920730 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xc8w\" (UniqueName: \"kubernetes.io/projected/bee6792c-24bd-4361-b534-7575a180f4f4-kube-api-access-2xc8w\") pod \"bee6792c-24bd-4361-b534-7575a180f4f4\" (UID: \"bee6792c-24bd-4361-b534-7575a180f4f4\") "
Oct 11 08:04:33 crc kubenswrapper[5016]: I1011 08:04:33.920797 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bee6792c-24bd-4361-b534-7575a180f4f4-utilities\") pod \"bee6792c-24bd-4361-b534-7575a180f4f4\" (UID: \"bee6792c-24bd-4361-b534-7575a180f4f4\") "
Oct 11 08:04:33 crc kubenswrapper[5016]: I1011 08:04:33.921905 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bee6792c-24bd-4361-b534-7575a180f4f4-utilities" (OuterVolumeSpecName: "utilities") pod "bee6792c-24bd-4361-b534-7575a180f4f4" (UID: "bee6792c-24bd-4361-b534-7575a180f4f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 08:04:33 crc kubenswrapper[5016]: I1011 08:04:33.926614 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bee6792c-24bd-4361-b534-7575a180f4f4-kube-api-access-2xc8w" (OuterVolumeSpecName: "kube-api-access-2xc8w") pod "bee6792c-24bd-4361-b534-7575a180f4f4" (UID: "bee6792c-24bd-4361-b534-7575a180f4f4"). InnerVolumeSpecName "kube-api-access-2xc8w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 08:04:34 crc kubenswrapper[5016]: I1011 08:04:34.015865 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bee6792c-24bd-4361-b534-7575a180f4f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bee6792c-24bd-4361-b534-7575a180f4f4" (UID: "bee6792c-24bd-4361-b534-7575a180f4f4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 08:04:34 crc kubenswrapper[5016]: I1011 08:04:34.023458 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xc8w\" (UniqueName: \"kubernetes.io/projected/bee6792c-24bd-4361-b534-7575a180f4f4-kube-api-access-2xc8w\") on node \"crc\" DevicePath \"\""
Oct 11 08:04:34 crc kubenswrapper[5016]: I1011 08:04:34.023487 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bee6792c-24bd-4361-b534-7575a180f4f4-utilities\") on node \"crc\" DevicePath \"\""
Oct 11 08:04:34 crc kubenswrapper[5016]: I1011 08:04:34.023500 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bee6792c-24bd-4361-b534-7575a180f4f4-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 11 08:04:34 crc kubenswrapper[5016]: I1011 08:04:34.405791 5016 generic.go:334] "Generic (PLEG): container finished" podID="bee6792c-24bd-4361-b534-7575a180f4f4" containerID="b278c199880c119c7111dbef703ce3c98d7ed37ffb33d08357d3619ba4ac7d59" exitCode=0
Oct 11 08:04:34 crc kubenswrapper[5016]: I1011 08:04:34.405839 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbmgs" event={"ID":"bee6792c-24bd-4361-b534-7575a180f4f4","Type":"ContainerDied","Data":"b278c199880c119c7111dbef703ce3c98d7ed37ffb33d08357d3619ba4ac7d59"}
Oct 11 08:04:34 crc kubenswrapper[5016]: I1011 08:04:34.405865 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gbmgs" event={"ID":"bee6792c-24bd-4361-b534-7575a180f4f4","Type":"ContainerDied","Data":"56aafdcf930de06e2beecdb26726de54d46f413fe347e437a5b78f8722a3fc81"}
Oct 11 08:04:34 crc kubenswrapper[5016]: I1011 08:04:34.405882 5016 scope.go:117] "RemoveContainer" containerID="b278c199880c119c7111dbef703ce3c98d7ed37ffb33d08357d3619ba4ac7d59"
Oct 11 08:04:34 crc kubenswrapper[5016]: I1011 08:04:34.405978 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gbmgs"
Oct 11 08:04:34 crc kubenswrapper[5016]: I1011 08:04:34.425814 5016 scope.go:117] "RemoveContainer" containerID="fd046cb075b70cd3d099a0e4e950c17b0319792a98c8da5cfb7ef27475e0b420"
Oct 11 08:04:34 crc kubenswrapper[5016]: I1011 08:04:34.453164 5016 scope.go:117] "RemoveContainer" containerID="a7bf308fa5d0e86f0c8b7ce5bd3892ef9ecc4951576eeba34d53ecd50846bdf3"
Oct 11 08:04:34 crc kubenswrapper[5016]: I1011 08:04:34.456730 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gbmgs"]
Oct 11 08:04:34 crc kubenswrapper[5016]: I1011 08:04:34.467128 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gbmgs"]
Oct 11 08:04:34 crc kubenswrapper[5016]: I1011 08:04:34.495116 5016 scope.go:117] "RemoveContainer" containerID="b278c199880c119c7111dbef703ce3c98d7ed37ffb33d08357d3619ba4ac7d59"
Oct 11 08:04:34 crc kubenswrapper[5016]: E1011 08:04:34.495718 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b278c199880c119c7111dbef703ce3c98d7ed37ffb33d08357d3619ba4ac7d59\": container with ID starting with b278c199880c119c7111dbef703ce3c98d7ed37ffb33d08357d3619ba4ac7d59 not found: ID does not exist" containerID="b278c199880c119c7111dbef703ce3c98d7ed37ffb33d08357d3619ba4ac7d59"
Oct 11 08:04:34 crc kubenswrapper[5016]: I1011 08:04:34.495777 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b278c199880c119c7111dbef703ce3c98d7ed37ffb33d08357d3619ba4ac7d59"} err="failed to get container status \"b278c199880c119c7111dbef703ce3c98d7ed37ffb33d08357d3619ba4ac7d59\": rpc error: code = NotFound desc = could not find container \"b278c199880c119c7111dbef703ce3c98d7ed37ffb33d08357d3619ba4ac7d59\": container with ID starting with b278c199880c119c7111dbef703ce3c98d7ed37ffb33d08357d3619ba4ac7d59 not found: ID does not exist"
Oct 11 08:04:34 crc kubenswrapper[5016]: I1011 08:04:34.495815 5016 scope.go:117] "RemoveContainer" containerID="fd046cb075b70cd3d099a0e4e950c17b0319792a98c8da5cfb7ef27475e0b420"
Oct 11 08:04:34 crc kubenswrapper[5016]: E1011 08:04:34.496142 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd046cb075b70cd3d099a0e4e950c17b0319792a98c8da5cfb7ef27475e0b420\": container with ID starting with fd046cb075b70cd3d099a0e4e950c17b0319792a98c8da5cfb7ef27475e0b420 not found: ID does not exist" containerID="fd046cb075b70cd3d099a0e4e950c17b0319792a98c8da5cfb7ef27475e0b420"
Oct 11 08:04:34 crc kubenswrapper[5016]: I1011 08:04:34.496187 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd046cb075b70cd3d099a0e4e950c17b0319792a98c8da5cfb7ef27475e0b420"} err="failed to get container status \"fd046cb075b70cd3d099a0e4e950c17b0319792a98c8da5cfb7ef27475e0b420\": rpc error: code = NotFound desc = could not find container \"fd046cb075b70cd3d099a0e4e950c17b0319792a98c8da5cfb7ef27475e0b420\": container with ID starting with fd046cb075b70cd3d099a0e4e950c17b0319792a98c8da5cfb7ef27475e0b420 not found: ID does not exist"
Oct 11 08:04:34 crc kubenswrapper[5016]: I1011 08:04:34.496215 5016 scope.go:117] "RemoveContainer" containerID="a7bf308fa5d0e86f0c8b7ce5bd3892ef9ecc4951576eeba34d53ecd50846bdf3"
Oct 11 08:04:34 crc kubenswrapper[5016]: E1011 08:04:34.496607 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7bf308fa5d0e86f0c8b7ce5bd3892ef9ecc4951576eeba34d53ecd50846bdf3\": container with ID starting with a7bf308fa5d0e86f0c8b7ce5bd3892ef9ecc4951576eeba34d53ecd50846bdf3 not found: ID does not exist" containerID="a7bf308fa5d0e86f0c8b7ce5bd3892ef9ecc4951576eeba34d53ecd50846bdf3"
Oct 11 08:04:34 crc kubenswrapper[5016]: I1011 08:04:34.496635 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7bf308fa5d0e86f0c8b7ce5bd3892ef9ecc4951576eeba34d53ecd50846bdf3"} err="failed to get container status \"a7bf308fa5d0e86f0c8b7ce5bd3892ef9ecc4951576eeba34d53ecd50846bdf3\": rpc error: code = NotFound desc = could not find container \"a7bf308fa5d0e86f0c8b7ce5bd3892ef9ecc4951576eeba34d53ecd50846bdf3\": container with ID starting with a7bf308fa5d0e86f0c8b7ce5bd3892ef9ecc4951576eeba34d53ecd50846bdf3 not found: ID does not exist"
Oct 11 08:04:35 crc kubenswrapper[5016]: I1011 08:04:35.153552 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bee6792c-24bd-4361-b534-7575a180f4f4" path="/var/lib/kubelet/pods/bee6792c-24bd-4361-b534-7575a180f4f4/volumes"
"RemoveContainer" containerID="7474524bf9085ab2077a7c937f6785b1af41d09f4b0f089baeaea009169d0795" Oct 11 08:04:37 crc kubenswrapper[5016]: I1011 08:04:37.412342 5016 scope.go:117] "RemoveContainer" containerID="cd2283546acdefe214fd3a94da11b1bd3c65710478adcdb13e2e78b9043496a9" Oct 11 08:04:37 crc kubenswrapper[5016]: I1011 08:04:37.437213 5016 scope.go:117] "RemoveContainer" containerID="c00af0dadf80771a39f4f37ea4041dc4bdb8131ca6f8cd7e7c56134e849167c8" Oct 11 08:04:37 crc kubenswrapper[5016]: I1011 08:04:37.443250 5016 generic.go:334] "Generic (PLEG): container finished" podID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerID="10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9" exitCode=0 Oct 11 08:04:37 crc kubenswrapper[5016]: I1011 08:04:37.443317 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerDied","Data":"10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9"} Oct 11 08:04:37 crc kubenswrapper[5016]: I1011 08:04:37.443361 5016 scope.go:117] "RemoveContainer" containerID="6e27921f9485ad7dd5682c9472508ee14b957ef603b8b328e13450c313534ce6" Oct 11 08:04:37 crc kubenswrapper[5016]: I1011 08:04:37.444285 5016 scope.go:117] "RemoveContainer" containerID="10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9" Oct 11 08:04:37 crc kubenswrapper[5016]: E1011 08:04:37.444758 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:04:51 crc kubenswrapper[5016]: I1011 08:04:51.133299 5016 scope.go:117] "RemoveContainer" containerID="10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9" Oct 11 08:04:51 crc kubenswrapper[5016]: E1011 08:04:51.134077 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:04:56 crc kubenswrapper[5016]: I1011 08:04:56.176102 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z8jgv"] Oct 11 08:04:56 crc kubenswrapper[5016]: E1011 08:04:56.177109 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bee6792c-24bd-4361-b534-7575a180f4f4" containerName="registry-server" Oct 11 08:04:56 crc kubenswrapper[5016]: I1011 08:04:56.177129 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="bee6792c-24bd-4361-b534-7575a180f4f4" containerName="registry-server" Oct 11 08:04:56 crc kubenswrapper[5016]: E1011 08:04:56.177161 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bee6792c-24bd-4361-b534-7575a180f4f4" containerName="extract-utilities" Oct 11 08:04:56 crc kubenswrapper[5016]: I1011 08:04:56.177167 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="bee6792c-24bd-4361-b534-7575a180f4f4" containerName="extract-utilities" Oct 11 08:04:56 
crc kubenswrapper[5016]: E1011 08:04:56.177181 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bee6792c-24bd-4361-b534-7575a180f4f4" containerName="extract-content" Oct 11 08:04:56 crc kubenswrapper[5016]: I1011 08:04:56.177189 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="bee6792c-24bd-4361-b534-7575a180f4f4" containerName="extract-content" Oct 11 08:04:56 crc kubenswrapper[5016]: I1011 08:04:56.177417 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="bee6792c-24bd-4361-b534-7575a180f4f4" containerName="registry-server" Oct 11 08:04:56 crc kubenswrapper[5016]: I1011 08:04:56.178986 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z8jgv" Oct 11 08:04:56 crc kubenswrapper[5016]: I1011 08:04:56.204485 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z8jgv"] Oct 11 08:04:56 crc kubenswrapper[5016]: I1011 08:04:56.293519 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgpln\" (UniqueName: \"kubernetes.io/projected/de9de774-f325-408e-bae1-c138f1e3b469-kube-api-access-sgpln\") pod \"certified-operators-z8jgv\" (UID: \"de9de774-f325-408e-bae1-c138f1e3b469\") " pod="openshift-marketplace/certified-operators-z8jgv" Oct 11 08:04:56 crc kubenswrapper[5016]: I1011 08:04:56.293749 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de9de774-f325-408e-bae1-c138f1e3b469-utilities\") pod \"certified-operators-z8jgv\" (UID: \"de9de774-f325-408e-bae1-c138f1e3b469\") " pod="openshift-marketplace/certified-operators-z8jgv" Oct 11 08:04:56 crc kubenswrapper[5016]: I1011 08:04:56.293810 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de9de774-f325-408e-bae1-c138f1e3b469-catalog-content\") pod \"certified-operators-z8jgv\" (UID: \"de9de774-f325-408e-bae1-c138f1e3b469\") " pod="openshift-marketplace/certified-operators-z8jgv" Oct 11 08:04:56 crc kubenswrapper[5016]: I1011 08:04:56.395412 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de9de774-f325-408e-bae1-c138f1e3b469-catalog-content\") pod \"certified-operators-z8jgv\" (UID: \"de9de774-f325-408e-bae1-c138f1e3b469\") " pod="openshift-marketplace/certified-operators-z8jgv" Oct 11 08:04:56 crc kubenswrapper[5016]: I1011 08:04:56.395499 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgpln\" (UniqueName: \"kubernetes.io/projected/de9de774-f325-408e-bae1-c138f1e3b469-kube-api-access-sgpln\") pod \"certified-operators-z8jgv\" (UID: \"de9de774-f325-408e-bae1-c138f1e3b469\") " pod="openshift-marketplace/certified-operators-z8jgv" Oct 11 08:04:56 crc kubenswrapper[5016]: I1011 08:04:56.395642 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de9de774-f325-408e-bae1-c138f1e3b469-utilities\") pod \"certified-operators-z8jgv\" (UID: \"de9de774-f325-408e-bae1-c138f1e3b469\") " pod="openshift-marketplace/certified-operators-z8jgv" Oct 11 08:04:56 crc kubenswrapper[5016]: I1011 08:04:56.396030 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/de9de774-f325-408e-bae1-c138f1e3b469-catalog-content\") pod \"certified-operators-z8jgv\" (UID: \"de9de774-f325-408e-bae1-c138f1e3b469\") " pod="openshift-marketplace/certified-operators-z8jgv" Oct 11 08:04:56 crc kubenswrapper[5016]: I1011 08:04:56.396137 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de9de774-f325-408e-bae1-c138f1e3b469-utilities\") pod \"certified-operators-z8jgv\" (UID: \"de9de774-f325-408e-bae1-c138f1e3b469\") " pod="openshift-marketplace/certified-operators-z8jgv" Oct 11 08:04:56 crc kubenswrapper[5016]: I1011 08:04:56.432209 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgpln\" (UniqueName: \"kubernetes.io/projected/de9de774-f325-408e-bae1-c138f1e3b469-kube-api-access-sgpln\") pod \"certified-operators-z8jgv\" (UID: \"de9de774-f325-408e-bae1-c138f1e3b469\") " pod="openshift-marketplace/certified-operators-z8jgv" Oct 11 08:04:56 crc kubenswrapper[5016]: I1011 08:04:56.507444 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z8jgv" Oct 11 08:04:57 crc kubenswrapper[5016]: I1011 08:04:57.021624 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z8jgv"] Oct 11 08:04:57 crc kubenswrapper[5016]: E1011 08:04:57.392140 5016 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde9de774_f325_408e_bae1_c138f1e3b469.slice/crio-conmon-69cd7b71e8358f5414243c5fe9096ef017de92e446accdd30855654e89cb0a6a.scope\": RecentStats: unable to find data in memory cache]" Oct 11 08:04:57 crc kubenswrapper[5016]: I1011 08:04:57.690169 5016 generic.go:334] "Generic (PLEG): container finished" podID="de9de774-f325-408e-bae1-c138f1e3b469" containerID="69cd7b71e8358f5414243c5fe9096ef017de92e446accdd30855654e89cb0a6a" exitCode=0 Oct 11 08:04:57 crc kubenswrapper[5016]: I1011 08:04:57.690258 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8jgv" event={"ID":"de9de774-f325-408e-bae1-c138f1e3b469","Type":"ContainerDied","Data":"69cd7b71e8358f5414243c5fe9096ef017de92e446accdd30855654e89cb0a6a"} Oct 11 08:04:57 crc kubenswrapper[5016]: I1011 08:04:57.690542 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8jgv" event={"ID":"de9de774-f325-408e-bae1-c138f1e3b469","Type":"ContainerStarted","Data":"2e7c76d2fd97f56999423d6cc13ab48a0c8aecda4894cf670436ca80d67ef14b"} Oct 11 08:04:58 crc kubenswrapper[5016]: I1011 08:04:58.703435 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8jgv" event={"ID":"de9de774-f325-408e-bae1-c138f1e3b469","Type":"ContainerStarted","Data":"857ed82f4bd33cb06de1e92d52945c7b6364891f6e34f7482f6074505849a90a"} Oct 11 08:04:59 crc kubenswrapper[5016]: I1011 08:04:59.714213 5016 generic.go:334] "Generic (PLEG): container finished" podID="de9de774-f325-408e-bae1-c138f1e3b469" containerID="857ed82f4bd33cb06de1e92d52945c7b6364891f6e34f7482f6074505849a90a" exitCode=0 Oct 11 08:04:59 crc kubenswrapper[5016]: I1011 08:04:59.714582 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8jgv" 
event={"ID":"de9de774-f325-408e-bae1-c138f1e3b469","Type":"ContainerDied","Data":"857ed82f4bd33cb06de1e92d52945c7b6364891f6e34f7482f6074505849a90a"} Oct 11 08:05:00 crc kubenswrapper[5016]: I1011 08:05:00.731169 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8jgv" event={"ID":"de9de774-f325-408e-bae1-c138f1e3b469","Type":"ContainerStarted","Data":"ef186f8fbb0eceeaa114c2d906e20f33d04de3e151a7fb5455019bd3e8e31059"} Oct 11 08:05:00 crc kubenswrapper[5016]: I1011 08:05:00.752368 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z8jgv" podStartSLOduration=2.337266483 podStartE2EDuration="4.752353568s" podCreationTimestamp="2025-10-11 08:04:56 +0000 UTC" firstStartedPulling="2025-10-11 08:04:57.692099527 +0000 UTC m=+1485.592555473" lastFinishedPulling="2025-10-11 08:05:00.107186572 +0000 UTC m=+1488.007642558" observedRunningTime="2025-10-11 08:05:00.748217429 +0000 UTC m=+1488.648673375" watchObservedRunningTime="2025-10-11 08:05:00.752353568 +0000 UTC m=+1488.652809504" Oct 11 08:05:04 crc kubenswrapper[5016]: I1011 08:05:04.133764 5016 scope.go:117] "RemoveContainer" containerID="10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9" Oct 11 08:05:04 crc kubenswrapper[5016]: E1011 08:05:04.134319 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:05:06 crc kubenswrapper[5016]: I1011 08:05:06.509593 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z8jgv" Oct 11 08:05:06 crc kubenswrapper[5016]: I1011 08:05:06.510257 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z8jgv" Oct 11 08:05:06 crc kubenswrapper[5016]: I1011 08:05:06.561603 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z8jgv" Oct 11 08:05:06 crc kubenswrapper[5016]: I1011 08:05:06.834045 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z8jgv" Oct 11 08:05:06 crc kubenswrapper[5016]: I1011 08:05:06.883781 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z8jgv"] Oct 11 08:05:08 crc kubenswrapper[5016]: I1011 08:05:08.816412 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z8jgv" podUID="de9de774-f325-408e-bae1-c138f1e3b469" containerName="registry-server" containerID="cri-o://ef186f8fbb0eceeaa114c2d906e20f33d04de3e151a7fb5455019bd3e8e31059" gracePeriod=2 Oct 11 08:05:09 crc kubenswrapper[5016]: I1011 08:05:09.096584 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-sbj8x"] Oct 11 08:05:09 crc kubenswrapper[5016]: I1011 08:05:09.113061 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-sbj8x"] Oct 11 08:05:09 crc kubenswrapper[5016]: I1011 08:05:09.147489 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="dfcd1056-e001-48cb-9588-9c664ae140a2" path="/var/lib/kubelet/pods/dfcd1056-e001-48cb-9588-9c664ae140a2/volumes" Oct 11 08:05:09 crc kubenswrapper[5016]: I1011 08:05:09.351761 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z8jgv" Oct 11 08:05:09 crc kubenswrapper[5016]: I1011 08:05:09.390557 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgpln\" (UniqueName: \"kubernetes.io/projected/de9de774-f325-408e-bae1-c138f1e3b469-kube-api-access-sgpln\") pod \"de9de774-f325-408e-bae1-c138f1e3b469\" (UID: \"de9de774-f325-408e-bae1-c138f1e3b469\") " Oct 11 08:05:09 crc kubenswrapper[5016]: I1011 08:05:09.398820 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de9de774-f325-408e-bae1-c138f1e3b469-kube-api-access-sgpln" (OuterVolumeSpecName: "kube-api-access-sgpln") pod "de9de774-f325-408e-bae1-c138f1e3b469" (UID: "de9de774-f325-408e-bae1-c138f1e3b469"). InnerVolumeSpecName "kube-api-access-sgpln". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:05:09 crc kubenswrapper[5016]: I1011 08:05:09.491851 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de9de774-f325-408e-bae1-c138f1e3b469-catalog-content\") pod \"de9de774-f325-408e-bae1-c138f1e3b469\" (UID: \"de9de774-f325-408e-bae1-c138f1e3b469\") " Oct 11 08:05:09 crc kubenswrapper[5016]: I1011 08:05:09.491904 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de9de774-f325-408e-bae1-c138f1e3b469-utilities\") pod \"de9de774-f325-408e-bae1-c138f1e3b469\" (UID: \"de9de774-f325-408e-bae1-c138f1e3b469\") " Oct 11 08:05:09 crc kubenswrapper[5016]: I1011 08:05:09.492548 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgpln\" (UniqueName: \"kubernetes.io/projected/de9de774-f325-408e-bae1-c138f1e3b469-kube-api-access-sgpln\") on node \"crc\" DevicePath \"\"" Oct 11 08:05:09 crc kubenswrapper[5016]: I1011 08:05:09.493120 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de9de774-f325-408e-bae1-c138f1e3b469-utilities" (OuterVolumeSpecName: "utilities") pod "de9de774-f325-408e-bae1-c138f1e3b469" (UID: "de9de774-f325-408e-bae1-c138f1e3b469"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:05:09 crc kubenswrapper[5016]: I1011 08:05:09.538763 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de9de774-f325-408e-bae1-c138f1e3b469-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de9de774-f325-408e-bae1-c138f1e3b469" (UID: "de9de774-f325-408e-bae1-c138f1e3b469"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:05:09 crc kubenswrapper[5016]: I1011 08:05:09.594420 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de9de774-f325-408e-bae1-c138f1e3b469-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 08:05:09 crc kubenswrapper[5016]: I1011 08:05:09.594454 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de9de774-f325-408e-bae1-c138f1e3b469-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 08:05:09 crc kubenswrapper[5016]: I1011 08:05:09.828477 5016 generic.go:334] "Generic (PLEG): container finished" podID="de9de774-f325-408e-bae1-c138f1e3b469" containerID="ef186f8fbb0eceeaa114c2d906e20f33d04de3e151a7fb5455019bd3e8e31059" exitCode=0 Oct 11 08:05:09 crc kubenswrapper[5016]: I1011 08:05:09.828525 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8jgv" event={"ID":"de9de774-f325-408e-bae1-c138f1e3b469","Type":"ContainerDied","Data":"ef186f8fbb0eceeaa114c2d906e20f33d04de3e151a7fb5455019bd3e8e31059"} Oct 11 08:05:09 crc kubenswrapper[5016]: I1011 08:05:09.828554 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8jgv" event={"ID":"de9de774-f325-408e-bae1-c138f1e3b469","Type":"ContainerDied","Data":"2e7c76d2fd97f56999423d6cc13ab48a0c8aecda4894cf670436ca80d67ef14b"} Oct 11 08:05:09 crc kubenswrapper[5016]: I1011 08:05:09.828574 5016 scope.go:117] "RemoveContainer" containerID="ef186f8fbb0eceeaa114c2d906e20f33d04de3e151a7fb5455019bd3e8e31059" Oct 11 08:05:09 crc kubenswrapper[5016]: I1011 08:05:09.828595 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z8jgv" Oct 11 08:05:09 crc kubenswrapper[5016]: I1011 08:05:09.869924 5016 scope.go:117] "RemoveContainer" containerID="857ed82f4bd33cb06de1e92d52945c7b6364891f6e34f7482f6074505849a90a" Oct 11 08:05:09 crc kubenswrapper[5016]: I1011 08:05:09.899857 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z8jgv"] Oct 11 08:05:09 crc kubenswrapper[5016]: I1011 08:05:09.906710 5016 scope.go:117] "RemoveContainer" containerID="69cd7b71e8358f5414243c5fe9096ef017de92e446accdd30855654e89cb0a6a" Oct 11 08:05:09 crc kubenswrapper[5016]: I1011 08:05:09.920931 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z8jgv"] Oct 11 08:05:09 crc kubenswrapper[5016]: I1011 08:05:09.963836 5016 scope.go:117] "RemoveContainer" containerID="ef186f8fbb0eceeaa114c2d906e20f33d04de3e151a7fb5455019bd3e8e31059" Oct 11 08:05:09 crc kubenswrapper[5016]: E1011 08:05:09.964312 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef186f8fbb0eceeaa114c2d906e20f33d04de3e151a7fb5455019bd3e8e31059\": container with ID starting with ef186f8fbb0eceeaa114c2d906e20f33d04de3e151a7fb5455019bd3e8e31059 not found: ID does not exist" containerID="ef186f8fbb0eceeaa114c2d906e20f33d04de3e151a7fb5455019bd3e8e31059" Oct 11 08:05:09 crc kubenswrapper[5016]: I1011 08:05:09.964395 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef186f8fbb0eceeaa114c2d906e20f33d04de3e151a7fb5455019bd3e8e31059"} err="failed to get container status \"ef186f8fbb0eceeaa114c2d906e20f33d04de3e151a7fb5455019bd3e8e31059\": rpc error: code = NotFound desc = could not find container \"ef186f8fbb0eceeaa114c2d906e20f33d04de3e151a7fb5455019bd3e8e31059\": container with ID starting with ef186f8fbb0eceeaa114c2d906e20f33d04de3e151a7fb5455019bd3e8e31059 not found: ID does not exist" Oct 11 08:05:09 crc kubenswrapper[5016]: I1011 08:05:09.964429 5016 scope.go:117] "RemoveContainer" containerID="857ed82f4bd33cb06de1e92d52945c7b6364891f6e34f7482f6074505849a90a" Oct 11 08:05:09 crc kubenswrapper[5016]: E1011 08:05:09.964740 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"857ed82f4bd33cb06de1e92d52945c7b6364891f6e34f7482f6074505849a90a\": container with ID starting with 857ed82f4bd33cb06de1e92d52945c7b6364891f6e34f7482f6074505849a90a not found: ID does not exist" containerID="857ed82f4bd33cb06de1e92d52945c7b6364891f6e34f7482f6074505849a90a" Oct 11 08:05:09 crc kubenswrapper[5016]: I1011 08:05:09.964777 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"857ed82f4bd33cb06de1e92d52945c7b6364891f6e34f7482f6074505849a90a"} err="failed to get container status \"857ed82f4bd33cb06de1e92d52945c7b6364891f6e34f7482f6074505849a90a\": rpc error: code = NotFound desc = could not find container \"857ed82f4bd33cb06de1e92d52945c7b6364891f6e34f7482f6074505849a90a\": container with ID starting with 857ed82f4bd33cb06de1e92d52945c7b6364891f6e34f7482f6074505849a90a not found: ID does not exist" Oct 11 08:05:09 crc kubenswrapper[5016]: I1011 08:05:09.964804 5016 scope.go:117] "RemoveContainer" containerID="69cd7b71e8358f5414243c5fe9096ef017de92e446accdd30855654e89cb0a6a" Oct 11 08:05:09 crc kubenswrapper[5016]: E1011 08:05:09.965072 5016 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"69cd7b71e8358f5414243c5fe9096ef017de92e446accdd30855654e89cb0a6a\": container with ID starting with 69cd7b71e8358f5414243c5fe9096ef017de92e446accdd30855654e89cb0a6a not found: ID does not exist" containerID="69cd7b71e8358f5414243c5fe9096ef017de92e446accdd30855654e89cb0a6a" Oct 11 08:05:09 crc kubenswrapper[5016]: I1011 08:05:09.965141 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69cd7b71e8358f5414243c5fe9096ef017de92e446accdd30855654e89cb0a6a"} err="failed to get container status \"69cd7b71e8358f5414243c5fe9096ef017de92e446accdd30855654e89cb0a6a\": rpc error: code = NotFound desc = could not find container \"69cd7b71e8358f5414243c5fe9096ef017de92e446accdd30855654e89cb0a6a\": container with ID starting with 69cd7b71e8358f5414243c5fe9096ef017de92e446accdd30855654e89cb0a6a not found: ID does not exist" Oct 11 08:05:11 crc kubenswrapper[5016]: I1011 08:05:11.148516 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de9de774-f325-408e-bae1-c138f1e3b469" path="/var/lib/kubelet/pods/de9de774-f325-408e-bae1-c138f1e3b469/volumes" Oct 11 08:05:14 crc kubenswrapper[5016]: I1011 08:05:14.045285 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-6mk28"] Oct 11 08:05:14 crc kubenswrapper[5016]: I1011 08:05:14.061801 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-lqcqh"] Oct 11 08:05:14 crc kubenswrapper[5016]: I1011 08:05:14.076389 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-6mk28"] Oct 11 08:05:14 crc kubenswrapper[5016]: I1011 08:05:14.094200 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-lqcqh"] Oct 11 08:05:14 crc kubenswrapper[5016]: I1011 08:05:14.895636 5016 generic.go:334] "Generic (PLEG): container finished" podID="901099a9-9b33-4b7a-a393-25c97dff87b6" containerID="b16f35fd4574311bece34fd97320002f83f216ff65a9373375a757a698e60bdb" exitCode=0 Oct 11 08:05:14 crc kubenswrapper[5016]: I1011 08:05:14.895725 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-stzhz" event={"ID":"901099a9-9b33-4b7a-a393-25c97dff87b6","Type":"ContainerDied","Data":"b16f35fd4574311bece34fd97320002f83f216ff65a9373375a757a698e60bdb"} Oct 11 08:05:15 crc kubenswrapper[5016]: I1011 08:05:15.133287 5016 scope.go:117] "RemoveContainer" containerID="10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9" Oct 11 08:05:15 crc kubenswrapper[5016]: E1011 08:05:15.133725 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:05:15 crc kubenswrapper[5016]: I1011 08:05:15.153606 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73eb1774-744a-4bd7-9f6f-dcf7e828bc4e" path="/var/lib/kubelet/pods/73eb1774-744a-4bd7-9f6f-dcf7e828bc4e/volumes" Oct 11 08:05:15 crc kubenswrapper[5016]: I1011 08:05:15.154701 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="754058a4-0d11-41e2-8692-7365db46a03b" 
path="/var/lib/kubelet/pods/754058a4-0d11-41e2-8692-7365db46a03b/volumes" Oct 11 08:05:16 crc kubenswrapper[5016]: I1011 08:05:16.356088 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-stzhz" Oct 11 08:05:16 crc kubenswrapper[5016]: I1011 08:05:16.529380 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/901099a9-9b33-4b7a-a393-25c97dff87b6-inventory\") pod \"901099a9-9b33-4b7a-a393-25c97dff87b6\" (UID: \"901099a9-9b33-4b7a-a393-25c97dff87b6\") " Oct 11 08:05:16 crc kubenswrapper[5016]: I1011 08:05:16.529476 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd5r4\" (UniqueName: \"kubernetes.io/projected/901099a9-9b33-4b7a-a393-25c97dff87b6-kube-api-access-dd5r4\") pod \"901099a9-9b33-4b7a-a393-25c97dff87b6\" (UID: \"901099a9-9b33-4b7a-a393-25c97dff87b6\") " Oct 11 08:05:16 crc kubenswrapper[5016]: I1011 08:05:16.529609 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/901099a9-9b33-4b7a-a393-25c97dff87b6-ssh-key\") pod \"901099a9-9b33-4b7a-a393-25c97dff87b6\" (UID: \"901099a9-9b33-4b7a-a393-25c97dff87b6\") " Oct 11 08:05:16 crc kubenswrapper[5016]: I1011 08:05:16.538551 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/901099a9-9b33-4b7a-a393-25c97dff87b6-kube-api-access-dd5r4" (OuterVolumeSpecName: "kube-api-access-dd5r4") pod "901099a9-9b33-4b7a-a393-25c97dff87b6" (UID: "901099a9-9b33-4b7a-a393-25c97dff87b6"). InnerVolumeSpecName "kube-api-access-dd5r4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:05:16 crc kubenswrapper[5016]: I1011 08:05:16.561518 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/901099a9-9b33-4b7a-a393-25c97dff87b6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "901099a9-9b33-4b7a-a393-25c97dff87b6" (UID: "901099a9-9b33-4b7a-a393-25c97dff87b6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:05:16 crc kubenswrapper[5016]: I1011 08:05:16.573178 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/901099a9-9b33-4b7a-a393-25c97dff87b6-inventory" (OuterVolumeSpecName: "inventory") pod "901099a9-9b33-4b7a-a393-25c97dff87b6" (UID: "901099a9-9b33-4b7a-a393-25c97dff87b6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:05:16 crc kubenswrapper[5016]: I1011 08:05:16.633631 5016 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/901099a9-9b33-4b7a-a393-25c97dff87b6-inventory\") on node \"crc\" DevicePath \"\"" Oct 11 08:05:16 crc kubenswrapper[5016]: I1011 08:05:16.633687 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd5r4\" (UniqueName: \"kubernetes.io/projected/901099a9-9b33-4b7a-a393-25c97dff87b6-kube-api-access-dd5r4\") on node \"crc\" DevicePath \"\"" Oct 11 08:05:16 crc kubenswrapper[5016]: I1011 08:05:16.633705 5016 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/901099a9-9b33-4b7a-a393-25c97dff87b6-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 11 08:05:16 crc kubenswrapper[5016]: I1011 08:05:16.914313 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-stzhz" event={"ID":"901099a9-9b33-4b7a-a393-25c97dff87b6","Type":"ContainerDied","Data":"4e3234fabafda291be1b8e711803503c4cff29b34d546af960050fc559189c90"} Oct 11 08:05:16 crc kubenswrapper[5016]: I1011 08:05:16.914354 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e3234fabafda291be1b8e711803503c4cff29b34d546af960050fc559189c90" Oct 11 08:05:16 crc kubenswrapper[5016]: I1011 08:05:16.914387 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-stzhz" Oct 11 08:05:16 crc kubenswrapper[5016]: I1011 08:05:16.999757 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx"] Oct 11 08:05:17 crc kubenswrapper[5016]: E1011 08:05:17.000081 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de9de774-f325-408e-bae1-c138f1e3b469" containerName="registry-server" Oct 11 08:05:17 crc kubenswrapper[5016]: I1011 08:05:17.000097 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="de9de774-f325-408e-bae1-c138f1e3b469" containerName="registry-server" Oct 11 08:05:17 crc kubenswrapper[5016]: E1011 08:05:17.000121 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="901099a9-9b33-4b7a-a393-25c97dff87b6" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 11 08:05:17 crc kubenswrapper[5016]: I1011 08:05:17.000128 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="901099a9-9b33-4b7a-a393-25c97dff87b6" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 11 08:05:17 crc kubenswrapper[5016]: E1011 08:05:17.000149 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de9de774-f325-408e-bae1-c138f1e3b469" containerName="extract-content" Oct 11 08:05:17 crc kubenswrapper[5016]: I1011 08:05:17.000156 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="de9de774-f325-408e-bae1-c138f1e3b469" containerName="extract-content" Oct 11 08:05:17 crc kubenswrapper[5016]: E1011 08:05:17.000171 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de9de774-f325-408e-bae1-c138f1e3b469" containerName="extract-utilities" Oct 11 08:05:17 crc kubenswrapper[5016]: I1011 08:05:17.000177 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="de9de774-f325-408e-bae1-c138f1e3b469" containerName="extract-utilities" Oct 11 08:05:17 crc kubenswrapper[5016]: I1011 08:05:17.000325 5016 
memory_manager.go:354] "RemoveStaleState removing state" podUID="901099a9-9b33-4b7a-a393-25c97dff87b6" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 11 08:05:17 crc kubenswrapper[5016]: I1011 08:05:17.000352 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="de9de774-f325-408e-bae1-c138f1e3b469" containerName="registry-server" Oct 11 08:05:17 crc kubenswrapper[5016]: I1011 08:05:17.000932 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx" Oct 11 08:05:17 crc kubenswrapper[5016]: I1011 08:05:17.003614 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 11 08:05:17 crc kubenswrapper[5016]: I1011 08:05:17.003782 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 08:05:17 crc kubenswrapper[5016]: I1011 08:05:17.003885 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l8l9k" Oct 11 08:05:17 crc kubenswrapper[5016]: I1011 08:05:17.003626 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 08:05:17 crc kubenswrapper[5016]: I1011 08:05:17.024119 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx"] Oct 11 08:05:17 crc kubenswrapper[5016]: I1011 08:05:17.141203 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09f63714-b15e-4c15-be93-c28413f234ff-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx\" (UID: \"09f63714-b15e-4c15-be93-c28413f234ff\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx" Oct 11 08:05:17 crc kubenswrapper[5016]: I1011 08:05:17.141285 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09f63714-b15e-4c15-be93-c28413f234ff-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx\" (UID: \"09f63714-b15e-4c15-be93-c28413f234ff\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx" Oct 11 08:05:17 crc kubenswrapper[5016]: I1011 08:05:17.141336 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7htkr\" (UniqueName: \"kubernetes.io/projected/09f63714-b15e-4c15-be93-c28413f234ff-kube-api-access-7htkr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx\" (UID: \"09f63714-b15e-4c15-be93-c28413f234ff\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx" Oct 11 08:05:17 crc kubenswrapper[5016]: I1011 08:05:17.243219 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09f63714-b15e-4c15-be93-c28413f234ff-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx\" (UID: \"09f63714-b15e-4c15-be93-c28413f234ff\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx" Oct 11 08:05:17 crc kubenswrapper[5016]: I1011 08:05:17.243353 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/09f63714-b15e-4c15-be93-c28413f234ff-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx\" (UID: \"09f63714-b15e-4c15-be93-c28413f234ff\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx" Oct 11 08:05:17 crc kubenswrapper[5016]: I1011 08:05:17.243419 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7htkr\" (UniqueName: \"kubernetes.io/projected/09f63714-b15e-4c15-be93-c28413f234ff-kube-api-access-7htkr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx\" (UID: \"09f63714-b15e-4c15-be93-c28413f234ff\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx" Oct 11 08:05:17 crc kubenswrapper[5016]: I1011 08:05:17.257781 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09f63714-b15e-4c15-be93-c28413f234ff-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx\" (UID: \"09f63714-b15e-4c15-be93-c28413f234ff\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx" Oct 11 08:05:17 crc kubenswrapper[5016]: I1011 08:05:17.257860 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09f63714-b15e-4c15-be93-c28413f234ff-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx\" (UID: \"09f63714-b15e-4c15-be93-c28413f234ff\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx" Oct 11 08:05:17 crc kubenswrapper[5016]: I1011 08:05:17.262993 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7htkr\" (UniqueName: \"kubernetes.io/projected/09f63714-b15e-4c15-be93-c28413f234ff-kube-api-access-7htkr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx\" (UID: \"09f63714-b15e-4c15-be93-c28413f234ff\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx" Oct 11 08:05:17 crc kubenswrapper[5016]: I1011 08:05:17.317095 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx" Oct 11 08:05:17 crc kubenswrapper[5016]: I1011 08:05:17.663076 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx"] Oct 11 08:05:17 crc kubenswrapper[5016]: I1011 08:05:17.925027 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx" event={"ID":"09f63714-b15e-4c15-be93-c28413f234ff","Type":"ContainerStarted","Data":"827f4c18ee3b258e055c37c0d4f1fca278e56dd7cbd47e43497e91e3c7d6a3e5"} Oct 11 08:05:18 crc kubenswrapper[5016]: I1011 08:05:18.936518 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx" event={"ID":"09f63714-b15e-4c15-be93-c28413f234ff","Type":"ContainerStarted","Data":"d12658263a65ff9efe55ec757e654d1c873018ea9014003e416aefecc4703b9d"} Oct 11 08:05:18 crc kubenswrapper[5016]: I1011 08:05:18.967723 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx" podStartSLOduration=2.368550233 podStartE2EDuration="2.967701705s" podCreationTimestamp="2025-10-11 08:05:16 +0000 UTC" firstStartedPulling="2025-10-11 08:05:17.665405405 +0000 UTC m=+1505.565861361" lastFinishedPulling="2025-10-11 08:05:18.264556877 +0000 UTC m=+1506.165012833" observedRunningTime="2025-10-11 08:05:18.954405283 +0000 UTC m=+1506.854861239" watchObservedRunningTime="2025-10-11 08:05:18.967701705 +0000 UTC m=+1506.868157671" Oct 11 08:05:19 crc kubenswrapper[5016]: I1011 08:05:19.029716 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-f912-account-create-kmrpl"] Oct 11 08:05:19 crc kubenswrapper[5016]: I1011 08:05:19.036870 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-f912-account-create-kmrpl"] Oct 11 08:05:19 crc kubenswrapper[5016]: I1011 08:05:19.144248 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a4c8ff5-2303-4034-a5ed-79f3a55c0e09" path="/var/lib/kubelet/pods/5a4c8ff5-2303-4034-a5ed-79f3a55c0e09/volumes" Oct 11 08:05:23 crc kubenswrapper[5016]: I1011 08:05:23.991008 5016 generic.go:334] "Generic (PLEG): container finished" podID="09f63714-b15e-4c15-be93-c28413f234ff" containerID="d12658263a65ff9efe55ec757e654d1c873018ea9014003e416aefecc4703b9d" exitCode=0 Oct 11 08:05:23 crc kubenswrapper[5016]: I1011 08:05:23.991151 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx" event={"ID":"09f63714-b15e-4c15-be93-c28413f234ff","Type":"ContainerDied","Data":"d12658263a65ff9efe55ec757e654d1c873018ea9014003e416aefecc4703b9d"} Oct 11 08:05:24 crc kubenswrapper[5016]: I1011 08:05:24.041971 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e8e4-account-create-wl7gf"] Oct 11 08:05:24 crc kubenswrapper[5016]: I1011 08:05:24.053888 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-073c-account-create-9swf8"] Oct 11 08:05:24 crc kubenswrapper[5016]: I1011 08:05:24.060945 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e8e4-account-create-wl7gf"] Oct 11 08:05:24 crc kubenswrapper[5016]: I1011 08:05:24.067335 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-073c-account-create-9swf8"] Oct 11 08:05:25 crc kubenswrapper[5016]: I1011 
08:05:25.145839 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0427c1c7-53f5-4ea8-a0d2-fd33f27aa5b7" path="/var/lib/kubelet/pods/0427c1c7-53f5-4ea8-a0d2-fd33f27aa5b7/volumes" Oct 11 08:05:25 crc kubenswrapper[5016]: I1011 08:05:25.146812 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d18eb638-3529-417c-a8e1-e95af0025640" path="/var/lib/kubelet/pods/d18eb638-3529-417c-a8e1-e95af0025640/volumes" Oct 11 08:05:25 crc kubenswrapper[5016]: I1011 08:05:25.406640 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx" Oct 11 08:05:25 crc kubenswrapper[5016]: I1011 08:05:25.507055 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09f63714-b15e-4c15-be93-c28413f234ff-ssh-key\") pod \"09f63714-b15e-4c15-be93-c28413f234ff\" (UID: \"09f63714-b15e-4c15-be93-c28413f234ff\") " Oct 11 08:05:25 crc kubenswrapper[5016]: I1011 08:05:25.507458 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09f63714-b15e-4c15-be93-c28413f234ff-inventory\") pod \"09f63714-b15e-4c15-be93-c28413f234ff\" (UID: \"09f63714-b15e-4c15-be93-c28413f234ff\") " Oct 11 08:05:25 crc kubenswrapper[5016]: I1011 08:05:25.507509 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7htkr\" (UniqueName: \"kubernetes.io/projected/09f63714-b15e-4c15-be93-c28413f234ff-kube-api-access-7htkr\") pod \"09f63714-b15e-4c15-be93-c28413f234ff\" (UID: \"09f63714-b15e-4c15-be93-c28413f234ff\") " Oct 11 08:05:25 crc kubenswrapper[5016]: I1011 08:05:25.514366 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09f63714-b15e-4c15-be93-c28413f234ff-kube-api-access-7htkr" (OuterVolumeSpecName: "kube-api-access-7htkr") pod "09f63714-b15e-4c15-be93-c28413f234ff" (UID: "09f63714-b15e-4c15-be93-c28413f234ff"). InnerVolumeSpecName "kube-api-access-7htkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:05:25 crc kubenswrapper[5016]: I1011 08:05:25.541680 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09f63714-b15e-4c15-be93-c28413f234ff-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "09f63714-b15e-4c15-be93-c28413f234ff" (UID: "09f63714-b15e-4c15-be93-c28413f234ff"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:05:25 crc kubenswrapper[5016]: I1011 08:05:25.541705 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09f63714-b15e-4c15-be93-c28413f234ff-inventory" (OuterVolumeSpecName: "inventory") pod "09f63714-b15e-4c15-be93-c28413f234ff" (UID: "09f63714-b15e-4c15-be93-c28413f234ff"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:05:25 crc kubenswrapper[5016]: I1011 08:05:25.610106 5016 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09f63714-b15e-4c15-be93-c28413f234ff-inventory\") on node \"crc\" DevicePath \"\"" Oct 11 08:05:25 crc kubenswrapper[5016]: I1011 08:05:25.610164 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7htkr\" (UniqueName: \"kubernetes.io/projected/09f63714-b15e-4c15-be93-c28413f234ff-kube-api-access-7htkr\") on node \"crc\" DevicePath \"\"" Oct 11 08:05:25 crc kubenswrapper[5016]: I1011 08:05:25.610179 5016 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09f63714-b15e-4c15-be93-c28413f234ff-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 11 08:05:26 crc kubenswrapper[5016]: I1011 08:05:26.008217 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx" event={"ID":"09f63714-b15e-4c15-be93-c28413f234ff","Type":"ContainerDied","Data":"827f4c18ee3b258e055c37c0d4f1fca278e56dd7cbd47e43497e91e3c7d6a3e5"} Oct 11 08:05:26 crc kubenswrapper[5016]: I1011 08:05:26.008253 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="827f4c18ee3b258e055c37c0d4f1fca278e56dd7cbd47e43497e91e3c7d6a3e5" Oct 11 08:05:26 crc kubenswrapper[5016]: I1011 08:05:26.008300 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx" Oct 11 08:05:26 crc kubenswrapper[5016]: I1011 08:05:26.127161 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2bt88"] Oct 11 08:05:26 crc kubenswrapper[5016]: E1011 08:05:26.127544 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09f63714-b15e-4c15-be93-c28413f234ff" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 11 08:05:26 crc kubenswrapper[5016]: I1011 08:05:26.127566 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f63714-b15e-4c15-be93-c28413f234ff" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 11 08:05:26 crc kubenswrapper[5016]: I1011 08:05:26.127775 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="09f63714-b15e-4c15-be93-c28413f234ff" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 11 08:05:26 crc kubenswrapper[5016]: I1011 08:05:26.129460 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2bt88" Oct 11 08:05:26 crc kubenswrapper[5016]: I1011 08:05:26.147252 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 08:05:26 crc kubenswrapper[5016]: I1011 08:05:26.147435 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 11 08:05:26 crc kubenswrapper[5016]: I1011 08:05:26.147733 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 08:05:26 crc kubenswrapper[5016]: I1011 08:05:26.147907 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l8l9k" Oct 11 08:05:26 crc kubenswrapper[5016]: I1011 08:05:26.170057 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2bt88"] Oct 11 08:05:26 crc kubenswrapper[5016]: I1011 08:05:26.219908 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q7lw\" (UniqueName: \"kubernetes.io/projected/7855134b-9e53-43a5-ac30-b63db80d9231-kube-api-access-9q7lw\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2bt88\" (UID: \"7855134b-9e53-43a5-ac30-b63db80d9231\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2bt88" Oct 11 08:05:26 crc kubenswrapper[5016]: I1011 08:05:26.220039 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7855134b-9e53-43a5-ac30-b63db80d9231-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2bt88\" (UID: \"7855134b-9e53-43a5-ac30-b63db80d9231\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2bt88" Oct 11 08:05:26 crc kubenswrapper[5016]: I1011 08:05:26.220073 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7855134b-9e53-43a5-ac30-b63db80d9231-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2bt88\" (UID: \"7855134b-9e53-43a5-ac30-b63db80d9231\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2bt88" Oct 11 08:05:26 crc kubenswrapper[5016]: I1011 08:05:26.321975 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7855134b-9e53-43a5-ac30-b63db80d9231-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2bt88\" (UID: \"7855134b-9e53-43a5-ac30-b63db80d9231\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2bt88" Oct 11 08:05:26 crc kubenswrapper[5016]: I1011 08:05:26.322082 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q7lw\" (UniqueName: \"kubernetes.io/projected/7855134b-9e53-43a5-ac30-b63db80d9231-kube-api-access-9q7lw\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2bt88\" (UID: \"7855134b-9e53-43a5-ac30-b63db80d9231\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2bt88" Oct 11 08:05:26 crc kubenswrapper[5016]: I1011 08:05:26.322176 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7855134b-9e53-43a5-ac30-b63db80d9231-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2bt88\" (UID: 
\"7855134b-9e53-43a5-ac30-b63db80d9231\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2bt88" Oct 11 08:05:26 crc kubenswrapper[5016]: I1011 08:05:26.325453 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7855134b-9e53-43a5-ac30-b63db80d9231-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2bt88\" (UID: \"7855134b-9e53-43a5-ac30-b63db80d9231\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2bt88" Oct 11 08:05:26 crc kubenswrapper[5016]: I1011 08:05:26.325584 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7855134b-9e53-43a5-ac30-b63db80d9231-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2bt88\" (UID: \"7855134b-9e53-43a5-ac30-b63db80d9231\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2bt88" Oct 11 08:05:26 crc kubenswrapper[5016]: I1011 08:05:26.336931 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q7lw\" (UniqueName: \"kubernetes.io/projected/7855134b-9e53-43a5-ac30-b63db80d9231-kube-api-access-9q7lw\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2bt88\" (UID: \"7855134b-9e53-43a5-ac30-b63db80d9231\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2bt88" Oct 11 08:05:26 crc kubenswrapper[5016]: I1011 08:05:26.484093 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2bt88" Oct 11 08:05:26 crc kubenswrapper[5016]: I1011 08:05:26.837139 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2bt88"] Oct 11 08:05:27 crc kubenswrapper[5016]: I1011 08:05:27.018621 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2bt88" event={"ID":"7855134b-9e53-43a5-ac30-b63db80d9231","Type":"ContainerStarted","Data":"b8f9517922a45f91f230bf16905c8aaca1a27486d1a739001038a7dfc19a89f6"} Oct 11 08:05:28 crc kubenswrapper[5016]: I1011 08:05:28.031662 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2bt88" event={"ID":"7855134b-9e53-43a5-ac30-b63db80d9231","Type":"ContainerStarted","Data":"e79ecb022f4e4871aad5b3263de7e7f0c8baa897b9f31b7851a0a81841cf9073"} Oct 11 08:05:28 crc kubenswrapper[5016]: I1011 08:05:28.047898 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2bt88" podStartSLOduration=1.62350141 podStartE2EDuration="2.047878896s" podCreationTimestamp="2025-10-11 08:05:26 +0000 UTC" firstStartedPulling="2025-10-11 08:05:26.841980482 +0000 UTC m=+1514.742436438" lastFinishedPulling="2025-10-11 08:05:27.266357978 +0000 UTC m=+1515.166813924" observedRunningTime="2025-10-11 08:05:28.04764812 +0000 UTC m=+1515.948104096" watchObservedRunningTime="2025-10-11 08:05:28.047878896 +0000 UTC m=+1515.948334842" Oct 11 08:05:29 crc kubenswrapper[5016]: I1011 08:05:29.134416 5016 scope.go:117] "RemoveContainer" containerID="10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9" Oct 11 08:05:29 crc kubenswrapper[5016]: E1011 08:05:29.136411 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:05:37 crc kubenswrapper[5016]: I1011 08:05:37.558558 5016 scope.go:117] "RemoveContainer" containerID="fed0d6bafad5c985f7d7212339de3dffe347421dc73b13ce21c08de309376438" Oct 11 08:05:37 crc kubenswrapper[5016]: I1011 08:05:37.594179 5016 scope.go:117] "RemoveContainer" containerID="dc3b9aba37b812e9adbc72c37c1633f4dd94e95327f3f40832c0de8ef91d5a40" Oct 11 08:05:37 crc kubenswrapper[5016]: I1011 08:05:37.647337 5016 scope.go:117] "RemoveContainer" containerID="db8659dda4eed8d0787579d6eef7a57a1ab40b4533ec07186356b9cb99ab7b55" Oct 11 08:05:37 crc kubenswrapper[5016]: I1011 08:05:37.673635 5016 scope.go:117] "RemoveContainer" containerID="00ab27166a916c7cded3f6bbe9f7c9eebdae8d377b70503fb0f38b6f16a3ad97" Oct 11 08:05:37 crc kubenswrapper[5016]: I1011 08:05:37.723130 5016 scope.go:117] "RemoveContainer" containerID="6306ec307114e7fe5459c3b243199e1d9ad702e916aa6df858fe468f0a12a30a" Oct 11 08:05:37 crc kubenswrapper[5016]: I1011 08:05:37.762318 5016 scope.go:117] "RemoveContainer" containerID="7182ffebc9f6564ed5079321d2e3b3816b9e139ec3a4b838cf34c49fbd56b9e4" Oct 11 08:05:41 crc kubenswrapper[5016]: I1011 08:05:41.060274 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-zvgbq"] Oct 11 08:05:41 crc kubenswrapper[5016]: I1011 08:05:41.069391 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-b7lpm"] Oct 11 08:05:41 crc kubenswrapper[5016]: I1011 08:05:41.078747 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-66h42"] Oct 11 08:05:41 crc kubenswrapper[5016]: I1011 08:05:41.090210 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-66h42"] Oct 11 08:05:41 crc kubenswrapper[5016]: I1011 08:05:41.099845 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-zvgbq"] Oct 11 08:05:41 crc kubenswrapper[5016]: I1011 08:05:41.110476 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-b7lpm"] Oct 11 08:05:41 crc kubenswrapper[5016]: I1011 08:05:41.133704 5016 scope.go:117] "RemoveContainer" containerID="10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9" Oct 11 08:05:41 crc kubenswrapper[5016]: E1011 08:05:41.134167 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:05:41 crc kubenswrapper[5016]: I1011 08:05:41.147729 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c2cbade-3503-443a-93b5-17e53e532a6c" path="/var/lib/kubelet/pods/7c2cbade-3503-443a-93b5-17e53e532a6c/volumes" Oct 11 08:05:41 crc kubenswrapper[5016]: I1011 08:05:41.148748 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf48b893-4872-446b-9e65-d7f16bd21b40" path="/var/lib/kubelet/pods/cf48b893-4872-446b-9e65-d7f16bd21b40/volumes" Oct 11 08:05:41 crc kubenswrapper[5016]: I1011 08:05:41.149394 5016 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="d8a3bd97-03dd-4dcb-9538-76eba3893f60" path="/var/lib/kubelet/pods/d8a3bd97-03dd-4dcb-9538-76eba3893f60/volumes"
Oct 11 08:05:46 crc kubenswrapper[5016]: I1011 08:05:46.029641 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-wnxjs"]
Oct 11 08:05:46 crc kubenswrapper[5016]: I1011 08:05:46.038944 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-wnxjs"]
Oct 11 08:05:47 crc kubenswrapper[5016]: I1011 08:05:47.164903 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5718a79-8ed4-45db-bcc0-f11946055cc0" path="/var/lib/kubelet/pods/c5718a79-8ed4-45db-bcc0-f11946055cc0/volumes"
Oct 11 08:05:51 crc kubenswrapper[5016]: I1011 08:05:51.056831 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-qjqc4"]
Oct 11 08:05:51 crc kubenswrapper[5016]: I1011 08:05:51.068630 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-qjqc4"]
Oct 11 08:05:51 crc kubenswrapper[5016]: I1011 08:05:51.149240 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0af02720-0f53-4774-b530-4fb491f32429" path="/var/lib/kubelet/pods/0af02720-0f53-4774-b530-4fb491f32429/volumes"
Oct 11 08:05:52 crc kubenswrapper[5016]: I1011 08:05:52.035178 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-139c-account-create-khk7x"]
Oct 11 08:05:52 crc kubenswrapper[5016]: I1011 08:05:52.045027 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-aa8f-account-create-xrlsz"]
Oct 11 08:05:52 crc kubenswrapper[5016]: I1011 08:05:52.054613 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-5630-account-create-wbhsr"]
Oct 11 08:05:52 crc kubenswrapper[5016]: I1011 08:05:52.066604 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-139c-account-create-khk7x"]
Oct 11 08:05:52 crc kubenswrapper[5016]: I1011 08:05:52.074353 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-5630-account-create-wbhsr"]
Oct 11 08:05:52 crc kubenswrapper[5016]: I1011 08:05:52.081510 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-aa8f-account-create-xrlsz"]
Oct 11 08:05:52 crc kubenswrapper[5016]: I1011 08:05:52.133295 5016 scope.go:117] "RemoveContainer" containerID="10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9"
Oct 11 08:05:52 crc kubenswrapper[5016]: E1011 08:05:52.133508 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:05:53 crc kubenswrapper[5016]: I1011 08:05:53.141948 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20f17f09-937d-4e0c-8a19-a6d6770e6d89" path="/var/lib/kubelet/pods/20f17f09-937d-4e0c-8a19-a6d6770e6d89/volumes"
Oct 11 08:05:53 crc kubenswrapper[5016]: I1011 08:05:53.143327 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c1b5c66-e73e-4029-b13e-dad61f734028" path="/var/lib/kubelet/pods/6c1b5c66-e73e-4029-b13e-dad61f734028/volumes"
Oct 11 08:05:53 crc kubenswrapper[5016]: I1011 08:05:53.144022 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b391075-226a-4652-998b-a896edf77c08" path="/var/lib/kubelet/pods/8b391075-226a-4652-998b-a896edf77c08/volumes"
Oct 11 08:06:05 crc kubenswrapper[5016]: I1011 08:06:05.134709 5016 scope.go:117] "RemoveContainer" containerID="10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9"
Oct 11 08:06:05 crc kubenswrapper[5016]: E1011 08:06:05.135894 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:06:09 crc kubenswrapper[5016]: I1011 08:06:09.467115 5016 generic.go:334] "Generic (PLEG): container finished" podID="7855134b-9e53-43a5-ac30-b63db80d9231" containerID="e79ecb022f4e4871aad5b3263de7e7f0c8baa897b9f31b7851a0a81841cf9073" exitCode=0
Oct 11 08:06:09 crc kubenswrapper[5016]: I1011 08:06:09.467233 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2bt88" event={"ID":"7855134b-9e53-43a5-ac30-b63db80d9231","Type":"ContainerDied","Data":"e79ecb022f4e4871aad5b3263de7e7f0c8baa897b9f31b7851a0a81841cf9073"}
Oct 11 08:06:10 crc kubenswrapper[5016]: I1011 08:06:10.971495 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2bt88"
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.082347 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7855134b-9e53-43a5-ac30-b63db80d9231-ssh-key\") pod \"7855134b-9e53-43a5-ac30-b63db80d9231\" (UID: \"7855134b-9e53-43a5-ac30-b63db80d9231\") "
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.082414 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7855134b-9e53-43a5-ac30-b63db80d9231-inventory\") pod \"7855134b-9e53-43a5-ac30-b63db80d9231\" (UID: \"7855134b-9e53-43a5-ac30-b63db80d9231\") "
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.082447 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q7lw\" (UniqueName: \"kubernetes.io/projected/7855134b-9e53-43a5-ac30-b63db80d9231-kube-api-access-9q7lw\") pod \"7855134b-9e53-43a5-ac30-b63db80d9231\" (UID: \"7855134b-9e53-43a5-ac30-b63db80d9231\") "
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.092057 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7855134b-9e53-43a5-ac30-b63db80d9231-kube-api-access-9q7lw" (OuterVolumeSpecName: "kube-api-access-9q7lw") pod "7855134b-9e53-43a5-ac30-b63db80d9231" (UID: "7855134b-9e53-43a5-ac30-b63db80d9231"). InnerVolumeSpecName "kube-api-access-9q7lw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.121174 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7855134b-9e53-43a5-ac30-b63db80d9231-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7855134b-9e53-43a5-ac30-b63db80d9231" (UID: "7855134b-9e53-43a5-ac30-b63db80d9231"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.131797 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7855134b-9e53-43a5-ac30-b63db80d9231-inventory" (OuterVolumeSpecName: "inventory") pod "7855134b-9e53-43a5-ac30-b63db80d9231" (UID: "7855134b-9e53-43a5-ac30-b63db80d9231"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.185196 5016 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7855134b-9e53-43a5-ac30-b63db80d9231-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.185246 5016 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7855134b-9e53-43a5-ac30-b63db80d9231-inventory\") on node \"crc\" DevicePath \"\""
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.185257 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q7lw\" (UniqueName: \"kubernetes.io/projected/7855134b-9e53-43a5-ac30-b63db80d9231-kube-api-access-9q7lw\") on node \"crc\" DevicePath \"\""
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.495234 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2bt88" event={"ID":"7855134b-9e53-43a5-ac30-b63db80d9231","Type":"ContainerDied","Data":"b8f9517922a45f91f230bf16905c8aaca1a27486d1a739001038a7dfc19a89f6"}
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.495278 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8f9517922a45f91f230bf16905c8aaca1a27486d1a739001038a7dfc19a89f6"
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.495334 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2bt88"
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.576079 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq"]
Oct 11 08:06:11 crc kubenswrapper[5016]: E1011 08:06:11.576556 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7855134b-9e53-43a5-ac30-b63db80d9231" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.576579 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="7855134b-9e53-43a5-ac30-b63db80d9231" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.576819 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="7855134b-9e53-43a5-ac30-b63db80d9231" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.577561 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq"
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.584679 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq"]
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.584887 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.584984 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.585008 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l8l9k"
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.585098 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.693088 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1af6861b-140d-4431-98e7-c47b7d4c9a3d-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq\" (UID: \"1af6861b-140d-4431-98e7-c47b7d4c9a3d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq"
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.693293 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1af6861b-140d-4431-98e7-c47b7d4c9a3d-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq\" (UID: \"1af6861b-140d-4431-98e7-c47b7d4c9a3d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq"
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.693396 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mj98\" (UniqueName: \"kubernetes.io/projected/1af6861b-140d-4431-98e7-c47b7d4c9a3d-kube-api-access-4mj98\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq\" (UID: \"1af6861b-140d-4431-98e7-c47b7d4c9a3d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq"
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.795151 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1af6861b-140d-4431-98e7-c47b7d4c9a3d-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq\" (UID: \"1af6861b-140d-4431-98e7-c47b7d4c9a3d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq"
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.795709 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1af6861b-140d-4431-98e7-c47b7d4c9a3d-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq\" (UID: \"1af6861b-140d-4431-98e7-c47b7d4c9a3d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq"
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.795932 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mj98\" (UniqueName: \"kubernetes.io/projected/1af6861b-140d-4431-98e7-c47b7d4c9a3d-kube-api-access-4mj98\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq\" (UID: \"1af6861b-140d-4431-98e7-c47b7d4c9a3d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq"
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.801975 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1af6861b-140d-4431-98e7-c47b7d4c9a3d-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq\" (UID: \"1af6861b-140d-4431-98e7-c47b7d4c9a3d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq"
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.802091 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1af6861b-140d-4431-98e7-c47b7d4c9a3d-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq\" (UID: \"1af6861b-140d-4431-98e7-c47b7d4c9a3d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq"
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.833272 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mj98\" (UniqueName: \"kubernetes.io/projected/1af6861b-140d-4431-98e7-c47b7d4c9a3d-kube-api-access-4mj98\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq\" (UID: \"1af6861b-140d-4431-98e7-c47b7d4c9a3d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq"
Oct 11 08:06:11 crc kubenswrapper[5016]: I1011 08:06:11.906668 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq"
Oct 11 08:06:12 crc kubenswrapper[5016]: I1011 08:06:12.462266 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq"]
Oct 11 08:06:12 crc kubenswrapper[5016]: I1011 08:06:12.466117 5016 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 11 08:06:12 crc kubenswrapper[5016]: I1011 08:06:12.504082 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq" event={"ID":"1af6861b-140d-4431-98e7-c47b7d4c9a3d","Type":"ContainerStarted","Data":"75637d63fb844562bf5d2ce6eb4bbdbdff039e981b87db0ba069384e0e8db292"}
Oct 11 08:06:13 crc kubenswrapper[5016]: I1011 08:06:13.514438 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq" event={"ID":"1af6861b-140d-4431-98e7-c47b7d4c9a3d","Type":"ContainerStarted","Data":"06db77aa2e463ea5df94c514d3ff0f0090ccbf2ece9bae922c0a46f9b1fe730e"}
Oct 11 08:06:13 crc kubenswrapper[5016]: I1011 08:06:13.534861 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq" podStartSLOduration=2.109284035 podStartE2EDuration="2.534840366s" podCreationTimestamp="2025-10-11 08:06:11 +0000 UTC" firstStartedPulling="2025-10-11 08:06:12.465938732 +0000 UTC m=+1560.366394678" lastFinishedPulling="2025-10-11 08:06:12.891495063 +0000 UTC m=+1560.791951009" observedRunningTime="2025-10-11 08:06:13.533600624 +0000 UTC m=+1561.434056570" watchObservedRunningTime="2025-10-11 08:06:13.534840366 +0000 UTC m=+1561.435296322"
Oct 11 08:06:17 crc kubenswrapper[5016]: I1011 08:06:17.555299 5016 generic.go:334] "Generic (PLEG): container finished" podID="1af6861b-140d-4431-98e7-c47b7d4c9a3d" containerID="06db77aa2e463ea5df94c514d3ff0f0090ccbf2ece9bae922c0a46f9b1fe730e" exitCode=0
Oct 11 08:06:17 crc kubenswrapper[5016]: I1011 08:06:17.555385 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq" event={"ID":"1af6861b-140d-4431-98e7-c47b7d4c9a3d","Type":"ContainerDied","Data":"06db77aa2e463ea5df94c514d3ff0f0090ccbf2ece9bae922c0a46f9b1fe730e"}
Oct 11 08:06:18 crc kubenswrapper[5016]: I1011 08:06:18.134479 5016 scope.go:117] "RemoveContainer" containerID="10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9"
Oct 11 08:06:18 crc kubenswrapper[5016]: E1011 08:06:18.135393 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.057158 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq"
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.134373 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1af6861b-140d-4431-98e7-c47b7d4c9a3d-ssh-key\") pod \"1af6861b-140d-4431-98e7-c47b7d4c9a3d\" (UID: \"1af6861b-140d-4431-98e7-c47b7d4c9a3d\") "
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.134706 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mj98\" (UniqueName: \"kubernetes.io/projected/1af6861b-140d-4431-98e7-c47b7d4c9a3d-kube-api-access-4mj98\") pod \"1af6861b-140d-4431-98e7-c47b7d4c9a3d\" (UID: \"1af6861b-140d-4431-98e7-c47b7d4c9a3d\") "
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.134781 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1af6861b-140d-4431-98e7-c47b7d4c9a3d-inventory\") pod \"1af6861b-140d-4431-98e7-c47b7d4c9a3d\" (UID: \"1af6861b-140d-4431-98e7-c47b7d4c9a3d\") "
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.141649 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1af6861b-140d-4431-98e7-c47b7d4c9a3d-kube-api-access-4mj98" (OuterVolumeSpecName: "kube-api-access-4mj98") pod "1af6861b-140d-4431-98e7-c47b7d4c9a3d" (UID: "1af6861b-140d-4431-98e7-c47b7d4c9a3d"). InnerVolumeSpecName "kube-api-access-4mj98". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.168642 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1af6861b-140d-4431-98e7-c47b7d4c9a3d-inventory" (OuterVolumeSpecName: "inventory") pod "1af6861b-140d-4431-98e7-c47b7d4c9a3d" (UID: "1af6861b-140d-4431-98e7-c47b7d4c9a3d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.189781 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1af6861b-140d-4431-98e7-c47b7d4c9a3d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1af6861b-140d-4431-98e7-c47b7d4c9a3d" (UID: "1af6861b-140d-4431-98e7-c47b7d4c9a3d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.238305 5016 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1af6861b-140d-4431-98e7-c47b7d4c9a3d-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.238355 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mj98\" (UniqueName: \"kubernetes.io/projected/1af6861b-140d-4431-98e7-c47b7d4c9a3d-kube-api-access-4mj98\") on node \"crc\" DevicePath \"\""
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.238376 5016 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1af6861b-140d-4431-98e7-c47b7d4c9a3d-inventory\") on node \"crc\" DevicePath \"\""
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.580394 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq" event={"ID":"1af6861b-140d-4431-98e7-c47b7d4c9a3d","Type":"ContainerDied","Data":"75637d63fb844562bf5d2ce6eb4bbdbdff039e981b87db0ba069384e0e8db292"}
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.580451 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq"
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.580460 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75637d63fb844562bf5d2ce6eb4bbdbdff039e981b87db0ba069384e0e8db292"
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.663017 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq"]
Oct 11 08:06:19 crc kubenswrapper[5016]: E1011 08:06:19.663592 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1af6861b-140d-4431-98e7-c47b7d4c9a3d" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.663628 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="1af6861b-140d-4431-98e7-c47b7d4c9a3d" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.663995 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="1af6861b-140d-4431-98e7-c47b7d4c9a3d" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.664941 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq"
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.669927 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.669927 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.669927 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.670488 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l8l9k"
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.671997 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq"]
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.747366 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw4q5\" (UniqueName: \"kubernetes.io/projected/9136e11e-b30a-4619-82aa-fac539c76b6f-kube-api-access-bw4q5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq\" (UID: \"9136e11e-b30a-4619-82aa-fac539c76b6f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq"
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.747678 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9136e11e-b30a-4619-82aa-fac539c76b6f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq\" (UID: \"9136e11e-b30a-4619-82aa-fac539c76b6f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq"
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.747797 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9136e11e-b30a-4619-82aa-fac539c76b6f-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq\" (UID: \"9136e11e-b30a-4619-82aa-fac539c76b6f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq"
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.849836 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9136e11e-b30a-4619-82aa-fac539c76b6f-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq\" (UID: \"9136e11e-b30a-4619-82aa-fac539c76b6f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq"
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.850376 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw4q5\" (UniqueName: \"kubernetes.io/projected/9136e11e-b30a-4619-82aa-fac539c76b6f-kube-api-access-bw4q5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq\" (UID: \"9136e11e-b30a-4619-82aa-fac539c76b6f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq"
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.850878 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9136e11e-b30a-4619-82aa-fac539c76b6f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq\" (UID: \"9136e11e-b30a-4619-82aa-fac539c76b6f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq"
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.855103 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9136e11e-b30a-4619-82aa-fac539c76b6f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq\" (UID: \"9136e11e-b30a-4619-82aa-fac539c76b6f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq"
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.855376 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9136e11e-b30a-4619-82aa-fac539c76b6f-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq\" (UID: \"9136e11e-b30a-4619-82aa-fac539c76b6f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq"
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.876670 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw4q5\" (UniqueName: \"kubernetes.io/projected/9136e11e-b30a-4619-82aa-fac539c76b6f-kube-api-access-bw4q5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq\" (UID: \"9136e11e-b30a-4619-82aa-fac539c76b6f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq"
Oct 11 08:06:19 crc kubenswrapper[5016]: I1011 08:06:19.985138 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq"
Oct 11 08:06:20 crc kubenswrapper[5016]: I1011 08:06:20.558474 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq"]
Oct 11 08:06:20 crc kubenswrapper[5016]: I1011 08:06:20.588541 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq" event={"ID":"9136e11e-b30a-4619-82aa-fac539c76b6f","Type":"ContainerStarted","Data":"134df8ef896f9dcd23efcd543b40f73cb6e86f264ff41e176499a31e92668363"}
Oct 11 08:06:21 crc kubenswrapper[5016]: I1011 08:06:21.604368 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq" event={"ID":"9136e11e-b30a-4619-82aa-fac539c76b6f","Type":"ContainerStarted","Data":"813956b703d2027fd7a6771cf0819631bf908105d43210daafb51c1b692ef1fa"}
Oct 11 08:06:21 crc kubenswrapper[5016]: I1011 08:06:21.624475 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq" podStartSLOduration=2.207684769 podStartE2EDuration="2.624432445s" podCreationTimestamp="2025-10-11 08:06:19 +0000 UTC" firstStartedPulling="2025-10-11 08:06:20.563839411 +0000 UTC m=+1568.464295367" lastFinishedPulling="2025-10-11 08:06:20.980587057 +0000 UTC m=+1568.881043043" observedRunningTime="2025-10-11 08:06:21.624205919 +0000 UTC m=+1569.524661875" watchObservedRunningTime="2025-10-11 08:06:21.624432445 +0000 UTC m=+1569.524888401"
Oct 11 08:06:29 crc kubenswrapper[5016]: I1011 08:06:29.133567 5016 scope.go:117] "RemoveContainer" containerID="10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9"
Oct 11 08:06:29 crc kubenswrapper[5016]: E1011 08:06:29.134661 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:06:33 crc kubenswrapper[5016]: I1011 08:06:33.048049 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-rfhnk"]
Oct 11 08:06:33 crc kubenswrapper[5016]: I1011 08:06:33.064164 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-rfhnk"]
Oct 11 08:06:33 crc kubenswrapper[5016]: I1011 08:06:33.147596 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="492cebf0-6a35-4ce7-8c85-2298fd8ae390" path="/var/lib/kubelet/pods/492cebf0-6a35-4ce7-8c85-2298fd8ae390/volumes"
Oct 11 08:06:37 crc kubenswrapper[5016]: I1011 08:06:37.910444 5016 scope.go:117] "RemoveContainer" containerID="4c4e5b8562578503bd7b5f852f1b5cfe464a9ab52cb68cc3bc1518c7b647d721"
Oct 11 08:06:37 crc kubenswrapper[5016]: I1011 08:06:37.940891 5016 scope.go:117] "RemoveContainer" containerID="c0fc3d0a66d32bf0d78d52a2e85007d333a797aa0fec504d4d9603461f524393"
Oct 11 08:06:38 crc kubenswrapper[5016]: I1011 08:06:38.001971 5016 scope.go:117] "RemoveContainer" containerID="1283f78561336a5baea64fc5756d5200677eb384d3fbacfd5aad4fdec93d1f00"
Oct 11 08:06:38 crc kubenswrapper[5016]: I1011 08:06:38.037522 5016 scope.go:117] "RemoveContainer" containerID="1be38acd17148c7ae79f88913247b88f2de87f4b718f1524edc6861c556cbc9a"
Oct 11 08:06:38 crc kubenswrapper[5016]: I1011 08:06:38.071700 5016 scope.go:117] "RemoveContainer" containerID="896ef9b7444a5953067b8a8a09d44edce31f4219fb06b445ed88be3db8489e9c"
Oct 11 08:06:38 crc kubenswrapper[5016]: I1011 08:06:38.125519 5016 scope.go:117] "RemoveContainer" containerID="a886423cdd9bcd98de816e9f95a0d23846ee88f4c02c4e9fc7626c387bede0db"
Oct 11 08:06:38 crc kubenswrapper[5016]: I1011 08:06:38.147295 5016 scope.go:117] "RemoveContainer" containerID="966e584cbe44bbdf557a77e0bfabe996eb0038bb985a546ff199d2f564cc5db3"
Oct 11 08:06:38 crc kubenswrapper[5016]: I1011 08:06:38.169759 5016 scope.go:117] "RemoveContainer" containerID="50805f5ac2adb2446075235e3c85e2775c2ce3a94b2b04889f3295edebd66a41"
Oct 11 08:06:38 crc kubenswrapper[5016]: I1011 08:06:38.188698 5016 scope.go:117] "RemoveContainer" containerID="4e292029c9be340a8aa5bfb997745320e4060a28e6ade7a360225c2ba9aa8f75"
Oct 11 08:06:41 crc kubenswrapper[5016]: I1011 08:06:41.133146 5016 scope.go:117] "RemoveContainer" containerID="10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9"
Oct 11 08:06:41 crc kubenswrapper[5016]: E1011 08:06:41.133881 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:06:43 crc kubenswrapper[5016]: I1011 08:06:43.040944 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-nldfs"]
Oct 11 08:06:43 crc kubenswrapper[5016]: I1011 08:06:43.053934 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-nldfs"]
Oct 11 08:06:43 crc kubenswrapper[5016]: I1011 08:06:43.148290 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="823dcaff-824b-4313-93d8-91967861aeca" path="/var/lib/kubelet/pods/823dcaff-824b-4313-93d8-91967861aeca/volumes"
Oct 11 08:06:47 crc kubenswrapper[5016]: I1011 08:06:47.044492 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-w78ms"]
Oct 11 08:06:47 crc kubenswrapper[5016]: I1011 08:06:47.056034 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-w78ms"]
Oct 11 08:06:47 crc kubenswrapper[5016]: I1011 08:06:47.146334 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b78f643d-d3c2-4cf1-8bb3-ee749e569273" path="/var/lib/kubelet/pods/b78f643d-d3c2-4cf1-8bb3-ee749e569273/volumes"
Oct 11 08:06:53 crc kubenswrapper[5016]: I1011 08:06:53.028361 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-92qrj"]
Oct 11 08:06:53 crc kubenswrapper[5016]: I1011 08:06:53.039092 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-92qrj"]
Oct 11 08:06:53 crc kubenswrapper[5016]: I1011 08:06:53.142738 5016 scope.go:117] "RemoveContainer" containerID="10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9"
Oct 11 08:06:53 crc kubenswrapper[5016]: E1011 08:06:53.142953 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:06:53 crc kubenswrapper[5016]: I1011 08:06:53.144104 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d426ddd3-5eae-4816-a141-32b614642d39" path="/var/lib/kubelet/pods/d426ddd3-5eae-4816-a141-32b614642d39/volumes"
Oct 11 08:06:54 crc kubenswrapper[5016]: I1011 08:06:54.025284 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-xqmfx"]
Oct 11 08:06:54 crc kubenswrapper[5016]: I1011 08:06:54.032104 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-xqmfx"]
Oct 11 08:06:55 crc kubenswrapper[5016]: I1011 08:06:55.147377 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ebaa0ef-dce1-4ff4-a51c-69435ca86699" path="/var/lib/kubelet/pods/8ebaa0ef-dce1-4ff4-a51c-69435ca86699/volumes"
Oct 11 08:07:04 crc kubenswrapper[5016]: I1011 08:07:04.133525 5016 scope.go:117] "RemoveContainer" containerID="10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9"
Oct 11 08:07:04 crc kubenswrapper[5016]: E1011 08:07:04.134429 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:07:16 crc kubenswrapper[5016]: I1011 08:07:16.133577 5016 scope.go:117] "RemoveContainer" containerID="10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9"
Oct 11 08:07:16 crc kubenswrapper[5016]: E1011 08:07:16.134424 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:07:20 crc kubenswrapper[5016]: I1011 08:07:20.206746 5016 generic.go:334] "Generic (PLEG): container finished" podID="9136e11e-b30a-4619-82aa-fac539c76b6f" containerID="813956b703d2027fd7a6771cf0819631bf908105d43210daafb51c1b692ef1fa" exitCode=2
Oct 11 08:07:20 crc kubenswrapper[5016]: I1011 08:07:20.206869 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq" event={"ID":"9136e11e-b30a-4619-82aa-fac539c76b6f","Type":"ContainerDied","Data":"813956b703d2027fd7a6771cf0819631bf908105d43210daafb51c1b692ef1fa"}
Oct 11 08:07:21 crc kubenswrapper[5016]: I1011 08:07:21.619102 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq"
Oct 11 08:07:21 crc kubenswrapper[5016]: I1011 08:07:21.689384 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9136e11e-b30a-4619-82aa-fac539c76b6f-ssh-key\") pod \"9136e11e-b30a-4619-82aa-fac539c76b6f\" (UID: \"9136e11e-b30a-4619-82aa-fac539c76b6f\") "
Oct 11 08:07:21 crc kubenswrapper[5016]: I1011 08:07:21.689631 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw4q5\" (UniqueName: \"kubernetes.io/projected/9136e11e-b30a-4619-82aa-fac539c76b6f-kube-api-access-bw4q5\") pod \"9136e11e-b30a-4619-82aa-fac539c76b6f\" (UID: \"9136e11e-b30a-4619-82aa-fac539c76b6f\") "
Oct 11 08:07:21 crc kubenswrapper[5016]: I1011 08:07:21.689918 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9136e11e-b30a-4619-82aa-fac539c76b6f-inventory\") pod \"9136e11e-b30a-4619-82aa-fac539c76b6f\" (UID: \"9136e11e-b30a-4619-82aa-fac539c76b6f\") "
Oct 11 08:07:21 crc kubenswrapper[5016]: I1011 08:07:21.697315 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9136e11e-b30a-4619-82aa-fac539c76b6f-kube-api-access-bw4q5" (OuterVolumeSpecName: "kube-api-access-bw4q5") pod "9136e11e-b30a-4619-82aa-fac539c76b6f" (UID: "9136e11e-b30a-4619-82aa-fac539c76b6f"). InnerVolumeSpecName "kube-api-access-bw4q5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 08:07:21 crc kubenswrapper[5016]: I1011 08:07:21.727806 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9136e11e-b30a-4619-82aa-fac539c76b6f-inventory" (OuterVolumeSpecName: "inventory") pod "9136e11e-b30a-4619-82aa-fac539c76b6f" (UID: "9136e11e-b30a-4619-82aa-fac539c76b6f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 08:07:21 crc kubenswrapper[5016]: I1011 08:07:21.729847 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9136e11e-b30a-4619-82aa-fac539c76b6f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9136e11e-b30a-4619-82aa-fac539c76b6f" (UID: "9136e11e-b30a-4619-82aa-fac539c76b6f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 08:07:21 crc kubenswrapper[5016]: I1011 08:07:21.792820 5016 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9136e11e-b30a-4619-82aa-fac539c76b6f-inventory\") on node \"crc\" DevicePath \"\""
Oct 11 08:07:21 crc kubenswrapper[5016]: I1011 08:07:21.793071 5016 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9136e11e-b30a-4619-82aa-fac539c76b6f-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 11 08:07:21 crc kubenswrapper[5016]: I1011 08:07:21.793143 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw4q5\" (UniqueName: \"kubernetes.io/projected/9136e11e-b30a-4619-82aa-fac539c76b6f-kube-api-access-bw4q5\") on node \"crc\" DevicePath \"\""
Oct 11 08:07:22 crc kubenswrapper[5016]: I1011 08:07:22.225869 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq" event={"ID":"9136e11e-b30a-4619-82aa-fac539c76b6f","Type":"ContainerDied","Data":"134df8ef896f9dcd23efcd543b40f73cb6e86f264ff41e176499a31e92668363"}
Oct 11 08:07:22 crc kubenswrapper[5016]: I1011 08:07:22.226193 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="134df8ef896f9dcd23efcd543b40f73cb6e86f264ff41e176499a31e92668363"
Oct 11 08:07:22 crc kubenswrapper[5016]: I1011 08:07:22.225910 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq"
Oct 11 08:07:27 crc kubenswrapper[5016]: I1011 08:07:27.133577 5016 scope.go:117] "RemoveContainer" containerID="10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9"
Oct 11 08:07:27 crc kubenswrapper[5016]: E1011 08:07:27.134176 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:07:30 crc kubenswrapper[5016]: I1011 08:07:30.039624 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j"]
Oct 11 08:07:30 crc kubenswrapper[5016]: E1011 08:07:30.040486 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9136e11e-b30a-4619-82aa-fac539c76b6f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Oct 11 08:07:30 crc kubenswrapper[5016]: I1011 08:07:30.040526 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="9136e11e-b30a-4619-82aa-fac539c76b6f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Oct 11 08:07:30 crc kubenswrapper[5016]: I1011 08:07:30.040885 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="9136e11e-b30a-4619-82aa-fac539c76b6f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Oct 11 08:07:30 crc kubenswrapper[5016]: I1011 08:07:30.043339 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j"
Oct 11 08:07:30 crc kubenswrapper[5016]: I1011 08:07:30.046061 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Oct 11 08:07:30 crc kubenswrapper[5016]: I1011 08:07:30.046305 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l8l9k"
Oct 11 08:07:30 crc kubenswrapper[5016]: I1011 08:07:30.046441 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Oct 11 08:07:30 crc kubenswrapper[5016]: I1011 08:07:30.046733 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 11 08:07:30 crc kubenswrapper[5016]: I1011 08:07:30.062080 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j"]
Oct 11 08:07:30 crc kubenswrapper[5016]: I1011 08:07:30.158502 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpgsw\" (UniqueName: \"kubernetes.io/projected/3ed8ab18-67d4-43ad-8722-2add05f17fa6-kube-api-access-gpgsw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j\" (UID: \"3ed8ab18-67d4-43ad-8722-2add05f17fa6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j"
Oct 11 08:07:30 crc kubenswrapper[5016]: I1011 08:07:30.158819 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ed8ab18-67d4-43ad-8722-2add05f17fa6-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j\" (UID: \"3ed8ab18-67d4-43ad-8722-2add05f17fa6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j"
Oct 11 08:07:30 crc kubenswrapper[5016]: I1011 08:07:30.159005 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3ed8ab18-67d4-43ad-8722-2add05f17fa6-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j\" (UID: \"3ed8ab18-67d4-43ad-8722-2add05f17fa6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j"
Oct 11 08:07:30 crc kubenswrapper[5016]: I1011 08:07:30.260510 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3ed8ab18-67d4-43ad-8722-2add05f17fa6-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j\" (UID: \"3ed8ab18-67d4-43ad-8722-2add05f17fa6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j"
Oct 11 08:07:30 crc kubenswrapper[5016]: I1011 08:07:30.260596 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpgsw\" (UniqueName: \"kubernetes.io/projected/3ed8ab18-67d4-43ad-8722-2add05f17fa6-kube-api-access-gpgsw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j\" (UID: \"3ed8ab18-67d4-43ad-8722-2add05f17fa6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j"
Oct 11 08:07:30 crc kubenswrapper[5016]: I1011 08:07:30.260717 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ed8ab18-67d4-43ad-8722-2add05f17fa6-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j\" (UID: \"3ed8ab18-67d4-43ad-8722-2add05f17fa6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j"
Oct 11 08:07:30 crc kubenswrapper[5016]: I1011 08:07:30.270957 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3ed8ab18-67d4-43ad-8722-2add05f17fa6-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j\" (UID: \"3ed8ab18-67d4-43ad-8722-2add05f17fa6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j"
Oct 11 08:07:30 crc kubenswrapper[5016]: I1011 08:07:30.272555 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ed8ab18-67d4-43ad-8722-2add05f17fa6-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j\" (UID: \"3ed8ab18-67d4-43ad-8722-2add05f17fa6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j"
Oct 11 08:07:30 crc kubenswrapper[5016]: I1011 08:07:30.282313 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpgsw\" (UniqueName: \"kubernetes.io/projected/3ed8ab18-67d4-43ad-8722-2add05f17fa6-kube-api-access-gpgsw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j\" (UID: \"3ed8ab18-67d4-43ad-8722-2add05f17fa6\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j"
Oct 11 08:07:30 crc kubenswrapper[5016]: I1011 08:07:30.369158 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j"
Oct 11 08:07:30 crc kubenswrapper[5016]: I1011 08:07:30.925448 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j"]
Oct 11 08:07:31 crc kubenswrapper[5016]: I1011 08:07:31.060625 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-88p4q"]
Oct 11 08:07:31 crc kubenswrapper[5016]: I1011 08:07:31.068044 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-5d7xn"]
Oct 11 08:07:31 crc kubenswrapper[5016]: I1011 08:07:31.075295 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-mcv5t"]
Oct 11 08:07:31 crc kubenswrapper[5016]: I1011 08:07:31.084023 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-88p4q"]
Oct 11 08:07:31 crc kubenswrapper[5016]: I1011 08:07:31.090724 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-5d7xn"]
Oct 11 08:07:31 crc kubenswrapper[5016]: I1011 08:07:31.097037 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-mcv5t"]
Oct 11 08:07:31 crc kubenswrapper[5016]: I1011 08:07:31.144234 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fcfe522-4dc0-41a6-b29d-75f00142585e" path="/var/lib/kubelet/pods/4fcfe522-4dc0-41a6-b29d-75f00142585e/volumes"
Oct 11 08:07:31 crc kubenswrapper[5016]: I1011 08:07:31.144816 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c70d5c8e-2545-45f9-ba1a-f4f1755f3729" path="/var/lib/kubelet/pods/c70d5c8e-2545-45f9-ba1a-f4f1755f3729/volumes"
Oct 11 08:07:31 crc kubenswrapper[5016]: I1011 08:07:31.145335 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc4a31f9-083f-49bf-866a-b6b970910e4d" path="/var/lib/kubelet/pods/fc4a31f9-083f-49bf-866a-b6b970910e4d/volumes"
Oct 11 08:07:31 crc kubenswrapper[5016]: I1011 08:07:31.311135 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j" event={"ID":"3ed8ab18-67d4-43ad-8722-2add05f17fa6","Type":"ContainerStarted","Data":"b308fdeaf4f23231970787912b61acf678a0a5bb95df0f2c94e277cdedd9636f"}
Oct 11 08:07:32 crc kubenswrapper[5016]: I1011 08:07:32.324865 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j" event={"ID":"3ed8ab18-67d4-43ad-8722-2add05f17fa6","Type":"ContainerStarted","Data":"e7ff25db7475fd57bfd7766cee560adcac96e8ee824233ad31a7f7da81a5d6b4"}
Oct 11 08:07:32 crc kubenswrapper[5016]: I1011 08:07:32.341375 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j" podStartSLOduration=1.726127433 podStartE2EDuration="2.34135104s" podCreationTimestamp="2025-10-11 08:07:30 +0000 UTC" firstStartedPulling="2025-10-11 08:07:30.924821744 +0000 UTC m=+1638.825277690" lastFinishedPulling="2025-10-11 08:07:31.540045361 +0000 UTC m=+1639.440501297" observedRunningTime="2025-10-11 08:07:32.337552688 +0000 UTC m=+1640.238008644" watchObservedRunningTime="2025-10-11 08:07:32.34135104 +0000 UTC m=+1640.241807006"
Oct 11 08:07:37 crc kubenswrapper[5016]: I1011 08:07:37.038612 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-6797-account-create-5xbpq"]
Oct 11 08:07:37 crc kubenswrapper[5016]: I1011 08:07:37.052585 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-6797-account-create-5xbpq"]
Oct 11 08:07:37 crc kubenswrapper[5016]: I1011 08:07:37.146122 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f75907a7-0421-4c6e-8cf9-d196d3c8c0e6" path="/var/lib/kubelet/pods/f75907a7-0421-4c6e-8cf9-d196d3c8c0e6/volumes"
Oct 11 08:07:38 crc kubenswrapper[5016]: I1011 08:07:38.049098 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-3e52-account-create-5w9qv"]
Oct 11 08:07:38 crc kubenswrapper[5016]: I1011 08:07:38.059214 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-3e52-account-create-5w9qv"]
Oct 11 08:07:38 crc kubenswrapper[5016]: I1011 08:07:38.133738 5016 scope.go:117] "RemoveContainer" containerID="10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9"
Oct 11 08:07:38 crc kubenswrapper[5016]: E1011 08:07:38.133985 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:07:38 crc kubenswrapper[5016]: I1011 08:07:38.438152 5016 scope.go:117] "RemoveContainer" containerID="150ea039023a37f879adb30746da7e608610712e45a95a0f41dc819e296a338b"
Oct 11 08:07:38 crc kubenswrapper[5016]: I1011 08:07:38.459976 5016 scope.go:117] "RemoveContainer" containerID="d64ba490de862b1225ef273b9eb6996c4d19b9b0ff4e9fa0247d7f8bf6064bef"
Oct 11 08:07:38 crc kubenswrapper[5016]: I1011 08:07:38.499778 5016 scope.go:117] "RemoveContainer" containerID="c906c1fa96f4259b063c1ce09ea48f3b96193304c561ef6df385533f58ab16dc"
Oct 11 08:07:38 crc kubenswrapper[5016]: I1011 08:07:38.536725 5016 scope.go:117] "RemoveContainer" containerID="1dd91bd59bc994e1bc4fb307fe0ff000420760431994b4742de58b81b151ecd6"
Oct 11 08:07:38 crc kubenswrapper[5016]: I1011 08:07:38.584085 5016 scope.go:117] "RemoveContainer" containerID="5b9677c74d7b7c186a7b06becf5a127e0868546ab5404944e5f5ead580de00be"
Oct 11 08:07:38 crc kubenswrapper[5016]: I1011 08:07:38.643314 5016 scope.go:117] "RemoveContainer" containerID="1e204051cd4f6c35d0f123c72f0c8b312ba7a9350b78d47e0dd1cee367fc8615"
Oct 11 08:07:38 crc kubenswrapper[5016]: I1011 08:07:38.700248 5016 scope.go:117] "RemoveContainer" containerID="87be5ce31d3e28f5815f05515ba4b51ae80935358f78bc6fb1e44c922f4c4073"
Oct 11 08:07:38 crc kubenswrapper[5016]: I1011 08:07:38.725066 5016 scope.go:117] "RemoveContainer" containerID="d33bcf416d8a2b76afec069d36f43af003a1db2d506acc376e0faa41d66dc44a"
Oct 11 08:07:39 crc kubenswrapper[5016]: I1011 08:07:39.146038 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a83ef869-3d57-4f37-aba9-d279183b0413" path="/var/lib/kubelet/pods/a83ef869-3d57-4f37-aba9-d279183b0413/volumes"
Oct 11 08:07:51 crc kubenswrapper[5016]: I1011 08:07:51.030243 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-5844-account-create-ln2pj"]
Oct 11 08:07:51 crc kubenswrapper[5016]: I1011 08:07:51.036793 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-5844-account-create-ln2pj"]
Oct 11 08:07:51 crc kubenswrapper[5016]: I1011 08:07:51.175608 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4a1cba4-e094-4311-94b0-f18b957124f0" path="/var/lib/kubelet/pods/c4a1cba4-e094-4311-94b0-f18b957124f0/volumes"
Oct 11 08:07:53 crc kubenswrapper[5016]: I1011 08:07:53.144829 5016 scope.go:117] "RemoveContainer" containerID="10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9"
Oct 11 08:07:53 crc kubenswrapper[5016]: E1011 08:07:53.145582 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:08:05 crc kubenswrapper[5016]: I1011 08:08:05.058130 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2mzxx"]
Oct 11 08:08:05 crc kubenswrapper[5016]: I1011 08:08:05.071797 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-2mzxx"]
Oct 11 08:08:05 crc kubenswrapper[5016]: I1011 08:08:05.146346 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd0b12ea-b33a-4421-bdd6-3bbbb2fca659" path="/var/lib/kubelet/pods/cd0b12ea-b33a-4421-bdd6-3bbbb2fca659/volumes"
Oct 11 08:08:07 crc kubenswrapper[5016]: I1011 08:08:07.133435 5016 scope.go:117] "RemoveContainer" containerID="10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9"
Oct 11 08:08:07 crc kubenswrapper[5016]: E1011 08:08:07.134080 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:08:19 crc kubenswrapper[5016]: I1011 08:08:19.133461 5016 scope.go:117] "RemoveContainer" containerID="10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9"
Oct 11 08:08:19 crc kubenswrapper[5016]: E1011 08:08:19.134299 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:08:22 crc kubenswrapper[5016]: I1011 08:08:22.802907 5016 generic.go:334] "Generic (PLEG): container finished" podID="3ed8ab18-67d4-43ad-8722-2add05f17fa6" containerID="e7ff25db7475fd57bfd7766cee560adcac96e8ee824233ad31a7f7da81a5d6b4" exitCode=0
Oct 11 08:08:22 crc kubenswrapper[5016]: I1011 08:08:22.802993 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j" event={"ID":"3ed8ab18-67d4-43ad-8722-2add05f17fa6","Type":"ContainerDied","Data":"e7ff25db7475fd57bfd7766cee560adcac96e8ee824233ad31a7f7da81a5d6b4"}
Oct 11 08:08:23 crc kubenswrapper[5016]: I1011 08:08:23.029062 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-s2g6b"]
Oct 11 08:08:23 crc kubenswrapper[5016]: I1011 08:08:23.038058 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-s2g6b"]
Oct 11 08:08:23 crc kubenswrapper[5016]: I1011 08:08:23.153639 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59bbbd97-5192-4abe-bbe4-2a532e02a4e3" path="/var/lib/kubelet/pods/59bbbd97-5192-4abe-bbe4-2a532e02a4e3/volumes"
Oct 11 08:08:24 crc kubenswrapper[5016]: I1011 08:08:24.221588 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j"
Oct 11 08:08:24 crc kubenswrapper[5016]: I1011 08:08:24.319933 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpgsw\" (UniqueName: \"kubernetes.io/projected/3ed8ab18-67d4-43ad-8722-2add05f17fa6-kube-api-access-gpgsw\") pod \"3ed8ab18-67d4-43ad-8722-2add05f17fa6\" (UID: \"3ed8ab18-67d4-43ad-8722-2add05f17fa6\") "
Oct 11 08:08:24 crc kubenswrapper[5016]: I1011 08:08:24.320126 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3ed8ab18-67d4-43ad-8722-2add05f17fa6-ssh-key\") pod \"3ed8ab18-67d4-43ad-8722-2add05f17fa6\" (UID: \"3ed8ab18-67d4-43ad-8722-2add05f17fa6\") "
Oct 11 08:08:24 crc kubenswrapper[5016]: I1011 08:08:24.320190 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ed8ab18-67d4-43ad-8722-2add05f17fa6-inventory\") pod \"3ed8ab18-67d4-43ad-8722-2add05f17fa6\" (UID: \"3ed8ab18-67d4-43ad-8722-2add05f17fa6\") "
Oct 11 08:08:24 crc kubenswrapper[5016]: I1011 08:08:24.325342 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ed8ab18-67d4-43ad-8722-2add05f17fa6-kube-api-access-gpgsw" (OuterVolumeSpecName: "kube-api-access-gpgsw") pod "3ed8ab18-67d4-43ad-8722-2add05f17fa6" (UID: "3ed8ab18-67d4-43ad-8722-2add05f17fa6"). InnerVolumeSpecName "kube-api-access-gpgsw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 08:08:24 crc kubenswrapper[5016]: I1011 08:08:24.346585 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ed8ab18-67d4-43ad-8722-2add05f17fa6-inventory" (OuterVolumeSpecName: "inventory") pod "3ed8ab18-67d4-43ad-8722-2add05f17fa6" (UID: "3ed8ab18-67d4-43ad-8722-2add05f17fa6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 08:08:24 crc kubenswrapper[5016]: I1011 08:08:24.350035 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ed8ab18-67d4-43ad-8722-2add05f17fa6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3ed8ab18-67d4-43ad-8722-2add05f17fa6" (UID: "3ed8ab18-67d4-43ad-8722-2add05f17fa6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 08:08:24 crc kubenswrapper[5016]: I1011 08:08:24.422897 5016 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3ed8ab18-67d4-43ad-8722-2add05f17fa6-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 11 08:08:24 crc kubenswrapper[5016]: I1011 08:08:24.422991 5016 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ed8ab18-67d4-43ad-8722-2add05f17fa6-inventory\") on node \"crc\" DevicePath \"\""
Oct 11 08:08:24 crc kubenswrapper[5016]: I1011 08:08:24.423014 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpgsw\" (UniqueName: \"kubernetes.io/projected/3ed8ab18-67d4-43ad-8722-2add05f17fa6-kube-api-access-gpgsw\") on node \"crc\" DevicePath \"\""
Oct 11 08:08:24 crc kubenswrapper[5016]: I1011 08:08:24.824462 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j" event={"ID":"3ed8ab18-67d4-43ad-8722-2add05f17fa6","Type":"ContainerDied","Data":"b308fdeaf4f23231970787912b61acf678a0a5bb95df0f2c94e277cdedd9636f"}
Oct 11 08:08:24 crc kubenswrapper[5016]: I1011 08:08:24.824515 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b308fdeaf4f23231970787912b61acf678a0a5bb95df0f2c94e277cdedd9636f"
Oct 11 08:08:24 crc kubenswrapper[5016]: I1011 08:08:24.824594 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j"
Oct 11 08:08:24 crc kubenswrapper[5016]: I1011 08:08:24.917570 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-f4p99"]
Oct 11 08:08:24 crc kubenswrapper[5016]: E1011 08:08:24.918144 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ed8ab18-67d4-43ad-8722-2add05f17fa6" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Oct 11 08:08:24 crc kubenswrapper[5016]: I1011 08:08:24.918177 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ed8ab18-67d4-43ad-8722-2add05f17fa6" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Oct 11 08:08:24 crc kubenswrapper[5016]: I1011 08:08:24.918524 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ed8ab18-67d4-43ad-8722-2add05f17fa6" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Oct 11 08:08:24 crc kubenswrapper[5016]: I1011 08:08:24.919507 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-f4p99"
Oct 11 08:08:24 crc kubenswrapper[5016]: I1011 08:08:24.922240 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Oct 11 08:08:24 crc kubenswrapper[5016]: I1011 08:08:24.922775 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 11 08:08:24 crc kubenswrapper[5016]: I1011 08:08:24.923574 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Oct 11 08:08:24 crc kubenswrapper[5016]: I1011 08:08:24.925828 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l8l9k"
Oct 11 08:08:24 crc kubenswrapper[5016]: I1011 08:08:24.935397 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-f4p99"]
Oct 11 08:08:25 crc kubenswrapper[5016]: I1011 08:08:25.033596 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8048adae-f929-4bd1-9c7e-9c0c5172260f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-f4p99\" (UID: \"8048adae-f929-4bd1-9c7e-9c0c5172260f\") " pod="openstack/ssh-known-hosts-edpm-deployment-f4p99"
Oct 11 08:08:25 crc kubenswrapper[5016]: I1011 08:08:25.034029 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8048adae-f929-4bd1-9c7e-9c0c5172260f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-f4p99\" (UID: \"8048adae-f929-4bd1-9c7e-9c0c5172260f\") " pod="openstack/ssh-known-hosts-edpm-deployment-f4p99"
Oct 11 08:08:25 crc kubenswrapper[5016]: I1011 08:08:25.034075 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzr9n\" (UniqueName: \"kubernetes.io/projected/8048adae-f929-4bd1-9c7e-9c0c5172260f-kube-api-access-kzr9n\") pod \"ssh-known-hosts-edpm-deployment-f4p99\" (UID: \"8048adae-f929-4bd1-9c7e-9c0c5172260f\") " pod="openstack/ssh-known-hosts-edpm-deployment-f4p99"
Oct 11 08:08:25 crc kubenswrapper[5016]: I1011 08:08:25.034253 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9jnrs"]
Oct 11 08:08:25 crc kubenswrapper[5016]: I1011 08:08:25.040757 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9jnrs"]
Oct 11 08:08:25 crc kubenswrapper[5016]: I1011 08:08:25.135639 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8048adae-f929-4bd1-9c7e-9c0c5172260f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-f4p99\" (UID: \"8048adae-f929-4bd1-9c7e-9c0c5172260f\") " pod="openstack/ssh-known-hosts-edpm-deployment-f4p99"
Oct 11 08:08:25 crc kubenswrapper[5016]: I1011 08:08:25.135779 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8048adae-f929-4bd1-9c7e-9c0c5172260f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-f4p99\" (UID: \"8048adae-f929-4bd1-9c7e-9c0c5172260f\") " pod="openstack/ssh-known-hosts-edpm-deployment-f4p99"
Oct 11 08:08:25 crc kubenswrapper[5016]: I1011 08:08:25.135839 5016 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"kube-api-access-kzr9n\" (UniqueName: \"kubernetes.io/projected/8048adae-f929-4bd1-9c7e-9c0c5172260f-kube-api-access-kzr9n\") pod \"ssh-known-hosts-edpm-deployment-f4p99\" (UID: \"8048adae-f929-4bd1-9c7e-9c0c5172260f\") " pod="openstack/ssh-known-hosts-edpm-deployment-f4p99" Oct 11 08:08:25 crc kubenswrapper[5016]: I1011 08:08:25.140207 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8048adae-f929-4bd1-9c7e-9c0c5172260f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-f4p99\" (UID: \"8048adae-f929-4bd1-9c7e-9c0c5172260f\") " pod="openstack/ssh-known-hosts-edpm-deployment-f4p99" Oct 11 08:08:25 crc kubenswrapper[5016]: I1011 08:08:25.147342 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8048adae-f929-4bd1-9c7e-9c0c5172260f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-f4p99\" (UID: \"8048adae-f929-4bd1-9c7e-9c0c5172260f\") " pod="openstack/ssh-known-hosts-edpm-deployment-f4p99" Oct 11 08:08:25 crc kubenswrapper[5016]: I1011 08:08:25.150010 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23b43c3a-0890-454c-b2dc-79c2c29d1c3e" path="/var/lib/kubelet/pods/23b43c3a-0890-454c-b2dc-79c2c29d1c3e/volumes" Oct 11 08:08:25 crc kubenswrapper[5016]: I1011 08:08:25.151595 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzr9n\" (UniqueName: \"kubernetes.io/projected/8048adae-f929-4bd1-9c7e-9c0c5172260f-kube-api-access-kzr9n\") pod \"ssh-known-hosts-edpm-deployment-f4p99\" (UID: \"8048adae-f929-4bd1-9c7e-9c0c5172260f\") " pod="openstack/ssh-known-hosts-edpm-deployment-f4p99" Oct 11 08:08:25 crc kubenswrapper[5016]: I1011 08:08:25.239908 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-f4p99" Oct 11 08:08:25 crc kubenswrapper[5016]: I1011 08:08:25.827697 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-f4p99"] Oct 11 08:08:26 crc kubenswrapper[5016]: I1011 08:08:26.071273 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tprgp"] Oct 11 08:08:26 crc kubenswrapper[5016]: I1011 08:08:26.073139 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tprgp" Oct 11 08:08:26 crc kubenswrapper[5016]: I1011 08:08:26.085168 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tprgp"] Oct 11 08:08:26 crc kubenswrapper[5016]: I1011 08:08:26.258084 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eefa958a-f8f5-43da-917c-c7b5b368c383-catalog-content\") pod \"community-operators-tprgp\" (UID: \"eefa958a-f8f5-43da-917c-c7b5b368c383\") " pod="openshift-marketplace/community-operators-tprgp" Oct 11 08:08:26 crc kubenswrapper[5016]: I1011 08:08:26.258607 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7pgd\" (UniqueName: \"kubernetes.io/projected/eefa958a-f8f5-43da-917c-c7b5b368c383-kube-api-access-d7pgd\") pod \"community-operators-tprgp\" (UID: \"eefa958a-f8f5-43da-917c-c7b5b368c383\") " pod="openshift-marketplace/community-operators-tprgp" Oct 11 08:08:26 crc kubenswrapper[5016]: I1011 08:08:26.258985 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eefa958a-f8f5-43da-917c-c7b5b368c383-utilities\") pod \"community-operators-tprgp\" (UID: \"eefa958a-f8f5-43da-917c-c7b5b368c383\") " pod="openshift-marketplace/community-operators-tprgp" Oct 11 08:08:26 crc kubenswrapper[5016]: I1011 08:08:26.361164 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7pgd\" (UniqueName: \"kubernetes.io/projected/eefa958a-f8f5-43da-917c-c7b5b368c383-kube-api-access-d7pgd\") pod \"community-operators-tprgp\" (UID: \"eefa958a-f8f5-43da-917c-c7b5b368c383\") " pod="openshift-marketplace/community-operators-tprgp" Oct 11 08:08:26 crc kubenswrapper[5016]: I1011 08:08:26.361287 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eefa958a-f8f5-43da-917c-c7b5b368c383-utilities\") pod \"community-operators-tprgp\" (UID: \"eefa958a-f8f5-43da-917c-c7b5b368c383\") " pod="openshift-marketplace/community-operators-tprgp" Oct 11 08:08:26 crc kubenswrapper[5016]: I1011 08:08:26.361355 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eefa958a-f8f5-43da-917c-c7b5b368c383-catalog-content\") pod \"community-operators-tprgp\" (UID: \"eefa958a-f8f5-43da-917c-c7b5b368c383\") " pod="openshift-marketplace/community-operators-tprgp" Oct 11 08:08:26 crc kubenswrapper[5016]: I1011 08:08:26.361816 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eefa958a-f8f5-43da-917c-c7b5b368c383-catalog-content\") pod \"community-operators-tprgp\" (UID: \"eefa958a-f8f5-43da-917c-c7b5b368c383\") " pod="openshift-marketplace/community-operators-tprgp" Oct 11 08:08:26 crc kubenswrapper[5016]: I1011 08:08:26.361900 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eefa958a-f8f5-43da-917c-c7b5b368c383-utilities\") pod \"community-operators-tprgp\" (UID: \"eefa958a-f8f5-43da-917c-c7b5b368c383\") " pod="openshift-marketplace/community-operators-tprgp" Oct 11 08:08:26 crc kubenswrapper[5016]: I1011 08:08:26.387596 5016 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-d7pgd\" (UniqueName: \"kubernetes.io/projected/eefa958a-f8f5-43da-917c-c7b5b368c383-kube-api-access-d7pgd\") pod \"community-operators-tprgp\" (UID: \"eefa958a-f8f5-43da-917c-c7b5b368c383\") " pod="openshift-marketplace/community-operators-tprgp" Oct 11 08:08:26 crc kubenswrapper[5016]: I1011 08:08:26.398957 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tprgp" Oct 11 08:08:26 crc kubenswrapper[5016]: I1011 08:08:26.844404 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-f4p99" event={"ID":"8048adae-f929-4bd1-9c7e-9c0c5172260f","Type":"ContainerStarted","Data":"89e736059eae7bd9b1c804e4fd2a78b1dc6e88255cdd107364b0e3270a475719"} Oct 11 08:08:26 crc kubenswrapper[5016]: I1011 08:08:26.844920 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-f4p99" event={"ID":"8048adae-f929-4bd1-9c7e-9c0c5172260f","Type":"ContainerStarted","Data":"445010f5cdd8b4f14812bb14923473fac63b3965dcb7192e6df058d000c6695b"} Oct 11 08:08:26 crc kubenswrapper[5016]: I1011 08:08:26.870247 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-f4p99" podStartSLOduration=2.478967695 podStartE2EDuration="2.870224421s" podCreationTimestamp="2025-10-11 08:08:24 +0000 UTC" firstStartedPulling="2025-10-11 08:08:25.837290958 +0000 UTC m=+1693.737746904" lastFinishedPulling="2025-10-11 08:08:26.228547674 +0000 UTC m=+1694.129003630" observedRunningTime="2025-10-11 08:08:26.868259419 +0000 UTC m=+1694.768715555" watchObservedRunningTime="2025-10-11 08:08:26.870224421 +0000 UTC m=+1694.770680377" Oct 11 08:08:26 crc kubenswrapper[5016]: I1011 08:08:26.913970 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tprgp"] Oct 11 08:08:26 crc kubenswrapper[5016]: W1011 08:08:26.920515 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeefa958a_f8f5_43da_917c_c7b5b368c383.slice/crio-c2f5b7ec63df75e621141c33976dd0f23d2c450da926dbc97714fcdcf6c4c979 WatchSource:0}: Error finding container c2f5b7ec63df75e621141c33976dd0f23d2c450da926dbc97714fcdcf6c4c979: Status 404 returned error can't find the container with id c2f5b7ec63df75e621141c33976dd0f23d2c450da926dbc97714fcdcf6c4c979 Oct 11 08:08:27 crc kubenswrapper[5016]: I1011 08:08:27.079731 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9b92t"] Oct 11 08:08:27 crc kubenswrapper[5016]: I1011 08:08:27.081432 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9b92t" Oct 11 08:08:27 crc kubenswrapper[5016]: I1011 08:08:27.089928 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9b92t"] Oct 11 08:08:27 crc kubenswrapper[5016]: I1011 08:08:27.181002 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b16ab860-a532-4a5d-81e0-ff86b74eb951-utilities\") pod \"redhat-marketplace-9b92t\" (UID: \"b16ab860-a532-4a5d-81e0-ff86b74eb951\") " pod="openshift-marketplace/redhat-marketplace-9b92t" Oct 11 08:08:27 crc kubenswrapper[5016]: I1011 08:08:27.181051 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b16ab860-a532-4a5d-81e0-ff86b74eb951-catalog-content\") pod \"redhat-marketplace-9b92t\" (UID: \"b16ab860-a532-4a5d-81e0-ff86b74eb951\") " pod="openshift-marketplace/redhat-marketplace-9b92t" Oct 11 08:08:27 crc kubenswrapper[5016]: I1011 08:08:27.181193 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5vcq\" (UniqueName: \"kubernetes.io/projected/b16ab860-a532-4a5d-81e0-ff86b74eb951-kube-api-access-x5vcq\") pod \"redhat-marketplace-9b92t\" (UID: \"b16ab860-a532-4a5d-81e0-ff86b74eb951\") " pod="openshift-marketplace/redhat-marketplace-9b92t" Oct 11 08:08:27 crc kubenswrapper[5016]: I1011 08:08:27.283304 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5vcq\" (UniqueName: \"kubernetes.io/projected/b16ab860-a532-4a5d-81e0-ff86b74eb951-kube-api-access-x5vcq\") pod \"redhat-marketplace-9b92t\" (UID: \"b16ab860-a532-4a5d-81e0-ff86b74eb951\") " pod="openshift-marketplace/redhat-marketplace-9b92t" Oct 11 08:08:27 crc kubenswrapper[5016]: I1011 08:08:27.283423 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b16ab860-a532-4a5d-81e0-ff86b74eb951-utilities\") pod \"redhat-marketplace-9b92t\" (UID: \"b16ab860-a532-4a5d-81e0-ff86b74eb951\") " pod="openshift-marketplace/redhat-marketplace-9b92t" Oct 11 08:08:27 crc kubenswrapper[5016]: I1011 08:08:27.283454 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b16ab860-a532-4a5d-81e0-ff86b74eb951-catalog-content\") pod \"redhat-marketplace-9b92t\" (UID: \"b16ab860-a532-4a5d-81e0-ff86b74eb951\") " pod="openshift-marketplace/redhat-marketplace-9b92t" Oct 11 08:08:27 crc kubenswrapper[5016]: I1011 08:08:27.284002 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b16ab860-a532-4a5d-81e0-ff86b74eb951-utilities\") pod \"redhat-marketplace-9b92t\" (UID: \"b16ab860-a532-4a5d-81e0-ff86b74eb951\") " pod="openshift-marketplace/redhat-marketplace-9b92t" Oct 11 08:08:27 crc kubenswrapper[5016]: I1011 08:08:27.284370 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b16ab860-a532-4a5d-81e0-ff86b74eb951-catalog-content\") pod \"redhat-marketplace-9b92t\" (UID: \"b16ab860-a532-4a5d-81e0-ff86b74eb951\") " pod="openshift-marketplace/redhat-marketplace-9b92t" Oct 11 08:08:27 crc kubenswrapper[5016]: I1011 08:08:27.306773 5016 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-x5vcq\" (UniqueName: \"kubernetes.io/projected/b16ab860-a532-4a5d-81e0-ff86b74eb951-kube-api-access-x5vcq\") pod \"redhat-marketplace-9b92t\" (UID: \"b16ab860-a532-4a5d-81e0-ff86b74eb951\") " pod="openshift-marketplace/redhat-marketplace-9b92t" Oct 11 08:08:27 crc kubenswrapper[5016]: I1011 08:08:27.446885 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9b92t" Oct 11 08:08:27 crc kubenswrapper[5016]: I1011 08:08:27.861680 5016 generic.go:334] "Generic (PLEG): container finished" podID="eefa958a-f8f5-43da-917c-c7b5b368c383" containerID="ec9a9db76a0b0e28d88d1c4da76658296278cc9bebbc133e806d2a21e19700b3" exitCode=0 Oct 11 08:08:27 crc kubenswrapper[5016]: I1011 08:08:27.863231 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tprgp" event={"ID":"eefa958a-f8f5-43da-917c-c7b5b368c383","Type":"ContainerDied","Data":"ec9a9db76a0b0e28d88d1c4da76658296278cc9bebbc133e806d2a21e19700b3"} Oct 11 08:08:27 crc kubenswrapper[5016]: I1011 08:08:27.863384 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tprgp" event={"ID":"eefa958a-f8f5-43da-917c-c7b5b368c383","Type":"ContainerStarted","Data":"c2f5b7ec63df75e621141c33976dd0f23d2c450da926dbc97714fcdcf6c4c979"} Oct 11 08:08:27 crc kubenswrapper[5016]: I1011 08:08:27.877186 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9b92t"] Oct 11 08:08:28 crc kubenswrapper[5016]: I1011 08:08:28.876094 5016 generic.go:334] "Generic (PLEG): container finished" podID="b16ab860-a532-4a5d-81e0-ff86b74eb951" containerID="e2b0b4636dea8548e9e7c983408162126b8d307c2b86a459ca84d06cc2c7ac92" exitCode=0 Oct 11 08:08:28 crc kubenswrapper[5016]: I1011 08:08:28.876157 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b92t" event={"ID":"b16ab860-a532-4a5d-81e0-ff86b74eb951","Type":"ContainerDied","Data":"e2b0b4636dea8548e9e7c983408162126b8d307c2b86a459ca84d06cc2c7ac92"} Oct 11 08:08:28 crc kubenswrapper[5016]: I1011 08:08:28.876199 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b92t" event={"ID":"b16ab860-a532-4a5d-81e0-ff86b74eb951","Type":"ContainerStarted","Data":"a06de1898e1cd260208b721fba8bd48aca7ea996f10073dc7d7575686e5e9a65"} Oct 11 08:08:29 crc kubenswrapper[5016]: I1011 08:08:29.886540 5016 generic.go:334] "Generic (PLEG): container finished" podID="eefa958a-f8f5-43da-917c-c7b5b368c383" containerID="ea2426f547ea5fbfefc519023cc83d34f071ebf3b6f6e29230cbe88e5cb994e1" exitCode=0 Oct 11 08:08:29 crc kubenswrapper[5016]: I1011 08:08:29.886610 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tprgp" event={"ID":"eefa958a-f8f5-43da-917c-c7b5b368c383","Type":"ContainerDied","Data":"ea2426f547ea5fbfefc519023cc83d34f071ebf3b6f6e29230cbe88e5cb994e1"} Oct 11 08:08:29 crc kubenswrapper[5016]: I1011 08:08:29.890314 5016 generic.go:334] "Generic (PLEG): container finished" podID="b16ab860-a532-4a5d-81e0-ff86b74eb951" containerID="5cc0405e93b13f7c8c00a3ced96cffbb857912db04d9dceb0877e502401617b2" exitCode=0 Oct 11 08:08:29 crc kubenswrapper[5016]: I1011 08:08:29.890359 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b92t" 
event={"ID":"b16ab860-a532-4a5d-81e0-ff86b74eb951","Type":"ContainerDied","Data":"5cc0405e93b13f7c8c00a3ced96cffbb857912db04d9dceb0877e502401617b2"} Oct 11 08:08:30 crc kubenswrapper[5016]: I1011 08:08:30.903628 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tprgp" event={"ID":"eefa958a-f8f5-43da-917c-c7b5b368c383","Type":"ContainerStarted","Data":"e7749788d2d4225133a03c229c0c6ff8d6bb15df39ede3d1d4dc1c81279bbef1"} Oct 11 08:08:30 crc kubenswrapper[5016]: I1011 08:08:30.906767 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b92t" event={"ID":"b16ab860-a532-4a5d-81e0-ff86b74eb951","Type":"ContainerStarted","Data":"158763ec1a6992bb2ab444375812ba3a85dad21871f832b51b70e54222d33204"} Oct 11 08:08:30 crc kubenswrapper[5016]: I1011 08:08:30.930723 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tprgp" podStartSLOduration=2.39515718 podStartE2EDuration="4.930700125s" podCreationTimestamp="2025-10-11 08:08:26 +0000 UTC" firstStartedPulling="2025-10-11 08:08:27.867463869 +0000 UTC m=+1695.767919815" lastFinishedPulling="2025-10-11 08:08:30.403006814 +0000 UTC m=+1698.303462760" observedRunningTime="2025-10-11 08:08:30.920537334 +0000 UTC m=+1698.820993280" watchObservedRunningTime="2025-10-11 08:08:30.930700125 +0000 UTC m=+1698.831156081" Oct 11 08:08:31 crc kubenswrapper[5016]: I1011 08:08:31.217506 5016 scope.go:117] "RemoveContainer" containerID="10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9" Oct 11 08:08:31 crc kubenswrapper[5016]: E1011 08:08:31.217715 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:08:34 crc kubenswrapper[5016]: I1011 08:08:34.945644 5016 generic.go:334] "Generic (PLEG): container finished" podID="8048adae-f929-4bd1-9c7e-9c0c5172260f" containerID="89e736059eae7bd9b1c804e4fd2a78b1dc6e88255cdd107364b0e3270a475719" exitCode=0 Oct 11 08:08:34 crc kubenswrapper[5016]: I1011 08:08:34.945939 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-f4p99" event={"ID":"8048adae-f929-4bd1-9c7e-9c0c5172260f","Type":"ContainerDied","Data":"89e736059eae7bd9b1c804e4fd2a78b1dc6e88255cdd107364b0e3270a475719"} Oct 11 08:08:34 crc kubenswrapper[5016]: I1011 08:08:34.972200 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9b92t" podStartSLOduration=6.567545849 podStartE2EDuration="7.972180443s" podCreationTimestamp="2025-10-11 08:08:27 +0000 UTC" firstStartedPulling="2025-10-11 08:08:28.880797878 +0000 UTC m=+1696.781253834" lastFinishedPulling="2025-10-11 08:08:30.285432482 +0000 UTC m=+1698.185888428" observedRunningTime="2025-10-11 08:08:30.950069483 +0000 UTC m=+1698.850525429" watchObservedRunningTime="2025-10-11 08:08:34.972180443 +0000 UTC m=+1702.872636409" Oct 11 08:08:36 crc kubenswrapper[5016]: I1011 08:08:36.399923 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tprgp" Oct 11 08:08:36 crc kubenswrapper[5016]: I1011 
08:08:36.400055 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tprgp" Oct 11 08:08:36 crc kubenswrapper[5016]: I1011 08:08:36.468940 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tprgp" Oct 11 08:08:36 crc kubenswrapper[5016]: I1011 08:08:36.501428 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-f4p99" Oct 11 08:08:36 crc kubenswrapper[5016]: I1011 08:08:36.621763 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8048adae-f929-4bd1-9c7e-9c0c5172260f-inventory-0\") pod \"8048adae-f929-4bd1-9c7e-9c0c5172260f\" (UID: \"8048adae-f929-4bd1-9c7e-9c0c5172260f\") " Oct 11 08:08:36 crc kubenswrapper[5016]: I1011 08:08:36.621831 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8048adae-f929-4bd1-9c7e-9c0c5172260f-ssh-key-openstack-edpm-ipam\") pod \"8048adae-f929-4bd1-9c7e-9c0c5172260f\" (UID: \"8048adae-f929-4bd1-9c7e-9c0c5172260f\") " Oct 11 08:08:36 crc kubenswrapper[5016]: I1011 08:08:36.621869 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzr9n\" (UniqueName: \"kubernetes.io/projected/8048adae-f929-4bd1-9c7e-9c0c5172260f-kube-api-access-kzr9n\") pod \"8048adae-f929-4bd1-9c7e-9c0c5172260f\" (UID: \"8048adae-f929-4bd1-9c7e-9c0c5172260f\") " Oct 11 08:08:36 crc kubenswrapper[5016]: I1011 08:08:36.628410 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8048adae-f929-4bd1-9c7e-9c0c5172260f-kube-api-access-kzr9n" (OuterVolumeSpecName: "kube-api-access-kzr9n") pod "8048adae-f929-4bd1-9c7e-9c0c5172260f" (UID: "8048adae-f929-4bd1-9c7e-9c0c5172260f"). InnerVolumeSpecName "kube-api-access-kzr9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:08:36 crc kubenswrapper[5016]: I1011 08:08:36.653609 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8048adae-f929-4bd1-9c7e-9c0c5172260f-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "8048adae-f929-4bd1-9c7e-9c0c5172260f" (UID: "8048adae-f929-4bd1-9c7e-9c0c5172260f"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:08:36 crc kubenswrapper[5016]: I1011 08:08:36.655813 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8048adae-f929-4bd1-9c7e-9c0c5172260f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8048adae-f929-4bd1-9c7e-9c0c5172260f" (UID: "8048adae-f929-4bd1-9c7e-9c0c5172260f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:08:36 crc kubenswrapper[5016]: I1011 08:08:36.723978 5016 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8048adae-f929-4bd1-9c7e-9c0c5172260f-inventory-0\") on node \"crc\" DevicePath \"\"" Oct 11 08:08:36 crc kubenswrapper[5016]: I1011 08:08:36.724025 5016 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8048adae-f929-4bd1-9c7e-9c0c5172260f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 11 08:08:36 crc kubenswrapper[5016]: I1011 08:08:36.724036 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzr9n\" (UniqueName: \"kubernetes.io/projected/8048adae-f929-4bd1-9c7e-9c0c5172260f-kube-api-access-kzr9n\") on node \"crc\" DevicePath \"\"" Oct 11 08:08:36 crc kubenswrapper[5016]: I1011 08:08:36.970232 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-f4p99" event={"ID":"8048adae-f929-4bd1-9c7e-9c0c5172260f","Type":"ContainerDied","Data":"445010f5cdd8b4f14812bb14923473fac63b3965dcb7192e6df058d000c6695b"} Oct 11 08:08:36 crc kubenswrapper[5016]: I1011 08:08:36.970311 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="445010f5cdd8b4f14812bb14923473fac63b3965dcb7192e6df058d000c6695b" Oct 11 08:08:36 crc kubenswrapper[5016]: I1011 08:08:36.970325 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-f4p99" Oct 11 08:08:37 crc kubenswrapper[5016]: I1011 08:08:37.068947 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tprgp" Oct 11 08:08:37 crc kubenswrapper[5016]: I1011 08:08:37.077950 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm8k6"] Oct 11 08:08:37 crc kubenswrapper[5016]: E1011 08:08:37.078620 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8048adae-f929-4bd1-9c7e-9c0c5172260f" containerName="ssh-known-hosts-edpm-deployment" Oct 11 08:08:37 crc kubenswrapper[5016]: I1011 08:08:37.078673 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="8048adae-f929-4bd1-9c7e-9c0c5172260f" containerName="ssh-known-hosts-edpm-deployment" Oct 11 08:08:37 crc kubenswrapper[5016]: I1011 08:08:37.078964 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="8048adae-f929-4bd1-9c7e-9c0c5172260f" containerName="ssh-known-hosts-edpm-deployment" Oct 11 08:08:37 crc kubenswrapper[5016]: I1011 08:08:37.079984 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm8k6" Oct 11 08:08:37 crc kubenswrapper[5016]: I1011 08:08:37.084448 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 08:08:37 crc kubenswrapper[5016]: I1011 08:08:37.084542 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l8l9k" Oct 11 08:08:37 crc kubenswrapper[5016]: I1011 08:08:37.084767 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 11 08:08:37 crc kubenswrapper[5016]: I1011 08:08:37.087974 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 08:08:37 crc kubenswrapper[5016]: I1011 08:08:37.094954 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm8k6"] Oct 11 08:08:37 crc kubenswrapper[5016]: I1011 08:08:37.151988 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tprgp"] Oct 11 08:08:37 crc kubenswrapper[5016]: I1011 08:08:37.233420 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7spgn\" (UniqueName: \"kubernetes.io/projected/f2f282f8-2623-456d-8ff2-326d606ce468-kube-api-access-7spgn\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qm8k6\" (UID: \"f2f282f8-2623-456d-8ff2-326d606ce468\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm8k6" Oct 11 08:08:37 crc kubenswrapper[5016]: I1011 08:08:37.233542 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2f282f8-2623-456d-8ff2-326d606ce468-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qm8k6\" (UID: \"f2f282f8-2623-456d-8ff2-326d606ce468\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm8k6" Oct 11 08:08:37 crc kubenswrapper[5016]: I1011 08:08:37.233595 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f2f282f8-2623-456d-8ff2-326d606ce468-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qm8k6\" (UID: \"f2f282f8-2623-456d-8ff2-326d606ce468\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm8k6" Oct 11 08:08:37 crc kubenswrapper[5016]: I1011 08:08:37.334976 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7spgn\" (UniqueName: \"kubernetes.io/projected/f2f282f8-2623-456d-8ff2-326d606ce468-kube-api-access-7spgn\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qm8k6\" (UID: \"f2f282f8-2623-456d-8ff2-326d606ce468\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm8k6" Oct 11 08:08:37 crc kubenswrapper[5016]: I1011 08:08:37.335065 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2f282f8-2623-456d-8ff2-326d606ce468-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qm8k6\" (UID: \"f2f282f8-2623-456d-8ff2-326d606ce468\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm8k6" Oct 11 08:08:37 crc kubenswrapper[5016]: I1011 08:08:37.335100 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/f2f282f8-2623-456d-8ff2-326d606ce468-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qm8k6\" (UID: \"f2f282f8-2623-456d-8ff2-326d606ce468\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm8k6" Oct 11 08:08:37 crc kubenswrapper[5016]: I1011 08:08:37.341309 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2f282f8-2623-456d-8ff2-326d606ce468-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qm8k6\" (UID: \"f2f282f8-2623-456d-8ff2-326d606ce468\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm8k6" Oct 11 08:08:37 crc kubenswrapper[5016]: I1011 08:08:37.342366 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f2f282f8-2623-456d-8ff2-326d606ce468-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qm8k6\" (UID: \"f2f282f8-2623-456d-8ff2-326d606ce468\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm8k6" Oct 11 08:08:37 crc kubenswrapper[5016]: I1011 08:08:37.354196 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7spgn\" (UniqueName: \"kubernetes.io/projected/f2f282f8-2623-456d-8ff2-326d606ce468-kube-api-access-7spgn\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qm8k6\" (UID: \"f2f282f8-2623-456d-8ff2-326d606ce468\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm8k6" Oct 11 08:08:37 crc kubenswrapper[5016]: I1011 08:08:37.409014 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm8k6" Oct 11 08:08:37 crc kubenswrapper[5016]: I1011 08:08:37.448544 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9b92t" Oct 11 08:08:37 crc kubenswrapper[5016]: I1011 08:08:37.448585 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9b92t" Oct 11 08:08:37 crc kubenswrapper[5016]: I1011 08:08:37.507764 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9b92t" Oct 11 08:08:37 crc kubenswrapper[5016]: I1011 08:08:37.968089 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm8k6"] Oct 11 08:08:38 crc kubenswrapper[5016]: I1011 08:08:38.032167 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9b92t" Oct 11 08:08:38 crc kubenswrapper[5016]: I1011 08:08:38.895255 5016 scope.go:117] "RemoveContainer" containerID="f870a55d476bfe362ca525b987a4d3406cba4daa6a9c55382d5ec124e28cba7c" Oct 11 08:08:38 crc kubenswrapper[5016]: I1011 08:08:38.959263 5016 scope.go:117] "RemoveContainer" containerID="e2974999130b71d3392ad2c3365d847098bf0c0773bc18d93f74c30604011c95" Oct 11 08:08:39 crc kubenswrapper[5016]: I1011 08:08:39.008716 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm8k6" event={"ID":"f2f282f8-2623-456d-8ff2-326d606ce468","Type":"ContainerStarted","Data":"b9263ffa1926c7f528b8dda573f796fdc79e1891cf76b6e4ac79bf711f081bfa"} Oct 11 08:08:39 crc kubenswrapper[5016]: I1011 08:08:39.008769 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm8k6" 
event={"ID":"f2f282f8-2623-456d-8ff2-326d606ce468","Type":"ContainerStarted","Data":"e545d7d42a92dfea591dba453c08590219a366b164ed4a2124cd8e107b461f7e"} Oct 11 08:08:39 crc kubenswrapper[5016]: I1011 08:08:39.010755 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tprgp" podUID="eefa958a-f8f5-43da-917c-c7b5b368c383" containerName="registry-server" containerID="cri-o://e7749788d2d4225133a03c229c0c6ff8d6bb15df39ede3d1d4dc1c81279bbef1" gracePeriod=2 Oct 11 08:08:39 crc kubenswrapper[5016]: I1011 08:08:39.031684 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm8k6" podStartSLOduration=1.5223340300000001 podStartE2EDuration="2.03164368s" podCreationTimestamp="2025-10-11 08:08:37 +0000 UTC" firstStartedPulling="2025-10-11 08:08:37.978946139 +0000 UTC m=+1705.879402095" lastFinishedPulling="2025-10-11 08:08:38.488255759 +0000 UTC m=+1706.388711745" observedRunningTime="2025-10-11 08:08:39.03087975 +0000 UTC m=+1706.931335706" watchObservedRunningTime="2025-10-11 08:08:39.03164368 +0000 UTC m=+1706.932099626" Oct 11 08:08:39 crc kubenswrapper[5016]: I1011 08:08:39.053841 5016 scope.go:117] "RemoveContainer" containerID="6a21814cd7da11ca07cd8725db7a8cf3724300c3e85dc780ee8b38a645d4acce" Oct 11 08:08:39 crc kubenswrapper[5016]: I1011 08:08:39.091221 5016 scope.go:117] "RemoveContainer" containerID="9269ccc4c0fbaaff91a9bf30300f9335bf9b34e81ec30991cfd0af010a5dbab9" Oct 11 08:08:39 crc kubenswrapper[5016]: I1011 08:08:39.105990 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9b92t"] Oct 11 08:08:39 crc kubenswrapper[5016]: I1011 08:08:39.196119 5016 scope.go:117] "RemoveContainer" containerID="d0a157291a9bc74ce499300a2180049d7206032cee0349251e70730565d18892" Oct 11 08:08:39 crc kubenswrapper[5016]: I1011 08:08:39.447593 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tprgp" Oct 11 08:08:39 crc kubenswrapper[5016]: I1011 08:08:39.584444 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7pgd\" (UniqueName: \"kubernetes.io/projected/eefa958a-f8f5-43da-917c-c7b5b368c383-kube-api-access-d7pgd\") pod \"eefa958a-f8f5-43da-917c-c7b5b368c383\" (UID: \"eefa958a-f8f5-43da-917c-c7b5b368c383\") " Oct 11 08:08:39 crc kubenswrapper[5016]: I1011 08:08:39.584815 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eefa958a-f8f5-43da-917c-c7b5b368c383-utilities\") pod \"eefa958a-f8f5-43da-917c-c7b5b368c383\" (UID: \"eefa958a-f8f5-43da-917c-c7b5b368c383\") " Oct 11 08:08:39 crc kubenswrapper[5016]: I1011 08:08:39.584871 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eefa958a-f8f5-43da-917c-c7b5b368c383-catalog-content\") pod \"eefa958a-f8f5-43da-917c-c7b5b368c383\" (UID: \"eefa958a-f8f5-43da-917c-c7b5b368c383\") " Oct 11 08:08:39 crc kubenswrapper[5016]: I1011 08:08:39.585649 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eefa958a-f8f5-43da-917c-c7b5b368c383-utilities" (OuterVolumeSpecName: "utilities") pod "eefa958a-f8f5-43da-917c-c7b5b368c383" (UID: "eefa958a-f8f5-43da-917c-c7b5b368c383"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:08:39 crc kubenswrapper[5016]: I1011 08:08:39.590648 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eefa958a-f8f5-43da-917c-c7b5b368c383-kube-api-access-d7pgd" (OuterVolumeSpecName: "kube-api-access-d7pgd") pod "eefa958a-f8f5-43da-917c-c7b5b368c383" (UID: "eefa958a-f8f5-43da-917c-c7b5b368c383"). InnerVolumeSpecName "kube-api-access-d7pgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:08:39 crc kubenswrapper[5016]: I1011 08:08:39.686422 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7pgd\" (UniqueName: \"kubernetes.io/projected/eefa958a-f8f5-43da-917c-c7b5b368c383-kube-api-access-d7pgd\") on node \"crc\" DevicePath \"\"" Oct 11 08:08:39 crc kubenswrapper[5016]: I1011 08:08:39.686457 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eefa958a-f8f5-43da-917c-c7b5b368c383-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 08:08:40 crc kubenswrapper[5016]: I1011 08:08:40.045882 5016 generic.go:334] "Generic (PLEG): container finished" podID="eefa958a-f8f5-43da-917c-c7b5b368c383" containerID="e7749788d2d4225133a03c229c0c6ff8d6bb15df39ede3d1d4dc1c81279bbef1" exitCode=0 Oct 11 08:08:40 crc kubenswrapper[5016]: I1011 08:08:40.046878 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tprgp" event={"ID":"eefa958a-f8f5-43da-917c-c7b5b368c383","Type":"ContainerDied","Data":"e7749788d2d4225133a03c229c0c6ff8d6bb15df39ede3d1d4dc1c81279bbef1"} Oct 11 08:08:40 crc kubenswrapper[5016]: I1011 08:08:40.046952 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tprgp" event={"ID":"eefa958a-f8f5-43da-917c-c7b5b368c383","Type":"ContainerDied","Data":"c2f5b7ec63df75e621141c33976dd0f23d2c450da926dbc97714fcdcf6c4c979"} Oct 11 08:08:40 crc kubenswrapper[5016]: I1011 08:08:40.046979 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tprgp" Oct 11 08:08:40 crc kubenswrapper[5016]: I1011 08:08:40.047007 5016 scope.go:117] "RemoveContainer" containerID="e7749788d2d4225133a03c229c0c6ff8d6bb15df39ede3d1d4dc1c81279bbef1" Oct 11 08:08:40 crc kubenswrapper[5016]: I1011 08:08:40.047131 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9b92t" podUID="b16ab860-a532-4a5d-81e0-ff86b74eb951" containerName="registry-server" containerID="cri-o://158763ec1a6992bb2ab444375812ba3a85dad21871f832b51b70e54222d33204" gracePeriod=2 Oct 11 08:08:40 crc kubenswrapper[5016]: I1011 08:08:40.078608 5016 scope.go:117] "RemoveContainer" containerID="ea2426f547ea5fbfefc519023cc83d34f071ebf3b6f6e29230cbe88e5cb994e1" Oct 11 08:08:40 crc kubenswrapper[5016]: I1011 08:08:40.100783 5016 scope.go:117] "RemoveContainer" containerID="ec9a9db76a0b0e28d88d1c4da76658296278cc9bebbc133e806d2a21e19700b3" Oct 11 08:08:40 crc kubenswrapper[5016]: I1011 08:08:40.138476 5016 scope.go:117] "RemoveContainer" containerID="e7749788d2d4225133a03c229c0c6ff8d6bb15df39ede3d1d4dc1c81279bbef1" Oct 11 08:08:40 crc kubenswrapper[5016]: E1011 08:08:40.138973 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7749788d2d4225133a03c229c0c6ff8d6bb15df39ede3d1d4dc1c81279bbef1\": container with ID starting with e7749788d2d4225133a03c229c0c6ff8d6bb15df39ede3d1d4dc1c81279bbef1 not found: ID does not exist" containerID="e7749788d2d4225133a03c229c0c6ff8d6bb15df39ede3d1d4dc1c81279bbef1" Oct 11 08:08:40 crc kubenswrapper[5016]: I1011 08:08:40.139025 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7749788d2d4225133a03c229c0c6ff8d6bb15df39ede3d1d4dc1c81279bbef1"} err="failed to get container status \"e7749788d2d4225133a03c229c0c6ff8d6bb15df39ede3d1d4dc1c81279bbef1\": rpc error: code = NotFound desc = could not find container \"e7749788d2d4225133a03c229c0c6ff8d6bb15df39ede3d1d4dc1c81279bbef1\": container with ID starting with e7749788d2d4225133a03c229c0c6ff8d6bb15df39ede3d1d4dc1c81279bbef1 not found: ID does not exist" Oct 11 08:08:40 crc kubenswrapper[5016]: I1011 08:08:40.139050 5016 scope.go:117] "RemoveContainer" containerID="ea2426f547ea5fbfefc519023cc83d34f071ebf3b6f6e29230cbe88e5cb994e1" Oct 11 08:08:40 crc kubenswrapper[5016]: E1011 08:08:40.139628 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea2426f547ea5fbfefc519023cc83d34f071ebf3b6f6e29230cbe88e5cb994e1\": container with ID starting with ea2426f547ea5fbfefc519023cc83d34f071ebf3b6f6e29230cbe88e5cb994e1 not found: ID does not exist" containerID="ea2426f547ea5fbfefc519023cc83d34f071ebf3b6f6e29230cbe88e5cb994e1" Oct 11 08:08:40 crc kubenswrapper[5016]: I1011 08:08:40.139766 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea2426f547ea5fbfefc519023cc83d34f071ebf3b6f6e29230cbe88e5cb994e1"} err="failed to get container status \"ea2426f547ea5fbfefc519023cc83d34f071ebf3b6f6e29230cbe88e5cb994e1\": rpc error: code = NotFound desc = could not find container \"ea2426f547ea5fbfefc519023cc83d34f071ebf3b6f6e29230cbe88e5cb994e1\": container with ID starting with ea2426f547ea5fbfefc519023cc83d34f071ebf3b6f6e29230cbe88e5cb994e1 not found: ID does not exist" Oct 11 08:08:40 crc kubenswrapper[5016]: I1011 08:08:40.139812 5016 scope.go:117] 
"RemoveContainer" containerID="ec9a9db76a0b0e28d88d1c4da76658296278cc9bebbc133e806d2a21e19700b3" Oct 11 08:08:40 crc kubenswrapper[5016]: E1011 08:08:40.140219 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec9a9db76a0b0e28d88d1c4da76658296278cc9bebbc133e806d2a21e19700b3\": container with ID starting with ec9a9db76a0b0e28d88d1c4da76658296278cc9bebbc133e806d2a21e19700b3 not found: ID does not exist" containerID="ec9a9db76a0b0e28d88d1c4da76658296278cc9bebbc133e806d2a21e19700b3" Oct 11 08:08:40 crc kubenswrapper[5016]: I1011 08:08:40.140259 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec9a9db76a0b0e28d88d1c4da76658296278cc9bebbc133e806d2a21e19700b3"} err="failed to get container status \"ec9a9db76a0b0e28d88d1c4da76658296278cc9bebbc133e806d2a21e19700b3\": rpc error: code = NotFound desc = could not find container \"ec9a9db76a0b0e28d88d1c4da76658296278cc9bebbc133e806d2a21e19700b3\": container with ID starting with ec9a9db76a0b0e28d88d1c4da76658296278cc9bebbc133e806d2a21e19700b3 not found: ID does not exist" Oct 11 08:08:40 crc kubenswrapper[5016]: I1011 08:08:40.406609 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eefa958a-f8f5-43da-917c-c7b5b368c383-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eefa958a-f8f5-43da-917c-c7b5b368c383" (UID: "eefa958a-f8f5-43da-917c-c7b5b368c383"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:08:40 crc kubenswrapper[5016]: I1011 08:08:40.505337 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eefa958a-f8f5-43da-917c-c7b5b368c383-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 08:08:40 crc kubenswrapper[5016]: I1011 08:08:40.686062 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tprgp"] Oct 11 08:08:40 crc kubenswrapper[5016]: I1011 08:08:40.696014 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tprgp"] Oct 11 08:08:41 crc kubenswrapper[5016]: I1011 08:08:41.060395 5016 generic.go:334] "Generic (PLEG): container finished" podID="b16ab860-a532-4a5d-81e0-ff86b74eb951" containerID="158763ec1a6992bb2ab444375812ba3a85dad21871f832b51b70e54222d33204" exitCode=0 Oct 11 08:08:41 crc kubenswrapper[5016]: I1011 08:08:41.060498 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b92t" event={"ID":"b16ab860-a532-4a5d-81e0-ff86b74eb951","Type":"ContainerDied","Data":"158763ec1a6992bb2ab444375812ba3a85dad21871f832b51b70e54222d33204"} Oct 11 08:08:41 crc kubenswrapper[5016]: I1011 08:08:41.060559 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b92t" event={"ID":"b16ab860-a532-4a5d-81e0-ff86b74eb951","Type":"ContainerDied","Data":"a06de1898e1cd260208b721fba8bd48aca7ea996f10073dc7d7575686e5e9a65"} Oct 11 08:08:41 crc kubenswrapper[5016]: I1011 08:08:41.060586 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a06de1898e1cd260208b721fba8bd48aca7ea996f10073dc7d7575686e5e9a65" Oct 11 08:08:41 crc kubenswrapper[5016]: I1011 08:08:41.142433 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9b92t" Oct 11 08:08:41 crc kubenswrapper[5016]: I1011 08:08:41.147748 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eefa958a-f8f5-43da-917c-c7b5b368c383" path="/var/lib/kubelet/pods/eefa958a-f8f5-43da-917c-c7b5b368c383/volumes" Oct 11 08:08:41 crc kubenswrapper[5016]: I1011 08:08:41.220895 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b16ab860-a532-4a5d-81e0-ff86b74eb951-catalog-content\") pod \"b16ab860-a532-4a5d-81e0-ff86b74eb951\" (UID: \"b16ab860-a532-4a5d-81e0-ff86b74eb951\") " Oct 11 08:08:41 crc kubenswrapper[5016]: I1011 08:08:41.220966 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5vcq\" (UniqueName: \"kubernetes.io/projected/b16ab860-a532-4a5d-81e0-ff86b74eb951-kube-api-access-x5vcq\") pod \"b16ab860-a532-4a5d-81e0-ff86b74eb951\" (UID: \"b16ab860-a532-4a5d-81e0-ff86b74eb951\") " Oct 11 08:08:41 crc kubenswrapper[5016]: I1011 08:08:41.221129 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b16ab860-a532-4a5d-81e0-ff86b74eb951-utilities\") pod \"b16ab860-a532-4a5d-81e0-ff86b74eb951\" (UID: \"b16ab860-a532-4a5d-81e0-ff86b74eb951\") " Oct 11 08:08:41 crc kubenswrapper[5016]: I1011 08:08:41.223706 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b16ab860-a532-4a5d-81e0-ff86b74eb951-utilities" (OuterVolumeSpecName: "utilities") pod "b16ab860-a532-4a5d-81e0-ff86b74eb951" (UID: "b16ab860-a532-4a5d-81e0-ff86b74eb951"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:08:41 crc kubenswrapper[5016]: I1011 08:08:41.227449 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b16ab860-a532-4a5d-81e0-ff86b74eb951-kube-api-access-x5vcq" (OuterVolumeSpecName: "kube-api-access-x5vcq") pod "b16ab860-a532-4a5d-81e0-ff86b74eb951" (UID: "b16ab860-a532-4a5d-81e0-ff86b74eb951"). InnerVolumeSpecName "kube-api-access-x5vcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:08:41 crc kubenswrapper[5016]: I1011 08:08:41.235105 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b16ab860-a532-4a5d-81e0-ff86b74eb951-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b16ab860-a532-4a5d-81e0-ff86b74eb951" (UID: "b16ab860-a532-4a5d-81e0-ff86b74eb951"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:08:41 crc kubenswrapper[5016]: I1011 08:08:41.323900 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b16ab860-a532-4a5d-81e0-ff86b74eb951-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 08:08:41 crc kubenswrapper[5016]: I1011 08:08:41.323931 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b16ab860-a532-4a5d-81e0-ff86b74eb951-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 08:08:41 crc kubenswrapper[5016]: I1011 08:08:41.323942 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5vcq\" (UniqueName: \"kubernetes.io/projected/b16ab860-a532-4a5d-81e0-ff86b74eb951-kube-api-access-x5vcq\") on node \"crc\" DevicePath \"\"" Oct 11 08:08:42 crc kubenswrapper[5016]: I1011 08:08:42.067954 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9b92t" Oct 11 08:08:42 crc kubenswrapper[5016]: I1011 08:08:42.105863 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9b92t"] Oct 11 08:08:42 crc kubenswrapper[5016]: I1011 08:08:42.112695 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9b92t"] Oct 11 08:08:43 crc kubenswrapper[5016]: I1011 08:08:43.154288 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b16ab860-a532-4a5d-81e0-ff86b74eb951" path="/var/lib/kubelet/pods/b16ab860-a532-4a5d-81e0-ff86b74eb951/volumes" Oct 11 08:08:45 crc kubenswrapper[5016]: I1011 08:08:45.133607 5016 scope.go:117] "RemoveContainer" containerID="10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9" Oct 11 08:08:45 crc kubenswrapper[5016]: E1011 08:08:45.134174 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:08:49 crc kubenswrapper[5016]: I1011 08:08:49.149390 5016 generic.go:334] "Generic (PLEG): container finished" podID="f2f282f8-2623-456d-8ff2-326d606ce468" containerID="b9263ffa1926c7f528b8dda573f796fdc79e1891cf76b6e4ac79bf711f081bfa" exitCode=0 Oct 11 08:08:49 crc kubenswrapper[5016]: I1011 08:08:49.151553 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm8k6" event={"ID":"f2f282f8-2623-456d-8ff2-326d606ce468","Type":"ContainerDied","Data":"b9263ffa1926c7f528b8dda573f796fdc79e1891cf76b6e4ac79bf711f081bfa"} Oct 11 08:08:50 crc kubenswrapper[5016]: I1011 08:08:50.522381 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm8k6" Oct 11 08:08:50 crc kubenswrapper[5016]: I1011 08:08:50.595846 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2f282f8-2623-456d-8ff2-326d606ce468-inventory\") pod \"f2f282f8-2623-456d-8ff2-326d606ce468\" (UID: \"f2f282f8-2623-456d-8ff2-326d606ce468\") " Oct 11 08:08:50 crc kubenswrapper[5016]: I1011 08:08:50.595982 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7spgn\" (UniqueName: \"kubernetes.io/projected/f2f282f8-2623-456d-8ff2-326d606ce468-kube-api-access-7spgn\") pod \"f2f282f8-2623-456d-8ff2-326d606ce468\" (UID: \"f2f282f8-2623-456d-8ff2-326d606ce468\") " Oct 11 08:08:50 crc kubenswrapper[5016]: I1011 08:08:50.596092 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f2f282f8-2623-456d-8ff2-326d606ce468-ssh-key\") pod \"f2f282f8-2623-456d-8ff2-326d606ce468\" (UID: \"f2f282f8-2623-456d-8ff2-326d606ce468\") " Oct 11 08:08:50 crc kubenswrapper[5016]: I1011 08:08:50.601958 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2f282f8-2623-456d-8ff2-326d606ce468-kube-api-access-7spgn" (OuterVolumeSpecName: "kube-api-access-7spgn") pod "f2f282f8-2623-456d-8ff2-326d606ce468" (UID: "f2f282f8-2623-456d-8ff2-326d606ce468"). InnerVolumeSpecName "kube-api-access-7spgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:08:50 crc kubenswrapper[5016]: I1011 08:08:50.622243 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2f282f8-2623-456d-8ff2-326d606ce468-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f2f282f8-2623-456d-8ff2-326d606ce468" (UID: "f2f282f8-2623-456d-8ff2-326d606ce468"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:08:50 crc kubenswrapper[5016]: I1011 08:08:50.626748 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2f282f8-2623-456d-8ff2-326d606ce468-inventory" (OuterVolumeSpecName: "inventory") pod "f2f282f8-2623-456d-8ff2-326d606ce468" (UID: "f2f282f8-2623-456d-8ff2-326d606ce468"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:08:50 crc kubenswrapper[5016]: I1011 08:08:50.697884 5016 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2f282f8-2623-456d-8ff2-326d606ce468-inventory\") on node \"crc\" DevicePath \"\"" Oct 11 08:08:50 crc kubenswrapper[5016]: I1011 08:08:50.697916 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7spgn\" (UniqueName: \"kubernetes.io/projected/f2f282f8-2623-456d-8ff2-326d606ce468-kube-api-access-7spgn\") on node \"crc\" DevicePath \"\"" Oct 11 08:08:50 crc kubenswrapper[5016]: I1011 08:08:50.697929 5016 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f2f282f8-2623-456d-8ff2-326d606ce468-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.164371 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm8k6" event={"ID":"f2f282f8-2623-456d-8ff2-326d606ce468","Type":"ContainerDied","Data":"e545d7d42a92dfea591dba453c08590219a366b164ed4a2124cd8e107b461f7e"} Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.164416 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e545d7d42a92dfea591dba453c08590219a366b164ed4a2124cd8e107b461f7e" Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.164425 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm8k6" Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.232288 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd"] Oct 11 08:08:51 crc kubenswrapper[5016]: E1011 08:08:51.232724 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eefa958a-f8f5-43da-917c-c7b5b368c383" containerName="registry-server" Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.232742 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="eefa958a-f8f5-43da-917c-c7b5b368c383" containerName="registry-server" Oct 11 08:08:51 crc kubenswrapper[5016]: E1011 08:08:51.232761 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16ab860-a532-4a5d-81e0-ff86b74eb951" containerName="extract-utilities" Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.232769 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16ab860-a532-4a5d-81e0-ff86b74eb951" containerName="extract-utilities" Oct 11 08:08:51 crc kubenswrapper[5016]: E1011 08:08:51.232782 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16ab860-a532-4a5d-81e0-ff86b74eb951" containerName="registry-server" Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.232789 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16ab860-a532-4a5d-81e0-ff86b74eb951" containerName="registry-server" Oct 11 08:08:51 crc kubenswrapper[5016]: E1011 08:08:51.232801 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2f282f8-2623-456d-8ff2-326d606ce468" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.232808 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2f282f8-2623-456d-8ff2-326d606ce468" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 11 08:08:51 crc kubenswrapper[5016]: E1011 08:08:51.232817 5016 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="eefa958a-f8f5-43da-917c-c7b5b368c383" containerName="extract-utilities" Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.232823 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="eefa958a-f8f5-43da-917c-c7b5b368c383" containerName="extract-utilities" Oct 11 08:08:51 crc kubenswrapper[5016]: E1011 08:08:51.232839 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b16ab860-a532-4a5d-81e0-ff86b74eb951" containerName="extract-content" Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.232846 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16ab860-a532-4a5d-81e0-ff86b74eb951" containerName="extract-content" Oct 11 08:08:51 crc kubenswrapper[5016]: E1011 08:08:51.232856 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eefa958a-f8f5-43da-917c-c7b5b368c383" containerName="extract-content" Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.232861 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="eefa958a-f8f5-43da-917c-c7b5b368c383" containerName="extract-content" Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.233086 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="b16ab860-a532-4a5d-81e0-ff86b74eb951" containerName="registry-server" Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.233097 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="eefa958a-f8f5-43da-917c-c7b5b368c383" containerName="registry-server" Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.233107 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2f282f8-2623-456d-8ff2-326d606ce468" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.233982 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd" Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.236423 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.239930 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.240000 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.240162 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l8l9k" Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.256314 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd"] Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.310327 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/78258942-bb68-433a-9cda-cfb2f293a9a3-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd\" (UID: \"78258942-bb68-433a-9cda-cfb2f293a9a3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd" Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.310467 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psn26\" (UniqueName: \"kubernetes.io/projected/78258942-bb68-433a-9cda-cfb2f293a9a3-kube-api-access-psn26\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd\" (UID: \"78258942-bb68-433a-9cda-cfb2f293a9a3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd" Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.310541 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78258942-bb68-433a-9cda-cfb2f293a9a3-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd\" (UID: \"78258942-bb68-433a-9cda-cfb2f293a9a3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd" Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.412124 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/78258942-bb68-433a-9cda-cfb2f293a9a3-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd\" (UID: \"78258942-bb68-433a-9cda-cfb2f293a9a3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd" Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.412438 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psn26\" (UniqueName: \"kubernetes.io/projected/78258942-bb68-433a-9cda-cfb2f293a9a3-kube-api-access-psn26\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd\" (UID: \"78258942-bb68-433a-9cda-cfb2f293a9a3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd" Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.413484 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78258942-bb68-433a-9cda-cfb2f293a9a3-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd\" (UID: 
\"78258942-bb68-433a-9cda-cfb2f293a9a3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd" Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.416305 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/78258942-bb68-433a-9cda-cfb2f293a9a3-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd\" (UID: \"78258942-bb68-433a-9cda-cfb2f293a9a3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd" Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.425230 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78258942-bb68-433a-9cda-cfb2f293a9a3-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd\" (UID: \"78258942-bb68-433a-9cda-cfb2f293a9a3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd" Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.440978 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psn26\" (UniqueName: \"kubernetes.io/projected/78258942-bb68-433a-9cda-cfb2f293a9a3-kube-api-access-psn26\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd\" (UID: \"78258942-bb68-433a-9cda-cfb2f293a9a3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd" Oct 11 08:08:51 crc kubenswrapper[5016]: I1011 08:08:51.555813 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd" Oct 11 08:08:52 crc kubenswrapper[5016]: I1011 08:08:52.127058 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd"] Oct 11 08:08:52 crc kubenswrapper[5016]: I1011 08:08:52.178521 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd" event={"ID":"78258942-bb68-433a-9cda-cfb2f293a9a3","Type":"ContainerStarted","Data":"d2c2a72332a462728520173defc68b5cac639da74961494bc6d2bcd9c9d171c3"} Oct 11 08:08:53 crc kubenswrapper[5016]: I1011 08:08:53.190820 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd" event={"ID":"78258942-bb68-433a-9cda-cfb2f293a9a3","Type":"ContainerStarted","Data":"49acf16ebe07ce43cd06d60190763591dd2f413ebc52bf1f1cce44684493a7b0"} Oct 11 08:08:53 crc kubenswrapper[5016]: I1011 08:08:53.205376 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd" podStartSLOduration=1.52950558 podStartE2EDuration="2.20534738s" podCreationTimestamp="2025-10-11 08:08:51 +0000 UTC" firstStartedPulling="2025-10-11 08:08:52.131970357 +0000 UTC m=+1720.032426313" lastFinishedPulling="2025-10-11 08:08:52.807812167 +0000 UTC m=+1720.708268113" observedRunningTime="2025-10-11 08:08:53.204643072 +0000 UTC m=+1721.105099038" watchObservedRunningTime="2025-10-11 08:08:53.20534738 +0000 UTC m=+1721.105803316" Oct 11 08:08:56 crc kubenswrapper[5016]: I1011 08:08:56.178603 5016 scope.go:117] "RemoveContainer" containerID="10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9" Oct 11 08:08:56 crc kubenswrapper[5016]: E1011 08:08:56.179424 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Oct 11 08:08:56 crc kubenswrapper[5016]: I1011 08:08:56.178603 5016 scope.go:117] "RemoveContainer" containerID="10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9"
Oct 11 08:08:56 crc kubenswrapper[5016]: E1011 08:08:56.179424 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:09:03 crc kubenswrapper[5016]: I1011 08:09:03.327066 5016 generic.go:334] "Generic (PLEG): container finished" podID="78258942-bb68-433a-9cda-cfb2f293a9a3" containerID="49acf16ebe07ce43cd06d60190763591dd2f413ebc52bf1f1cce44684493a7b0" exitCode=0
Oct 11 08:09:03 crc kubenswrapper[5016]: I1011 08:09:03.327161 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd" event={"ID":"78258942-bb68-433a-9cda-cfb2f293a9a3","Type":"ContainerDied","Data":"49acf16ebe07ce43cd06d60190763591dd2f413ebc52bf1f1cce44684493a7b0"}
Oct 11 08:09:04 crc kubenswrapper[5016]: I1011 08:09:04.835348 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd"
Oct 11 08:09:04 crc kubenswrapper[5016]: I1011 08:09:04.880568 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psn26\" (UniqueName: \"kubernetes.io/projected/78258942-bb68-433a-9cda-cfb2f293a9a3-kube-api-access-psn26\") pod \"78258942-bb68-433a-9cda-cfb2f293a9a3\" (UID: \"78258942-bb68-433a-9cda-cfb2f293a9a3\") "
Oct 11 08:09:04 crc kubenswrapper[5016]: I1011 08:09:04.880899 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78258942-bb68-433a-9cda-cfb2f293a9a3-inventory\") pod \"78258942-bb68-433a-9cda-cfb2f293a9a3\" (UID: \"78258942-bb68-433a-9cda-cfb2f293a9a3\") "
Oct 11 08:09:04 crc kubenswrapper[5016]: I1011 08:09:04.880943 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/78258942-bb68-433a-9cda-cfb2f293a9a3-ssh-key\") pod \"78258942-bb68-433a-9cda-cfb2f293a9a3\" (UID: \"78258942-bb68-433a-9cda-cfb2f293a9a3\") "
Oct 11 08:09:04 crc kubenswrapper[5016]: I1011 08:09:04.887042 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78258942-bb68-433a-9cda-cfb2f293a9a3-kube-api-access-psn26" (OuterVolumeSpecName: "kube-api-access-psn26") pod "78258942-bb68-433a-9cda-cfb2f293a9a3" (UID: "78258942-bb68-433a-9cda-cfb2f293a9a3"). InnerVolumeSpecName "kube-api-access-psn26". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 08:09:04 crc kubenswrapper[5016]: I1011 08:09:04.908675 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78258942-bb68-433a-9cda-cfb2f293a9a3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "78258942-bb68-433a-9cda-cfb2f293a9a3" (UID: "78258942-bb68-433a-9cda-cfb2f293a9a3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 08:09:04 crc kubenswrapper[5016]: I1011 08:09:04.922331 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78258942-bb68-433a-9cda-cfb2f293a9a3-inventory" (OuterVolumeSpecName: "inventory") pod "78258942-bb68-433a-9cda-cfb2f293a9a3" (UID: "78258942-bb68-433a-9cda-cfb2f293a9a3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 08:09:04 crc kubenswrapper[5016]: I1011 08:09:04.983080 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psn26\" (UniqueName: \"kubernetes.io/projected/78258942-bb68-433a-9cda-cfb2f293a9a3-kube-api-access-psn26\") on node \"crc\" DevicePath \"\""
Oct 11 08:09:04 crc kubenswrapper[5016]: I1011 08:09:04.983118 5016 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/78258942-bb68-433a-9cda-cfb2f293a9a3-inventory\") on node \"crc\" DevicePath \"\""
Oct 11 08:09:04 crc kubenswrapper[5016]: I1011 08:09:04.983129 5016 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/78258942-bb68-433a-9cda-cfb2f293a9a3-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 11 08:09:05 crc kubenswrapper[5016]: I1011 08:09:05.363008 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd" event={"ID":"78258942-bb68-433a-9cda-cfb2f293a9a3","Type":"ContainerDied","Data":"d2c2a72332a462728520173defc68b5cac639da74961494bc6d2bcd9c9d171c3"}
Oct 11 08:09:05 crc kubenswrapper[5016]: I1011 08:09:05.363067 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2c2a72332a462728520173defc68b5cac639da74961494bc6d2bcd9c9d171c3"
Oct 11 08:09:05 crc kubenswrapper[5016]: I1011 08:09:05.363174 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd"
Oct 11 08:09:07 crc kubenswrapper[5016]: I1011 08:09:07.047369 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-tgcng"]
Oct 11 08:09:07 crc kubenswrapper[5016]: I1011 08:09:07.057476 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-tgcng"]
Oct 11 08:09:07 crc kubenswrapper[5016]: I1011 08:09:07.133894 5016 scope.go:117] "RemoveContainer" containerID="10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9"
Oct 11 08:09:07 crc kubenswrapper[5016]: E1011 08:09:07.134220 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:09:07 crc kubenswrapper[5016]: I1011 08:09:07.151611 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ee09e0c-61a1-446f-b2d8-d74cd60e3152" path="/var/lib/kubelet/pods/2ee09e0c-61a1-446f-b2d8-d74cd60e3152/volumes"
Oct 11 08:09:20 crc kubenswrapper[5016]: I1011 08:09:20.134286 5016 scope.go:117] "RemoveContainer" containerID="10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9"
Oct 11 08:09:20 crc kubenswrapper[5016]: E1011 08:09:20.135339 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:09:34 crc kubenswrapper[5016]: I1011 08:09:34.133098 5016 scope.go:117] "RemoveContainer" containerID="10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9"
Oct 11 08:09:34 crc kubenswrapper[5016]: E1011 08:09:34.134099 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:09:39 crc kubenswrapper[5016]: I1011 08:09:39.413529 5016 scope.go:117] "RemoveContainer" containerID="53bd2136c9cb244abc20f388e52414168c8c2b1afe6e5cc4ae9ee7e376b011cf"
Oct 11 08:09:47 crc kubenswrapper[5016]: I1011 08:09:47.134599 5016 scope.go:117] "RemoveContainer" containerID="10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9"
Oct 11 08:09:47 crc kubenswrapper[5016]: I1011 08:09:47.776924 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"59069795b31b315c02b6409215974f97ef50f882460c0e4d8076522e3cbf39b6"}
Oct 11 08:12:07 crc kubenswrapper[5016]: I1011 08:12:07.121833 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 11 08:12:07 crc kubenswrapper[5016]: I1011 08:12:07.123287 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 11 08:12:37 crc kubenswrapper[5016]: I1011 08:12:37.122149 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 11 08:12:37 crc kubenswrapper[5016]: I1011 08:12:37.122698 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 11 08:13:07 crc kubenswrapper[5016]: I1011 08:13:07.122779 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 11 08:13:07 crc kubenswrapper[5016]: I1011 08:13:07.123427 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 11 08:13:07 crc kubenswrapper[5016]: I1011 08:13:07.123494 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-49bvc"
Oct 11 08:13:07 crc kubenswrapper[5016]: I1011 08:13:07.124562 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"59069795b31b315c02b6409215974f97ef50f882460c0e4d8076522e3cbf39b6"} pod="openshift-machine-config-operator/machine-config-daemon-49bvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 11 08:13:07 crc kubenswrapper[5016]: I1011 08:13:07.124686 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" containerID="cri-o://59069795b31b315c02b6409215974f97ef50f882460c0e4d8076522e3cbf39b6" gracePeriod=600
Oct 11 08:13:07 crc kubenswrapper[5016]: I1011 08:13:07.735521 5016 generic.go:334] "Generic (PLEG): container finished" podID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerID="59069795b31b315c02b6409215974f97ef50f882460c0e4d8076522e3cbf39b6" exitCode=0
Oct 11 08:13:07 crc kubenswrapper[5016]: I1011 08:13:07.735625 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerDied","Data":"59069795b31b315c02b6409215974f97ef50f882460c0e4d8076522e3cbf39b6"}
Oct 11 08:13:07 crc kubenswrapper[5016]: I1011 08:13:07.735763 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65"}
Oct 11 08:13:07 crc kubenswrapper[5016]: I1011 08:13:07.735824 5016 scope.go:117] "RemoveContainer" containerID="10ac10fa34cb615e61edb72dcfb138683f07593f5ff199dbe0731c102689d7a9"
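The repeated "back-off 5m0s restarting failed container" errors above (08:08:45 through 08:09:34) are kubelet's CrashLoopBackOff: each failed restart roughly doubles the wait before the next attempt, up to a cap, and "5m0s" is that cap. A sketch of the delay schedule, assuming the upstream defaults of a 10s initial period and a 300s ceiling (restated from memory, not read from this cluster's config):

    # Sketch: CrashLoopBackOff-style delay schedule (assumed defaults: 10s base, 300s cap).
    from itertools import islice

    def crashloop_delays(initial=10, cap=300):
        delay = initial
        while True:
            yield min(delay, cap)
            delay *= 2

    print(list(islice(crashloop_delays(), 8)))  # [10, 20, 40, 80, 160, 300, 300, 300]

The backoff resets once a container stays up long enough, which matches what follows: the container restarts cleanly at 08:09:47, and when its liveness probe later fails three times in a row (08:12:07, 08:12:37, 08:13:07, i.e. a 30s period with a failure threshold of 3, as read off the timestamps), kubelet kills and restarts it immediately rather than re-entering the backoff.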
Oct 11 08:14:21 crc kubenswrapper[5016]: I1011 08:14:21.391771 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq"]
Oct 11 08:14:21 crc kubenswrapper[5016]: I1011 08:14:21.419043 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j"]
Oct 11 08:14:21 crc kubenswrapper[5016]: I1011 08:14:21.426721 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr"]
Oct 11 08:14:21 crc kubenswrapper[5016]: I1011 08:14:21.438154 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq"]
Oct 11 08:14:21 crc kubenswrapper[5016]: I1011 08:14:21.447774 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm8k6"]
Oct 11 08:14:21 crc kubenswrapper[5016]: I1011 08:14:21.454955 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj"]
Oct 11 08:14:21 crc kubenswrapper[5016]: I1011 08:14:21.461471 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-stzhz"]
Oct 11 08:14:21 crc kubenswrapper[5016]: I1011 08:14:21.466307 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wmd6j"]
Oct 11 08:14:21 crc kubenswrapper[5016]: I1011 08:14:21.471696 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mnjvr"]
Oct 11 08:14:21 crc kubenswrapper[5016]: I1011 08:14:21.477039 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rqlpq"]
Oct 11 08:14:21 crc kubenswrapper[5016]: I1011 08:14:21.482124 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd"]
Oct 11 08:14:21 crc kubenswrapper[5016]: I1011 08:14:21.487062 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qm8k6"]
Oct 11 08:14:21 crc kubenswrapper[5016]: I1011 08:14:21.492072 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-pmzhj"]
Oct 11 08:14:21 crc kubenswrapper[5016]: I1011 08:14:21.497274 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-f4p99"]
Oct 11 08:14:21 crc kubenswrapper[5016]: I1011 08:14:21.502251 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-r9dlq"]
Oct 11 08:14:21 crc kubenswrapper[5016]: I1011 08:14:21.507348 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2bt88"]
Oct 11 08:14:21 crc kubenswrapper[5016]: I1011 08:14:21.512687 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r64rd"]
Oct 11 08:14:21 crc kubenswrapper[5016]: I1011 08:14:21.521572 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx"]
Oct 11 08:14:21 crc kubenswrapper[5016]: I1011 08:14:21.528490 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-stzhz"]
Oct 11 08:14:21 crc kubenswrapper[5016]: I1011 08:14:21.535190 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2bt88"]
Oct 11 08:14:21 crc kubenswrapper[5016]: I1011 08:14:21.542773 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-f4p99"]
Oct 11 08:14:21 crc kubenswrapper[5016]: I1011 08:14:21.550589 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5r4mx"]
Oct 11 08:14:23 crc kubenswrapper[5016]: I1011 08:14:23.155126 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09f63714-b15e-4c15-be93-c28413f234ff" path="/var/lib/kubelet/pods/09f63714-b15e-4c15-be93-c28413f234ff/volumes"
Oct 11 08:14:23 crc kubenswrapper[5016]: I1011 08:14:23.156805 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13561672-ecec-49ba-8618-8f7a3fcddb7e" path="/var/lib/kubelet/pods/13561672-ecec-49ba-8618-8f7a3fcddb7e/volumes"
Oct 11 08:14:23 crc kubenswrapper[5016]: I1011 08:14:23.157892 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1af6861b-140d-4431-98e7-c47b7d4c9a3d" path="/var/lib/kubelet/pods/1af6861b-140d-4431-98e7-c47b7d4c9a3d/volumes"
Oct 11 08:14:23 crc kubenswrapper[5016]: I1011 08:14:23.159051 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ed8ab18-67d4-43ad-8722-2add05f17fa6" path="/var/lib/kubelet/pods/3ed8ab18-67d4-43ad-8722-2add05f17fa6/volumes"
Oct 11 08:14:23 crc kubenswrapper[5016]: I1011 08:14:23.161753 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78258942-bb68-433a-9cda-cfb2f293a9a3" path="/var/lib/kubelet/pods/78258942-bb68-433a-9cda-cfb2f293a9a3/volumes"
Oct 11 08:14:23 crc kubenswrapper[5016]: I1011 08:14:23.163062 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7855134b-9e53-43a5-ac30-b63db80d9231" path="/var/lib/kubelet/pods/7855134b-9e53-43a5-ac30-b63db80d9231/volumes"
Oct 11 08:14:23 crc kubenswrapper[5016]: I1011 08:14:23.164213 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8048adae-f929-4bd1-9c7e-9c0c5172260f" path="/var/lib/kubelet/pods/8048adae-f929-4bd1-9c7e-9c0c5172260f/volumes"
Oct 11 08:14:23 crc kubenswrapper[5016]: I1011 08:14:23.166537 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="901099a9-9b33-4b7a-a393-25c97dff87b6" path="/var/lib/kubelet/pods/901099a9-9b33-4b7a-a393-25c97dff87b6/volumes"
Oct 11 08:14:23 crc kubenswrapper[5016]: I1011 08:14:23.167798 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9136e11e-b30a-4619-82aa-fac539c76b6f" path="/var/lib/kubelet/pods/9136e11e-b30a-4619-82aa-fac539c76b6f/volumes"
Oct 11 08:14:23 crc kubenswrapper[5016]: I1011 08:14:23.169101 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da5ee152-38b6-41b3-8c8c-c1051b5621f5" path="/var/lib/kubelet/pods/da5ee152-38b6-41b3-8c8c-c1051b5621f5/volumes"
Oct 11 08:14:23 crc kubenswrapper[5016]: I1011 08:14:23.170495 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2f282f8-2623-456d-8ff2-326d606ce468" path="/var/lib/kubelet/pods/f2f282f8-2623-456d-8ff2-326d606ce468/volumes"
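The burst of "Cleaned up orphaned pod volumes dir" entries above is kubelet's housekeeping for pods deleted from the API (the SyncLoop DELETE/REMOVE lines just before): once nothing remains mounted under a dead pod's directory, /var/lib/kubelet/pods/<uid>/volumes is removed. An illustrative sketch of that check, not kubelet source; active_uids and the emptiness test stand in for the pod manager and mount bookkeeping:

    # Illustrative sketch: remove volume dirs of pods that no longer exist.
    import os, shutil

    PODS_DIR = "/var/lib/kubelet/pods"  # path seen in the log entries

    def cleanup_orphaned_volume_dirs(active_uids: set) -> None:
        for uid in os.listdir(PODS_DIR):
            volumes = os.path.join(PODS_DIR, uid, "volumes")
            if uid in active_uids or not os.path.isdir(volumes):
                continue
            if not any(os.scandir(volumes)):  # nothing still mounted or present
                shutil.rmtree(volumes)
                print(f'Cleaned up orphaned pod volumes dir podUID="{uid}" path="{volumes}"')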
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc" Oct 11 08:14:27 crc kubenswrapper[5016]: I1011 08:14:27.339091 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 08:14:27 crc kubenswrapper[5016]: I1011 08:14:27.343315 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 08:14:27 crc kubenswrapper[5016]: I1011 08:14:27.343323 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l8l9k" Oct 11 08:14:27 crc kubenswrapper[5016]: I1011 08:14:27.343723 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 11 08:14:27 crc kubenswrapper[5016]: I1011 08:14:27.348171 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc"] Oct 11 08:14:27 crc kubenswrapper[5016]: I1011 08:14:27.356232 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 11 08:14:27 crc kubenswrapper[5016]: I1011 08:14:27.409586 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aba973bc-fbe0-437c-a640-41a201be1735-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc\" (UID: \"aba973bc-fbe0-437c-a640-41a201be1735\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc" Oct 11 08:14:27 crc kubenswrapper[5016]: I1011 08:14:27.410071 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzwvd\" (UniqueName: \"kubernetes.io/projected/aba973bc-fbe0-437c-a640-41a201be1735-kube-api-access-bzwvd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc\" (UID: \"aba973bc-fbe0-437c-a640-41a201be1735\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc" Oct 11 08:14:27 crc kubenswrapper[5016]: I1011 08:14:27.410188 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aba973bc-fbe0-437c-a640-41a201be1735-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc\" (UID: \"aba973bc-fbe0-437c-a640-41a201be1735\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc" Oct 11 08:14:27 crc kubenswrapper[5016]: I1011 08:14:27.410235 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aba973bc-fbe0-437c-a640-41a201be1735-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc\" (UID: \"aba973bc-fbe0-437c-a640-41a201be1735\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc" Oct 11 08:14:27 crc kubenswrapper[5016]: I1011 08:14:27.410266 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aba973bc-fbe0-437c-a640-41a201be1735-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc\" (UID: \"aba973bc-fbe0-437c-a640-41a201be1735\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc" Oct 11 08:14:27 crc kubenswrapper[5016]: I1011 08:14:27.511045 5016 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aba973bc-fbe0-437c-a640-41a201be1735-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc\" (UID: \"aba973bc-fbe0-437c-a640-41a201be1735\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc" Oct 11 08:14:27 crc kubenswrapper[5016]: I1011 08:14:27.511109 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aba973bc-fbe0-437c-a640-41a201be1735-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc\" (UID: \"aba973bc-fbe0-437c-a640-41a201be1735\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc" Oct 11 08:14:27 crc kubenswrapper[5016]: I1011 08:14:27.511142 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aba973bc-fbe0-437c-a640-41a201be1735-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc\" (UID: \"aba973bc-fbe0-437c-a640-41a201be1735\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc" Oct 11 08:14:27 crc kubenswrapper[5016]: I1011 08:14:27.511183 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aba973bc-fbe0-437c-a640-41a201be1735-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc\" (UID: \"aba973bc-fbe0-437c-a640-41a201be1735\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc" Oct 11 08:14:27 crc kubenswrapper[5016]: I1011 08:14:27.511321 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzwvd\" (UniqueName: \"kubernetes.io/projected/aba973bc-fbe0-437c-a640-41a201be1735-kube-api-access-bzwvd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc\" (UID: \"aba973bc-fbe0-437c-a640-41a201be1735\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc" Oct 11 08:14:27 crc kubenswrapper[5016]: I1011 08:14:27.519619 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aba973bc-fbe0-437c-a640-41a201be1735-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc\" (UID: \"aba973bc-fbe0-437c-a640-41a201be1735\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc" Oct 11 08:14:27 crc kubenswrapper[5016]: I1011 08:14:27.519644 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aba973bc-fbe0-437c-a640-41a201be1735-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc\" (UID: \"aba973bc-fbe0-437c-a640-41a201be1735\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc" Oct 11 08:14:27 crc kubenswrapper[5016]: I1011 08:14:27.521527 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aba973bc-fbe0-437c-a640-41a201be1735-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc\" (UID: \"aba973bc-fbe0-437c-a640-41a201be1735\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc" Oct 11 08:14:27 crc kubenswrapper[5016]: I1011 08:14:27.523518 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/aba973bc-fbe0-437c-a640-41a201be1735-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc\" (UID: \"aba973bc-fbe0-437c-a640-41a201be1735\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc" Oct 11 08:14:27 crc kubenswrapper[5016]: I1011 08:14:27.530749 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzwvd\" (UniqueName: \"kubernetes.io/projected/aba973bc-fbe0-437c-a640-41a201be1735-kube-api-access-bzwvd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc\" (UID: \"aba973bc-fbe0-437c-a640-41a201be1735\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc" Oct 11 08:14:27 crc kubenswrapper[5016]: I1011 08:14:27.682859 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc" Oct 11 08:14:28 crc kubenswrapper[5016]: I1011 08:14:28.072096 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc"] Oct 11 08:14:28 crc kubenswrapper[5016]: I1011 08:14:28.081605 5016 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 08:14:28 crc kubenswrapper[5016]: I1011 08:14:28.648033 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc" event={"ID":"aba973bc-fbe0-437c-a640-41a201be1735","Type":"ContainerStarted","Data":"6d3675ab4a7efb3c4e2279ca236489a01b1ba58686516afc5e1c2707626d6ebc"} Oct 11 08:14:29 crc kubenswrapper[5016]: I1011 08:14:29.663493 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc" event={"ID":"aba973bc-fbe0-437c-a640-41a201be1735","Type":"ContainerStarted","Data":"194eaa3cfc68abaf522a5481ba2d4e714fba0fa24c9180fd417ab9e8eb5580aa"} Oct 11 08:14:29 crc kubenswrapper[5016]: I1011 08:14:29.706592 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc" podStartSLOduration=2.162786116 podStartE2EDuration="2.706558097s" podCreationTimestamp="2025-10-11 08:14:27 +0000 UTC" firstStartedPulling="2025-10-11 08:14:28.081274391 +0000 UTC m=+2055.981730337" lastFinishedPulling="2025-10-11 08:14:28.625046362 +0000 UTC m=+2056.525502318" observedRunningTime="2025-10-11 08:14:29.695777327 +0000 UTC m=+2057.596233303" watchObservedRunningTime="2025-10-11 08:14:29.706558097 +0000 UTC m=+2057.607014053" Oct 11 08:14:39 crc kubenswrapper[5016]: I1011 08:14:39.582208 5016 scope.go:117] "RemoveContainer" containerID="158763ec1a6992bb2ab444375812ba3a85dad21871f832b51b70e54222d33204" Oct 11 08:14:39 crc kubenswrapper[5016]: I1011 08:14:39.632219 5016 scope.go:117] "RemoveContainer" containerID="b9263ffa1926c7f528b8dda573f796fdc79e1891cf76b6e4ac79bf711f081bfa" Oct 11 08:14:39 crc kubenswrapper[5016]: I1011 08:14:39.699383 5016 scope.go:117] "RemoveContainer" containerID="06df628d62c66d31a62667935d049e242739dee18059b3a6dd107b701600be1d" Oct 11 08:14:39 crc kubenswrapper[5016]: I1011 08:14:39.826010 5016 scope.go:117] "RemoveContainer" containerID="e2b0b4636dea8548e9e7c983408162126b8d307c2b86a459ca84d06cc2c7ac92" Oct 11 08:14:39 crc kubenswrapper[5016]: I1011 08:14:39.850142 5016 scope.go:117] "RemoveContainer" containerID="06db77aa2e463ea5df94c514d3ff0f0090ccbf2ece9bae922c0a46f9b1fe730e" Oct 11 08:14:39 crc kubenswrapper[5016]: I1011 08:14:39.933715 5016 
scope.go:117] "RemoveContainer" containerID="e79ecb022f4e4871aad5b3263de7e7f0c8baa897b9f31b7851a0a81841cf9073" Oct 11 08:14:39 crc kubenswrapper[5016]: I1011 08:14:39.993187 5016 scope.go:117] "RemoveContainer" containerID="d12658263a65ff9efe55ec757e654d1c873018ea9014003e416aefecc4703b9d" Oct 11 08:14:40 crc kubenswrapper[5016]: I1011 08:14:40.044162 5016 scope.go:117] "RemoveContainer" containerID="813956b703d2027fd7a6771cf0819631bf908105d43210daafb51c1b692ef1fa" Oct 11 08:14:40 crc kubenswrapper[5016]: I1011 08:14:40.101907 5016 scope.go:117] "RemoveContainer" containerID="2120ad966597ebb2bb54c41d8407a44dd89fb77c469e4d21ab4019e5dadf05ff" Oct 11 08:14:40 crc kubenswrapper[5016]: I1011 08:14:40.151561 5016 scope.go:117] "RemoveContainer" containerID="5cc0405e93b13f7c8c00a3ced96cffbb857912db04d9dceb0877e502401617b2" Oct 11 08:14:40 crc kubenswrapper[5016]: I1011 08:14:40.185535 5016 scope.go:117] "RemoveContainer" containerID="89e736059eae7bd9b1c804e4fd2a78b1dc6e88255cdd107364b0e3270a475719" Oct 11 08:14:40 crc kubenswrapper[5016]: I1011 08:14:40.226799 5016 scope.go:117] "RemoveContainer" containerID="e7ff25db7475fd57bfd7766cee560adcac96e8ee824233ad31a7f7da81a5d6b4" Oct 11 08:14:40 crc kubenswrapper[5016]: I1011 08:14:40.303041 5016 scope.go:117] "RemoveContainer" containerID="b16f35fd4574311bece34fd97320002f83f216ff65a9373375a757a698e60bdb" Oct 11 08:14:40 crc kubenswrapper[5016]: I1011 08:14:40.833893 5016 generic.go:334] "Generic (PLEG): container finished" podID="aba973bc-fbe0-437c-a640-41a201be1735" containerID="194eaa3cfc68abaf522a5481ba2d4e714fba0fa24c9180fd417ab9e8eb5580aa" exitCode=0 Oct 11 08:14:40 crc kubenswrapper[5016]: I1011 08:14:40.833977 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc" event={"ID":"aba973bc-fbe0-437c-a640-41a201be1735","Type":"ContainerDied","Data":"194eaa3cfc68abaf522a5481ba2d4e714fba0fa24c9180fd417ab9e8eb5580aa"} Oct 11 08:14:42 crc kubenswrapper[5016]: I1011 08:14:42.412189 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc" Oct 11 08:14:42 crc kubenswrapper[5016]: I1011 08:14:42.513879 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aba973bc-fbe0-437c-a640-41a201be1735-inventory\") pod \"aba973bc-fbe0-437c-a640-41a201be1735\" (UID: \"aba973bc-fbe0-437c-a640-41a201be1735\") " Oct 11 08:14:42 crc kubenswrapper[5016]: I1011 08:14:42.513939 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aba973bc-fbe0-437c-a640-41a201be1735-ceph\") pod \"aba973bc-fbe0-437c-a640-41a201be1735\" (UID: \"aba973bc-fbe0-437c-a640-41a201be1735\") " Oct 11 08:14:42 crc kubenswrapper[5016]: I1011 08:14:42.513998 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzwvd\" (UniqueName: \"kubernetes.io/projected/aba973bc-fbe0-437c-a640-41a201be1735-kube-api-access-bzwvd\") pod \"aba973bc-fbe0-437c-a640-41a201be1735\" (UID: \"aba973bc-fbe0-437c-a640-41a201be1735\") " Oct 11 08:14:42 crc kubenswrapper[5016]: I1011 08:14:42.514041 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aba973bc-fbe0-437c-a640-41a201be1735-ssh-key\") pod \"aba973bc-fbe0-437c-a640-41a201be1735\" (UID: \"aba973bc-fbe0-437c-a640-41a201be1735\") " Oct 11 08:14:42 crc kubenswrapper[5016]: I1011 08:14:42.514110 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aba973bc-fbe0-437c-a640-41a201be1735-repo-setup-combined-ca-bundle\") pod \"aba973bc-fbe0-437c-a640-41a201be1735\" (UID: \"aba973bc-fbe0-437c-a640-41a201be1735\") " Oct 11 08:14:42 crc kubenswrapper[5016]: I1011 08:14:42.521880 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aba973bc-fbe0-437c-a640-41a201be1735-ceph" (OuterVolumeSpecName: "ceph") pod "aba973bc-fbe0-437c-a640-41a201be1735" (UID: "aba973bc-fbe0-437c-a640-41a201be1735"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:14:42 crc kubenswrapper[5016]: I1011 08:14:42.522083 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aba973bc-fbe0-437c-a640-41a201be1735-kube-api-access-bzwvd" (OuterVolumeSpecName: "kube-api-access-bzwvd") pod "aba973bc-fbe0-437c-a640-41a201be1735" (UID: "aba973bc-fbe0-437c-a640-41a201be1735"). InnerVolumeSpecName "kube-api-access-bzwvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:14:42 crc kubenswrapper[5016]: I1011 08:14:42.522303 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aba973bc-fbe0-437c-a640-41a201be1735-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "aba973bc-fbe0-437c-a640-41a201be1735" (UID: "aba973bc-fbe0-437c-a640-41a201be1735"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:14:42 crc kubenswrapper[5016]: I1011 08:14:42.543814 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aba973bc-fbe0-437c-a640-41a201be1735-inventory" (OuterVolumeSpecName: "inventory") pod "aba973bc-fbe0-437c-a640-41a201be1735" (UID: "aba973bc-fbe0-437c-a640-41a201be1735"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:14:42 crc kubenswrapper[5016]: I1011 08:14:42.559553 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aba973bc-fbe0-437c-a640-41a201be1735-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "aba973bc-fbe0-437c-a640-41a201be1735" (UID: "aba973bc-fbe0-437c-a640-41a201be1735"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:14:42 crc kubenswrapper[5016]: I1011 08:14:42.616218 5016 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aba973bc-fbe0-437c-a640-41a201be1735-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 08:14:42 crc kubenswrapper[5016]: I1011 08:14:42.616254 5016 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aba973bc-fbe0-437c-a640-41a201be1735-inventory\") on node \"crc\" DevicePath \"\"" Oct 11 08:14:42 crc kubenswrapper[5016]: I1011 08:14:42.616267 5016 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/aba973bc-fbe0-437c-a640-41a201be1735-ceph\") on node \"crc\" DevicePath \"\"" Oct 11 08:14:42 crc kubenswrapper[5016]: I1011 08:14:42.616934 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzwvd\" (UniqueName: \"kubernetes.io/projected/aba973bc-fbe0-437c-a640-41a201be1735-kube-api-access-bzwvd\") on node \"crc\" DevicePath \"\"" Oct 11 08:14:42 crc kubenswrapper[5016]: I1011 08:14:42.616962 5016 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/aba973bc-fbe0-437c-a640-41a201be1735-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 11 08:14:42 crc kubenswrapper[5016]: I1011 08:14:42.852533 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc" event={"ID":"aba973bc-fbe0-437c-a640-41a201be1735","Type":"ContainerDied","Data":"6d3675ab4a7efb3c4e2279ca236489a01b1ba58686516afc5e1c2707626d6ebc"} Oct 11 08:14:42 crc kubenswrapper[5016]: I1011 08:14:42.852573 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d3675ab4a7efb3c4e2279ca236489a01b1ba58686516afc5e1c2707626d6ebc" Oct 11 08:14:42 crc kubenswrapper[5016]: I1011 08:14:42.852588 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc" Oct 11 08:14:42 crc kubenswrapper[5016]: I1011 08:14:42.964268 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7"] Oct 11 08:14:42 crc kubenswrapper[5016]: E1011 08:14:42.965123 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aba973bc-fbe0-437c-a640-41a201be1735" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 11 08:14:42 crc kubenswrapper[5016]: I1011 08:14:42.965256 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="aba973bc-fbe0-437c-a640-41a201be1735" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 11 08:14:42 crc kubenswrapper[5016]: I1011 08:14:42.965632 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="aba973bc-fbe0-437c-a640-41a201be1735" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 11 08:14:42 crc kubenswrapper[5016]: I1011 08:14:42.966739 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7" Oct 11 08:14:42 crc kubenswrapper[5016]: I1011 08:14:42.973490 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7"] Oct 11 08:14:42 crc kubenswrapper[5016]: I1011 08:14:42.977360 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 08:14:42 crc kubenswrapper[5016]: I1011 08:14:42.977407 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 08:14:42 crc kubenswrapper[5016]: I1011 08:14:42.977684 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 11 08:14:42 crc kubenswrapper[5016]: I1011 08:14:42.978168 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l8l9k" Oct 11 08:14:42 crc kubenswrapper[5016]: I1011 08:14:42.978501 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 11 08:14:43 crc kubenswrapper[5016]: I1011 08:14:43.024639 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/729b92c8-604e-4a61-b146-f0f4dc9d00d5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7\" (UID: \"729b92c8-604e-4a61-b146-f0f4dc9d00d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7" Oct 11 08:14:43 crc kubenswrapper[5016]: I1011 08:14:43.025219 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/729b92c8-604e-4a61-b146-f0f4dc9d00d5-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7\" (UID: \"729b92c8-604e-4a61-b146-f0f4dc9d00d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7" Oct 11 08:14:43 crc kubenswrapper[5016]: I1011 08:14:43.025343 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/729b92c8-604e-4a61-b146-f0f4dc9d00d5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7\" (UID: \"729b92c8-604e-4a61-b146-f0f4dc9d00d5\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7" Oct 11 08:14:43 crc kubenswrapper[5016]: I1011 08:14:43.025519 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/729b92c8-604e-4a61-b146-f0f4dc9d00d5-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7\" (UID: \"729b92c8-604e-4a61-b146-f0f4dc9d00d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7" Oct 11 08:14:43 crc kubenswrapper[5016]: I1011 08:14:43.025784 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5pk8\" (UniqueName: \"kubernetes.io/projected/729b92c8-604e-4a61-b146-f0f4dc9d00d5-kube-api-access-k5pk8\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7\" (UID: \"729b92c8-604e-4a61-b146-f0f4dc9d00d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7" Oct 11 08:14:43 crc kubenswrapper[5016]: I1011 08:14:43.128793 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/729b92c8-604e-4a61-b146-f0f4dc9d00d5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7\" (UID: \"729b92c8-604e-4a61-b146-f0f4dc9d00d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7" Oct 11 08:14:43 crc kubenswrapper[5016]: I1011 08:14:43.129009 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/729b92c8-604e-4a61-b146-f0f4dc9d00d5-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7\" (UID: \"729b92c8-604e-4a61-b146-f0f4dc9d00d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7" Oct 11 08:14:43 crc kubenswrapper[5016]: I1011 08:14:43.129825 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/729b92c8-604e-4a61-b146-f0f4dc9d00d5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7\" (UID: \"729b92c8-604e-4a61-b146-f0f4dc9d00d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7" Oct 11 08:14:43 crc kubenswrapper[5016]: I1011 08:14:43.129936 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/729b92c8-604e-4a61-b146-f0f4dc9d00d5-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7\" (UID: \"729b92c8-604e-4a61-b146-f0f4dc9d00d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7" Oct 11 08:14:43 crc kubenswrapper[5016]: I1011 08:14:43.130113 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5pk8\" (UniqueName: \"kubernetes.io/projected/729b92c8-604e-4a61-b146-f0f4dc9d00d5-kube-api-access-k5pk8\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7\" (UID: \"729b92c8-604e-4a61-b146-f0f4dc9d00d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7" Oct 11 08:14:43 crc kubenswrapper[5016]: I1011 08:14:43.134794 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/729b92c8-604e-4a61-b146-f0f4dc9d00d5-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7\" (UID: \"729b92c8-604e-4a61-b146-f0f4dc9d00d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7" Oct 11 08:14:43 crc 
kubenswrapper[5016]: I1011 08:14:43.135403 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/729b92c8-604e-4a61-b146-f0f4dc9d00d5-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7\" (UID: \"729b92c8-604e-4a61-b146-f0f4dc9d00d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7" Oct 11 08:14:43 crc kubenswrapper[5016]: I1011 08:14:43.138303 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/729b92c8-604e-4a61-b146-f0f4dc9d00d5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7\" (UID: \"729b92c8-604e-4a61-b146-f0f4dc9d00d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7" Oct 11 08:14:43 crc kubenswrapper[5016]: I1011 08:14:43.140830 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/729b92c8-604e-4a61-b146-f0f4dc9d00d5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7\" (UID: \"729b92c8-604e-4a61-b146-f0f4dc9d00d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7" Oct 11 08:14:43 crc kubenswrapper[5016]: I1011 08:14:43.154004 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5pk8\" (UniqueName: \"kubernetes.io/projected/729b92c8-604e-4a61-b146-f0f4dc9d00d5-kube-api-access-k5pk8\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7\" (UID: \"729b92c8-604e-4a61-b146-f0f4dc9d00d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7" Oct 11 08:14:43 crc kubenswrapper[5016]: I1011 08:14:43.296779 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7" Oct 11 08:14:43 crc kubenswrapper[5016]: I1011 08:14:43.909143 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7"] Oct 11 08:14:44 crc kubenswrapper[5016]: I1011 08:14:44.871236 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7" event={"ID":"729b92c8-604e-4a61-b146-f0f4dc9d00d5","Type":"ContainerStarted","Data":"7e8fe5f00d243c8231b2f59cae54e329d1daba4aa7cd2f37c9bf50d900e1f5b2"} Oct 11 08:14:44 crc kubenswrapper[5016]: I1011 08:14:44.871735 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7" event={"ID":"729b92c8-604e-4a61-b146-f0f4dc9d00d5","Type":"ContainerStarted","Data":"37d91abf9b7e5e895df4e1b7a2c5ce78f2571353980a33b68885ceb1a2608824"} Oct 11 08:14:44 crc kubenswrapper[5016]: I1011 08:14:44.892129 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7" podStartSLOduration=2.3823422450000002 podStartE2EDuration="2.892105108s" podCreationTimestamp="2025-10-11 08:14:42 +0000 UTC" firstStartedPulling="2025-10-11 08:14:43.919517498 +0000 UTC m=+2071.819973454" lastFinishedPulling="2025-10-11 08:14:44.429280371 +0000 UTC m=+2072.329736317" observedRunningTime="2025-10-11 08:14:44.89105392 +0000 UTC m=+2072.791509866" watchObservedRunningTime="2025-10-11 08:14:44.892105108 +0000 UTC m=+2072.792561064" Oct 11 08:15:00 crc kubenswrapper[5016]: I1011 08:15:00.156469 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336175-7c25j"] Oct 11 08:15:00 crc kubenswrapper[5016]: I1011 08:15:00.161253 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336175-7c25j" Oct 11 08:15:00 crc kubenswrapper[5016]: I1011 08:15:00.165731 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 11 08:15:00 crc kubenswrapper[5016]: I1011 08:15:00.165756 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 11 08:15:00 crc kubenswrapper[5016]: I1011 08:15:00.170989 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336175-7c25j"] Oct 11 08:15:00 crc kubenswrapper[5016]: I1011 08:15:00.256845 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpwn6\" (UniqueName: \"kubernetes.io/projected/284d8b10-4a92-482a-b882-5cb28395f892-kube-api-access-cpwn6\") pod \"collect-profiles-29336175-7c25j\" (UID: \"284d8b10-4a92-482a-b882-5cb28395f892\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336175-7c25j" Oct 11 08:15:00 crc kubenswrapper[5016]: I1011 08:15:00.257127 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/284d8b10-4a92-482a-b882-5cb28395f892-config-volume\") pod \"collect-profiles-29336175-7c25j\" (UID: \"284d8b10-4a92-482a-b882-5cb28395f892\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336175-7c25j" Oct 11 08:15:00 crc kubenswrapper[5016]: I1011 08:15:00.257645 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/284d8b10-4a92-482a-b882-5cb28395f892-secret-volume\") pod \"collect-profiles-29336175-7c25j\" (UID: \"284d8b10-4a92-482a-b882-5cb28395f892\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336175-7c25j" Oct 11 08:15:00 crc kubenswrapper[5016]: I1011 08:15:00.359937 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/284d8b10-4a92-482a-b882-5cb28395f892-secret-volume\") pod \"collect-profiles-29336175-7c25j\" (UID: \"284d8b10-4a92-482a-b882-5cb28395f892\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336175-7c25j" Oct 11 08:15:00 crc kubenswrapper[5016]: I1011 08:15:00.360034 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpwn6\" (UniqueName: \"kubernetes.io/projected/284d8b10-4a92-482a-b882-5cb28395f892-kube-api-access-cpwn6\") pod \"collect-profiles-29336175-7c25j\" (UID: \"284d8b10-4a92-482a-b882-5cb28395f892\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336175-7c25j" Oct 11 08:15:00 crc kubenswrapper[5016]: I1011 08:15:00.360059 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/284d8b10-4a92-482a-b882-5cb28395f892-config-volume\") pod \"collect-profiles-29336175-7c25j\" (UID: \"284d8b10-4a92-482a-b882-5cb28395f892\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336175-7c25j" Oct 11 08:15:00 crc kubenswrapper[5016]: I1011 08:15:00.361087 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/284d8b10-4a92-482a-b882-5cb28395f892-config-volume\") pod 
\"collect-profiles-29336175-7c25j\" (UID: \"284d8b10-4a92-482a-b882-5cb28395f892\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336175-7c25j" Oct 11 08:15:00 crc kubenswrapper[5016]: I1011 08:15:00.369462 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/284d8b10-4a92-482a-b882-5cb28395f892-secret-volume\") pod \"collect-profiles-29336175-7c25j\" (UID: \"284d8b10-4a92-482a-b882-5cb28395f892\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336175-7c25j" Oct 11 08:15:00 crc kubenswrapper[5016]: I1011 08:15:00.383908 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpwn6\" (UniqueName: \"kubernetes.io/projected/284d8b10-4a92-482a-b882-5cb28395f892-kube-api-access-cpwn6\") pod \"collect-profiles-29336175-7c25j\" (UID: \"284d8b10-4a92-482a-b882-5cb28395f892\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336175-7c25j" Oct 11 08:15:00 crc kubenswrapper[5016]: I1011 08:15:00.500304 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336175-7c25j" Oct 11 08:15:00 crc kubenswrapper[5016]: I1011 08:15:00.998241 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336175-7c25j"] Oct 11 08:15:01 crc kubenswrapper[5016]: W1011 08:15:01.011870 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod284d8b10_4a92_482a_b882_5cb28395f892.slice/crio-062302cd8c17c8a02e14ceed51bcc596a9d6d5e12176613772ff9bf75f568344 WatchSource:0}: Error finding container 062302cd8c17c8a02e14ceed51bcc596a9d6d5e12176613772ff9bf75f568344: Status 404 returned error can't find the container with id 062302cd8c17c8a02e14ceed51bcc596a9d6d5e12176613772ff9bf75f568344 Oct 11 08:15:01 crc kubenswrapper[5016]: I1011 08:15:01.029561 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336175-7c25j" event={"ID":"284d8b10-4a92-482a-b882-5cb28395f892","Type":"ContainerStarted","Data":"062302cd8c17c8a02e14ceed51bcc596a9d6d5e12176613772ff9bf75f568344"} Oct 11 08:15:02 crc kubenswrapper[5016]: I1011 08:15:02.043531 5016 generic.go:334] "Generic (PLEG): container finished" podID="284d8b10-4a92-482a-b882-5cb28395f892" containerID="84bb3de699edd10bc653b220d54bba0083c10cc30c8b5a3ea3cb82e6171473b2" exitCode=0 Oct 11 08:15:02 crc kubenswrapper[5016]: I1011 08:15:02.043612 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336175-7c25j" event={"ID":"284d8b10-4a92-482a-b882-5cb28395f892","Type":"ContainerDied","Data":"84bb3de699edd10bc653b220d54bba0083c10cc30c8b5a3ea3cb82e6171473b2"} Oct 11 08:15:03 crc kubenswrapper[5016]: I1011 08:15:03.458163 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336175-7c25j" Oct 11 08:15:03 crc kubenswrapper[5016]: I1011 08:15:03.524609 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/284d8b10-4a92-482a-b882-5cb28395f892-secret-volume\") pod \"284d8b10-4a92-482a-b882-5cb28395f892\" (UID: \"284d8b10-4a92-482a-b882-5cb28395f892\") " Oct 11 08:15:03 crc kubenswrapper[5016]: I1011 08:15:03.524746 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpwn6\" (UniqueName: \"kubernetes.io/projected/284d8b10-4a92-482a-b882-5cb28395f892-kube-api-access-cpwn6\") pod \"284d8b10-4a92-482a-b882-5cb28395f892\" (UID: \"284d8b10-4a92-482a-b882-5cb28395f892\") " Oct 11 08:15:03 crc kubenswrapper[5016]: I1011 08:15:03.524979 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/284d8b10-4a92-482a-b882-5cb28395f892-config-volume\") pod \"284d8b10-4a92-482a-b882-5cb28395f892\" (UID: \"284d8b10-4a92-482a-b882-5cb28395f892\") " Oct 11 08:15:03 crc kubenswrapper[5016]: I1011 08:15:03.525700 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/284d8b10-4a92-482a-b882-5cb28395f892-config-volume" (OuterVolumeSpecName: "config-volume") pod "284d8b10-4a92-482a-b882-5cb28395f892" (UID: "284d8b10-4a92-482a-b882-5cb28395f892"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 08:15:03 crc kubenswrapper[5016]: I1011 08:15:03.531128 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/284d8b10-4a92-482a-b882-5cb28395f892-kube-api-access-cpwn6" (OuterVolumeSpecName: "kube-api-access-cpwn6") pod "284d8b10-4a92-482a-b882-5cb28395f892" (UID: "284d8b10-4a92-482a-b882-5cb28395f892"). InnerVolumeSpecName "kube-api-access-cpwn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:15:03 crc kubenswrapper[5016]: I1011 08:15:03.532795 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/284d8b10-4a92-482a-b882-5cb28395f892-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "284d8b10-4a92-482a-b882-5cb28395f892" (UID: "284d8b10-4a92-482a-b882-5cb28395f892"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:15:03 crc kubenswrapper[5016]: I1011 08:15:03.626303 5016 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/284d8b10-4a92-482a-b882-5cb28395f892-config-volume\") on node \"crc\" DevicePath \"\"" Oct 11 08:15:03 crc kubenswrapper[5016]: I1011 08:15:03.626553 5016 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/284d8b10-4a92-482a-b882-5cb28395f892-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 11 08:15:03 crc kubenswrapper[5016]: I1011 08:15:03.626564 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpwn6\" (UniqueName: \"kubernetes.io/projected/284d8b10-4a92-482a-b882-5cb28395f892-kube-api-access-cpwn6\") on node \"crc\" DevicePath \"\"" Oct 11 08:15:04 crc kubenswrapper[5016]: I1011 08:15:04.066077 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336175-7c25j" event={"ID":"284d8b10-4a92-482a-b882-5cb28395f892","Type":"ContainerDied","Data":"062302cd8c17c8a02e14ceed51bcc596a9d6d5e12176613772ff9bf75f568344"} Oct 11 08:15:04 crc kubenswrapper[5016]: I1011 08:15:04.066117 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="062302cd8c17c8a02e14ceed51bcc596a9d6d5e12176613772ff9bf75f568344" Oct 11 08:15:04 crc kubenswrapper[5016]: I1011 08:15:04.066222 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336175-7c25j" Oct 11 08:15:04 crc kubenswrapper[5016]: I1011 08:15:04.534622 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336130-mtlhx"] Oct 11 08:15:04 crc kubenswrapper[5016]: I1011 08:15:04.540473 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336130-mtlhx"] Oct 11 08:15:05 crc kubenswrapper[5016]: I1011 08:15:05.149199 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2fca8b5-8ccb-4100-8570-82b07bdae3ee" path="/var/lib/kubelet/pods/a2fca8b5-8ccb-4100-8570-82b07bdae3ee/volumes" Oct 11 08:15:07 crc kubenswrapper[5016]: I1011 08:15:07.122205 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 08:15:07 crc kubenswrapper[5016]: I1011 08:15:07.122618 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 08:15:37 crc kubenswrapper[5016]: I1011 08:15:37.123216 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 08:15:37 crc kubenswrapper[5016]: I1011 08:15:37.123856 5016 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 08:15:39 crc kubenswrapper[5016]: I1011 08:15:39.240450 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f9psf"] Oct 11 08:15:39 crc kubenswrapper[5016]: E1011 08:15:39.241418 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="284d8b10-4a92-482a-b882-5cb28395f892" containerName="collect-profiles" Oct 11 08:15:39 crc kubenswrapper[5016]: I1011 08:15:39.241443 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="284d8b10-4a92-482a-b882-5cb28395f892" containerName="collect-profiles" Oct 11 08:15:39 crc kubenswrapper[5016]: I1011 08:15:39.241817 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="284d8b10-4a92-482a-b882-5cb28395f892" containerName="collect-profiles" Oct 11 08:15:39 crc kubenswrapper[5016]: I1011 08:15:39.244048 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f9psf" Oct 11 08:15:39 crc kubenswrapper[5016]: I1011 08:15:39.260791 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f9psf"] Oct 11 08:15:39 crc kubenswrapper[5016]: I1011 08:15:39.404529 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6636b459-4e9e-42d1-820b-83c83cb1be64-utilities\") pod \"redhat-operators-f9psf\" (UID: \"6636b459-4e9e-42d1-820b-83c83cb1be64\") " pod="openshift-marketplace/redhat-operators-f9psf" Oct 11 08:15:39 crc kubenswrapper[5016]: I1011 08:15:39.404845 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6636b459-4e9e-42d1-820b-83c83cb1be64-catalog-content\") pod \"redhat-operators-f9psf\" (UID: \"6636b459-4e9e-42d1-820b-83c83cb1be64\") " pod="openshift-marketplace/redhat-operators-f9psf" Oct 11 08:15:39 crc kubenswrapper[5016]: I1011 08:15:39.405013 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hd22\" (UniqueName: \"kubernetes.io/projected/6636b459-4e9e-42d1-820b-83c83cb1be64-kube-api-access-9hd22\") pod \"redhat-operators-f9psf\" (UID: \"6636b459-4e9e-42d1-820b-83c83cb1be64\") " pod="openshift-marketplace/redhat-operators-f9psf" Oct 11 08:15:39 crc kubenswrapper[5016]: I1011 08:15:39.506552 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hd22\" (UniqueName: \"kubernetes.io/projected/6636b459-4e9e-42d1-820b-83c83cb1be64-kube-api-access-9hd22\") pod \"redhat-operators-f9psf\" (UID: \"6636b459-4e9e-42d1-820b-83c83cb1be64\") " pod="openshift-marketplace/redhat-operators-f9psf" Oct 11 08:15:39 crc kubenswrapper[5016]: I1011 08:15:39.506785 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6636b459-4e9e-42d1-820b-83c83cb1be64-utilities\") pod \"redhat-operators-f9psf\" (UID: \"6636b459-4e9e-42d1-820b-83c83cb1be64\") " pod="openshift-marketplace/redhat-operators-f9psf" Oct 11 08:15:39 crc kubenswrapper[5016]: I1011 08:15:39.506841 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6636b459-4e9e-42d1-820b-83c83cb1be64-catalog-content\") pod \"redhat-operators-f9psf\" (UID: \"6636b459-4e9e-42d1-820b-83c83cb1be64\") " pod="openshift-marketplace/redhat-operators-f9psf" Oct 11 08:15:39 crc kubenswrapper[5016]: I1011 08:15:39.507711 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6636b459-4e9e-42d1-820b-83c83cb1be64-catalog-content\") pod \"redhat-operators-f9psf\" (UID: \"6636b459-4e9e-42d1-820b-83c83cb1be64\") " pod="openshift-marketplace/redhat-operators-f9psf" Oct 11 08:15:39 crc kubenswrapper[5016]: I1011 08:15:39.507841 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6636b459-4e9e-42d1-820b-83c83cb1be64-utilities\") pod \"redhat-operators-f9psf\" (UID: \"6636b459-4e9e-42d1-820b-83c83cb1be64\") " pod="openshift-marketplace/redhat-operators-f9psf" Oct 11 08:15:39 crc kubenswrapper[5016]: I1011 08:15:39.527982 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hd22\" (UniqueName: \"kubernetes.io/projected/6636b459-4e9e-42d1-820b-83c83cb1be64-kube-api-access-9hd22\") pod \"redhat-operators-f9psf\" (UID: \"6636b459-4e9e-42d1-820b-83c83cb1be64\") " pod="openshift-marketplace/redhat-operators-f9psf" Oct 11 08:15:39 crc kubenswrapper[5016]: I1011 08:15:39.576176 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f9psf" Oct 11 08:15:40 crc kubenswrapper[5016]: I1011 08:15:40.021794 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f9psf"] Oct 11 08:15:40 crc kubenswrapper[5016]: I1011 08:15:40.454990 5016 generic.go:334] "Generic (PLEG): container finished" podID="6636b459-4e9e-42d1-820b-83c83cb1be64" containerID="49dc19eed51127d2b661b96166989aff5f700c31243be63a82ef12b45133ec96" exitCode=0 Oct 11 08:15:40 crc kubenswrapper[5016]: I1011 08:15:40.455035 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9psf" event={"ID":"6636b459-4e9e-42d1-820b-83c83cb1be64","Type":"ContainerDied","Data":"49dc19eed51127d2b661b96166989aff5f700c31243be63a82ef12b45133ec96"} Oct 11 08:15:40 crc kubenswrapper[5016]: I1011 08:15:40.455062 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9psf" event={"ID":"6636b459-4e9e-42d1-820b-83c83cb1be64","Type":"ContainerStarted","Data":"3ad7537933540ef0ea1bd69e44bf290661d47252bb332733a8a5eb940ea04087"} Oct 11 08:15:40 crc kubenswrapper[5016]: I1011 08:15:40.590396 5016 scope.go:117] "RemoveContainer" containerID="49acf16ebe07ce43cd06d60190763591dd2f413ebc52bf1f1cce44684493a7b0" Oct 11 08:15:40 crc kubenswrapper[5016]: I1011 08:15:40.621810 5016 scope.go:117] "RemoveContainer" containerID="75a011fcbc4c849ec1e506fbdc328a7fc66a856e7a8b26e53b7ee3501bef9b13" Oct 11 08:15:41 crc kubenswrapper[5016]: I1011 08:15:41.480454 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9psf" event={"ID":"6636b459-4e9e-42d1-820b-83c83cb1be64","Type":"ContainerStarted","Data":"facc7fb9c3df7f8677c92170b980bb2a9fb6ac4aa26263683ca46892573e7386"} Oct 11 08:15:43 crc kubenswrapper[5016]: I1011 08:15:43.507064 5016 generic.go:334] "Generic (PLEG): container finished" podID="6636b459-4e9e-42d1-820b-83c83cb1be64" 
containerID="facc7fb9c3df7f8677c92170b980bb2a9fb6ac4aa26263683ca46892573e7386" exitCode=0 Oct 11 08:15:43 crc kubenswrapper[5016]: I1011 08:15:43.507174 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9psf" event={"ID":"6636b459-4e9e-42d1-820b-83c83cb1be64","Type":"ContainerDied","Data":"facc7fb9c3df7f8677c92170b980bb2a9fb6ac4aa26263683ca46892573e7386"} Oct 11 08:15:44 crc kubenswrapper[5016]: I1011 08:15:44.520561 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9psf" event={"ID":"6636b459-4e9e-42d1-820b-83c83cb1be64","Type":"ContainerStarted","Data":"b5ac3a7165d6b20fc2fd7f72f09699987a159cc638fbf5d9204a4122b51ef34c"} Oct 11 08:15:44 crc kubenswrapper[5016]: I1011 08:15:44.554767 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f9psf" podStartSLOduration=1.937063168 podStartE2EDuration="5.554726956s" podCreationTimestamp="2025-10-11 08:15:39 +0000 UTC" firstStartedPulling="2025-10-11 08:15:40.456646465 +0000 UTC m=+2128.357102411" lastFinishedPulling="2025-10-11 08:15:44.074310213 +0000 UTC m=+2131.974766199" observedRunningTime="2025-10-11 08:15:44.547091946 +0000 UTC m=+2132.447547892" watchObservedRunningTime="2025-10-11 08:15:44.554726956 +0000 UTC m=+2132.455182932" Oct 11 08:15:49 crc kubenswrapper[5016]: I1011 08:15:49.576489 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f9psf" Oct 11 08:15:49 crc kubenswrapper[5016]: I1011 08:15:49.577368 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f9psf" Oct 11 08:15:49 crc kubenswrapper[5016]: I1011 08:15:49.653633 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f9psf" Oct 11 08:15:50 crc kubenswrapper[5016]: I1011 08:15:50.668479 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f9psf" Oct 11 08:15:50 crc kubenswrapper[5016]: I1011 08:15:50.721224 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f9psf"] Oct 11 08:15:52 crc kubenswrapper[5016]: I1011 08:15:52.634715 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f9psf" podUID="6636b459-4e9e-42d1-820b-83c83cb1be64" containerName="registry-server" containerID="cri-o://b5ac3a7165d6b20fc2fd7f72f09699987a159cc638fbf5d9204a4122b51ef34c" gracePeriod=2 Oct 11 08:15:53 crc kubenswrapper[5016]: I1011 08:15:53.126489 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f9psf" Oct 11 08:15:53 crc kubenswrapper[5016]: I1011 08:15:53.228472 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hd22\" (UniqueName: \"kubernetes.io/projected/6636b459-4e9e-42d1-820b-83c83cb1be64-kube-api-access-9hd22\") pod \"6636b459-4e9e-42d1-820b-83c83cb1be64\" (UID: \"6636b459-4e9e-42d1-820b-83c83cb1be64\") " Oct 11 08:15:53 crc kubenswrapper[5016]: I1011 08:15:53.228621 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6636b459-4e9e-42d1-820b-83c83cb1be64-catalog-content\") pod \"6636b459-4e9e-42d1-820b-83c83cb1be64\" (UID: \"6636b459-4e9e-42d1-820b-83c83cb1be64\") " Oct 11 08:15:53 crc kubenswrapper[5016]: I1011 08:15:53.228758 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6636b459-4e9e-42d1-820b-83c83cb1be64-utilities\") pod \"6636b459-4e9e-42d1-820b-83c83cb1be64\" (UID: \"6636b459-4e9e-42d1-820b-83c83cb1be64\") " Oct 11 08:15:53 crc kubenswrapper[5016]: I1011 08:15:53.230515 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6636b459-4e9e-42d1-820b-83c83cb1be64-utilities" (OuterVolumeSpecName: "utilities") pod "6636b459-4e9e-42d1-820b-83c83cb1be64" (UID: "6636b459-4e9e-42d1-820b-83c83cb1be64"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:15:53 crc kubenswrapper[5016]: I1011 08:15:53.238116 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6636b459-4e9e-42d1-820b-83c83cb1be64-kube-api-access-9hd22" (OuterVolumeSpecName: "kube-api-access-9hd22") pod "6636b459-4e9e-42d1-820b-83c83cb1be64" (UID: "6636b459-4e9e-42d1-820b-83c83cb1be64"). InnerVolumeSpecName "kube-api-access-9hd22". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:15:53 crc kubenswrapper[5016]: I1011 08:15:53.331197 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6636b459-4e9e-42d1-820b-83c83cb1be64-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 08:15:53 crc kubenswrapper[5016]: I1011 08:15:53.331236 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hd22\" (UniqueName: \"kubernetes.io/projected/6636b459-4e9e-42d1-820b-83c83cb1be64-kube-api-access-9hd22\") on node \"crc\" DevicePath \"\"" Oct 11 08:15:53 crc kubenswrapper[5016]: I1011 08:15:53.331648 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6636b459-4e9e-42d1-820b-83c83cb1be64-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6636b459-4e9e-42d1-820b-83c83cb1be64" (UID: "6636b459-4e9e-42d1-820b-83c83cb1be64"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:15:53 crc kubenswrapper[5016]: I1011 08:15:53.433588 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6636b459-4e9e-42d1-820b-83c83cb1be64-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 08:15:53 crc kubenswrapper[5016]: I1011 08:15:53.658073 5016 generic.go:334] "Generic (PLEG): container finished" podID="6636b459-4e9e-42d1-820b-83c83cb1be64" containerID="b5ac3a7165d6b20fc2fd7f72f09699987a159cc638fbf5d9204a4122b51ef34c" exitCode=0 Oct 11 08:15:53 crc kubenswrapper[5016]: I1011 08:15:53.658137 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f9psf" Oct 11 08:15:53 crc kubenswrapper[5016]: I1011 08:15:53.658140 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9psf" event={"ID":"6636b459-4e9e-42d1-820b-83c83cb1be64","Type":"ContainerDied","Data":"b5ac3a7165d6b20fc2fd7f72f09699987a159cc638fbf5d9204a4122b51ef34c"} Oct 11 08:15:53 crc kubenswrapper[5016]: I1011 08:15:53.658303 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9psf" event={"ID":"6636b459-4e9e-42d1-820b-83c83cb1be64","Type":"ContainerDied","Data":"3ad7537933540ef0ea1bd69e44bf290661d47252bb332733a8a5eb940ea04087"} Oct 11 08:15:53 crc kubenswrapper[5016]: I1011 08:15:53.658339 5016 scope.go:117] "RemoveContainer" containerID="b5ac3a7165d6b20fc2fd7f72f09699987a159cc638fbf5d9204a4122b51ef34c" Oct 11 08:15:53 crc kubenswrapper[5016]: I1011 08:15:53.707970 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f9psf"] Oct 11 08:15:53 crc kubenswrapper[5016]: I1011 08:15:53.722205 5016 scope.go:117] "RemoveContainer" containerID="facc7fb9c3df7f8677c92170b980bb2a9fb6ac4aa26263683ca46892573e7386" Oct 11 08:15:53 crc kubenswrapper[5016]: I1011 08:15:53.724930 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f9psf"] Oct 11 08:15:53 crc kubenswrapper[5016]: I1011 08:15:53.768400 5016 scope.go:117] "RemoveContainer" containerID="49dc19eed51127d2b661b96166989aff5f700c31243be63a82ef12b45133ec96" Oct 11 08:15:53 crc kubenswrapper[5016]: I1011 08:15:53.803978 5016 scope.go:117] "RemoveContainer" containerID="b5ac3a7165d6b20fc2fd7f72f09699987a159cc638fbf5d9204a4122b51ef34c" Oct 11 08:15:53 crc kubenswrapper[5016]: E1011 08:15:53.804744 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5ac3a7165d6b20fc2fd7f72f09699987a159cc638fbf5d9204a4122b51ef34c\": container with ID starting with b5ac3a7165d6b20fc2fd7f72f09699987a159cc638fbf5d9204a4122b51ef34c not found: ID does not exist" containerID="b5ac3a7165d6b20fc2fd7f72f09699987a159cc638fbf5d9204a4122b51ef34c" Oct 11 08:15:53 crc kubenswrapper[5016]: I1011 08:15:53.804823 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5ac3a7165d6b20fc2fd7f72f09699987a159cc638fbf5d9204a4122b51ef34c"} err="failed to get container status \"b5ac3a7165d6b20fc2fd7f72f09699987a159cc638fbf5d9204a4122b51ef34c\": rpc error: code = NotFound desc = could not find container \"b5ac3a7165d6b20fc2fd7f72f09699987a159cc638fbf5d9204a4122b51ef34c\": container with ID starting with b5ac3a7165d6b20fc2fd7f72f09699987a159cc638fbf5d9204a4122b51ef34c not found: ID does not exist" Oct 11 08:15:53 crc 
kubenswrapper[5016]: I1011 08:15:53.804892 5016 scope.go:117] "RemoveContainer" containerID="facc7fb9c3df7f8677c92170b980bb2a9fb6ac4aa26263683ca46892573e7386" Oct 11 08:15:53 crc kubenswrapper[5016]: E1011 08:15:53.805282 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"facc7fb9c3df7f8677c92170b980bb2a9fb6ac4aa26263683ca46892573e7386\": container with ID starting with facc7fb9c3df7f8677c92170b980bb2a9fb6ac4aa26263683ca46892573e7386 not found: ID does not exist" containerID="facc7fb9c3df7f8677c92170b980bb2a9fb6ac4aa26263683ca46892573e7386" Oct 11 08:15:53 crc kubenswrapper[5016]: I1011 08:15:53.805328 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"facc7fb9c3df7f8677c92170b980bb2a9fb6ac4aa26263683ca46892573e7386"} err="failed to get container status \"facc7fb9c3df7f8677c92170b980bb2a9fb6ac4aa26263683ca46892573e7386\": rpc error: code = NotFound desc = could not find container \"facc7fb9c3df7f8677c92170b980bb2a9fb6ac4aa26263683ca46892573e7386\": container with ID starting with facc7fb9c3df7f8677c92170b980bb2a9fb6ac4aa26263683ca46892573e7386 not found: ID does not exist" Oct 11 08:15:53 crc kubenswrapper[5016]: I1011 08:15:53.805359 5016 scope.go:117] "RemoveContainer" containerID="49dc19eed51127d2b661b96166989aff5f700c31243be63a82ef12b45133ec96" Oct 11 08:15:53 crc kubenswrapper[5016]: E1011 08:15:53.805873 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49dc19eed51127d2b661b96166989aff5f700c31243be63a82ef12b45133ec96\": container with ID starting with 49dc19eed51127d2b661b96166989aff5f700c31243be63a82ef12b45133ec96 not found: ID does not exist" containerID="49dc19eed51127d2b661b96166989aff5f700c31243be63a82ef12b45133ec96" Oct 11 08:15:53 crc kubenswrapper[5016]: I1011 08:15:53.805906 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49dc19eed51127d2b661b96166989aff5f700c31243be63a82ef12b45133ec96"} err="failed to get container status \"49dc19eed51127d2b661b96166989aff5f700c31243be63a82ef12b45133ec96\": rpc error: code = NotFound desc = could not find container \"49dc19eed51127d2b661b96166989aff5f700c31243be63a82ef12b45133ec96\": container with ID starting with 49dc19eed51127d2b661b96166989aff5f700c31243be63a82ef12b45133ec96 not found: ID does not exist" Oct 11 08:15:55 crc kubenswrapper[5016]: I1011 08:15:55.151508 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6636b459-4e9e-42d1-820b-83c83cb1be64" path="/var/lib/kubelet/pods/6636b459-4e9e-42d1-820b-83c83cb1be64/volumes" Oct 11 08:16:07 crc kubenswrapper[5016]: I1011 08:16:07.122051 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 08:16:07 crc kubenswrapper[5016]: I1011 08:16:07.122838 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 08:16:07 crc kubenswrapper[5016]: I1011 08:16:07.122884 5016 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 08:16:07 crc kubenswrapper[5016]: I1011 08:16:07.123544 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65"} pod="openshift-machine-config-operator/machine-config-daemon-49bvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 11 08:16:07 crc kubenswrapper[5016]: I1011 08:16:07.123602 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" containerID="cri-o://8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65" gracePeriod=600 Oct 11 08:16:07 crc kubenswrapper[5016]: E1011 08:16:07.260556 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:16:07 crc kubenswrapper[5016]: I1011 08:16:07.813067 5016 generic.go:334] "Generic (PLEG): container finished" podID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerID="8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65" exitCode=0 Oct 11 08:16:07 crc kubenswrapper[5016]: I1011 08:16:07.813129 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerDied","Data":"8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65"} Oct 11 08:16:07 crc kubenswrapper[5016]: I1011 08:16:07.813171 5016 scope.go:117] "RemoveContainer" containerID="59069795b31b315c02b6409215974f97ef50f882460c0e4d8076522e3cbf39b6" Oct 11 08:16:07 crc kubenswrapper[5016]: I1011 08:16:07.814474 5016 scope.go:117] "RemoveContainer" containerID="8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65" Oct 11 08:16:07 crc kubenswrapper[5016]: E1011 08:16:07.815014 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:16:23 crc kubenswrapper[5016]: I1011 08:16:23.134083 5016 scope.go:117] "RemoveContainer" containerID="8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65" Oct 11 08:16:23 crc kubenswrapper[5016]: E1011 08:16:23.135516 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:16:24 
crc kubenswrapper[5016]: I1011 08:16:24.009812 5016 generic.go:334] "Generic (PLEG): container finished" podID="729b92c8-604e-4a61-b146-f0f4dc9d00d5" containerID="7e8fe5f00d243c8231b2f59cae54e329d1daba4aa7cd2f37c9bf50d900e1f5b2" exitCode=0 Oct 11 08:16:24 crc kubenswrapper[5016]: I1011 08:16:24.009895 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7" event={"ID":"729b92c8-604e-4a61-b146-f0f4dc9d00d5","Type":"ContainerDied","Data":"7e8fe5f00d243c8231b2f59cae54e329d1daba4aa7cd2f37c9bf50d900e1f5b2"} Oct 11 08:16:25 crc kubenswrapper[5016]: I1011 08:16:25.557801 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7" Oct 11 08:16:25 crc kubenswrapper[5016]: I1011 08:16:25.673725 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/729b92c8-604e-4a61-b146-f0f4dc9d00d5-ceph\") pod \"729b92c8-604e-4a61-b146-f0f4dc9d00d5\" (UID: \"729b92c8-604e-4a61-b146-f0f4dc9d00d5\") " Oct 11 08:16:25 crc kubenswrapper[5016]: I1011 08:16:25.673792 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/729b92c8-604e-4a61-b146-f0f4dc9d00d5-ssh-key\") pod \"729b92c8-604e-4a61-b146-f0f4dc9d00d5\" (UID: \"729b92c8-604e-4a61-b146-f0f4dc9d00d5\") " Oct 11 08:16:25 crc kubenswrapper[5016]: I1011 08:16:25.673862 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/729b92c8-604e-4a61-b146-f0f4dc9d00d5-bootstrap-combined-ca-bundle\") pod \"729b92c8-604e-4a61-b146-f0f4dc9d00d5\" (UID: \"729b92c8-604e-4a61-b146-f0f4dc9d00d5\") " Oct 11 08:16:25 crc kubenswrapper[5016]: I1011 08:16:25.673950 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5pk8\" (UniqueName: \"kubernetes.io/projected/729b92c8-604e-4a61-b146-f0f4dc9d00d5-kube-api-access-k5pk8\") pod \"729b92c8-604e-4a61-b146-f0f4dc9d00d5\" (UID: \"729b92c8-604e-4a61-b146-f0f4dc9d00d5\") " Oct 11 08:16:25 crc kubenswrapper[5016]: I1011 08:16:25.674818 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/729b92c8-604e-4a61-b146-f0f4dc9d00d5-inventory\") pod \"729b92c8-604e-4a61-b146-f0f4dc9d00d5\" (UID: \"729b92c8-604e-4a61-b146-f0f4dc9d00d5\") " Oct 11 08:16:25 crc kubenswrapper[5016]: I1011 08:16:25.682723 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/729b92c8-604e-4a61-b146-f0f4dc9d00d5-kube-api-access-k5pk8" (OuterVolumeSpecName: "kube-api-access-k5pk8") pod "729b92c8-604e-4a61-b146-f0f4dc9d00d5" (UID: "729b92c8-604e-4a61-b146-f0f4dc9d00d5"). InnerVolumeSpecName "kube-api-access-k5pk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:16:25 crc kubenswrapper[5016]: I1011 08:16:25.682936 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/729b92c8-604e-4a61-b146-f0f4dc9d00d5-ceph" (OuterVolumeSpecName: "ceph") pod "729b92c8-604e-4a61-b146-f0f4dc9d00d5" (UID: "729b92c8-604e-4a61-b146-f0f4dc9d00d5"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:16:25 crc kubenswrapper[5016]: I1011 08:16:25.682949 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/729b92c8-604e-4a61-b146-f0f4dc9d00d5-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "729b92c8-604e-4a61-b146-f0f4dc9d00d5" (UID: "729b92c8-604e-4a61-b146-f0f4dc9d00d5"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:16:25 crc kubenswrapper[5016]: I1011 08:16:25.727682 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/729b92c8-604e-4a61-b146-f0f4dc9d00d5-inventory" (OuterVolumeSpecName: "inventory") pod "729b92c8-604e-4a61-b146-f0f4dc9d00d5" (UID: "729b92c8-604e-4a61-b146-f0f4dc9d00d5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:16:25 crc kubenswrapper[5016]: I1011 08:16:25.737111 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/729b92c8-604e-4a61-b146-f0f4dc9d00d5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "729b92c8-604e-4a61-b146-f0f4dc9d00d5" (UID: "729b92c8-604e-4a61-b146-f0f4dc9d00d5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:16:25 crc kubenswrapper[5016]: I1011 08:16:25.777697 5016 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/729b92c8-604e-4a61-b146-f0f4dc9d00d5-ceph\") on node \"crc\" DevicePath \"\"" Oct 11 08:16:25 crc kubenswrapper[5016]: I1011 08:16:25.777730 5016 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/729b92c8-604e-4a61-b146-f0f4dc9d00d5-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 11 08:16:25 crc kubenswrapper[5016]: I1011 08:16:25.777743 5016 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/729b92c8-604e-4a61-b146-f0f4dc9d00d5-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 08:16:25 crc kubenswrapper[5016]: I1011 08:16:25.777753 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5pk8\" (UniqueName: \"kubernetes.io/projected/729b92c8-604e-4a61-b146-f0f4dc9d00d5-kube-api-access-k5pk8\") on node \"crc\" DevicePath \"\"" Oct 11 08:16:25 crc kubenswrapper[5016]: I1011 08:16:25.777762 5016 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/729b92c8-604e-4a61-b146-f0f4dc9d00d5-inventory\") on node \"crc\" DevicePath \"\"" Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.034853 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7" event={"ID":"729b92c8-604e-4a61-b146-f0f4dc9d00d5","Type":"ContainerDied","Data":"37d91abf9b7e5e895df4e1b7a2c5ce78f2571353980a33b68885ceb1a2608824"} Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.034919 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7" Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.034947 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37d91abf9b7e5e895df4e1b7a2c5ce78f2571353980a33b68885ceb1a2608824" Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.176641 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lpplc"] Oct 11 08:16:26 crc kubenswrapper[5016]: E1011 08:16:26.177079 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6636b459-4e9e-42d1-820b-83c83cb1be64" containerName="extract-utilities" Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.177101 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="6636b459-4e9e-42d1-820b-83c83cb1be64" containerName="extract-utilities" Oct 11 08:16:26 crc kubenswrapper[5016]: E1011 08:16:26.177114 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6636b459-4e9e-42d1-820b-83c83cb1be64" containerName="registry-server" Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.177122 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="6636b459-4e9e-42d1-820b-83c83cb1be64" containerName="registry-server" Oct 11 08:16:26 crc kubenswrapper[5016]: E1011 08:16:26.177155 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="729b92c8-604e-4a61-b146-f0f4dc9d00d5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.177167 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="729b92c8-604e-4a61-b146-f0f4dc9d00d5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 11 08:16:26 crc kubenswrapper[5016]: E1011 08:16:26.177179 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6636b459-4e9e-42d1-820b-83c83cb1be64" containerName="extract-content" Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.177187 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="6636b459-4e9e-42d1-820b-83c83cb1be64" containerName="extract-content" Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.177409 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="729b92c8-604e-4a61-b146-f0f4dc9d00d5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.177439 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="6636b459-4e9e-42d1-820b-83c83cb1be64" containerName="registry-server" Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.178146 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lpplc" Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.180784 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.181234 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.181988 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.182280 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l8l9k" Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.182800 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.191554 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lpplc"] Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.286492 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3618a2af-0e01-4e8e-858b-1096d1e36f7c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lpplc\" (UID: \"3618a2af-0e01-4e8e-858b-1096d1e36f7c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lpplc" Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.286698 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4jcj\" (UniqueName: \"kubernetes.io/projected/3618a2af-0e01-4e8e-858b-1096d1e36f7c-kube-api-access-b4jcj\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lpplc\" (UID: \"3618a2af-0e01-4e8e-858b-1096d1e36f7c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lpplc" Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.286733 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3618a2af-0e01-4e8e-858b-1096d1e36f7c-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lpplc\" (UID: \"3618a2af-0e01-4e8e-858b-1096d1e36f7c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lpplc" Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.286994 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3618a2af-0e01-4e8e-858b-1096d1e36f7c-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lpplc\" (UID: \"3618a2af-0e01-4e8e-858b-1096d1e36f7c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lpplc" Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.388793 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3618a2af-0e01-4e8e-858b-1096d1e36f7c-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lpplc\" (UID: \"3618a2af-0e01-4e8e-858b-1096d1e36f7c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lpplc" Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.389288 5016 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3618a2af-0e01-4e8e-858b-1096d1e36f7c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lpplc\" (UID: \"3618a2af-0e01-4e8e-858b-1096d1e36f7c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lpplc" Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.389392 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3618a2af-0e01-4e8e-858b-1096d1e36f7c-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lpplc\" (UID: \"3618a2af-0e01-4e8e-858b-1096d1e36f7c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lpplc" Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.389993 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4jcj\" (UniqueName: \"kubernetes.io/projected/3618a2af-0e01-4e8e-858b-1096d1e36f7c-kube-api-access-b4jcj\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lpplc\" (UID: \"3618a2af-0e01-4e8e-858b-1096d1e36f7c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lpplc" Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.394339 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3618a2af-0e01-4e8e-858b-1096d1e36f7c-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lpplc\" (UID: \"3618a2af-0e01-4e8e-858b-1096d1e36f7c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lpplc" Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.395048 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3618a2af-0e01-4e8e-858b-1096d1e36f7c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lpplc\" (UID: \"3618a2af-0e01-4e8e-858b-1096d1e36f7c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lpplc" Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.395044 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3618a2af-0e01-4e8e-858b-1096d1e36f7c-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lpplc\" (UID: \"3618a2af-0e01-4e8e-858b-1096d1e36f7c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lpplc" Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.414223 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4jcj\" (UniqueName: \"kubernetes.io/projected/3618a2af-0e01-4e8e-858b-1096d1e36f7c-kube-api-access-b4jcj\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lpplc\" (UID: \"3618a2af-0e01-4e8e-858b-1096d1e36f7c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lpplc" Oct 11 08:16:26 crc kubenswrapper[5016]: I1011 08:16:26.501079 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lpplc" Oct 11 08:16:27 crc kubenswrapper[5016]: I1011 08:16:27.124329 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lpplc"] Oct 11 08:16:28 crc kubenswrapper[5016]: I1011 08:16:28.064510 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lpplc" event={"ID":"3618a2af-0e01-4e8e-858b-1096d1e36f7c","Type":"ContainerStarted","Data":"2ffd115c862304db1347f2c328d4f2c57e7d3ef718eda76c054de8df844d1dd1"} Oct 11 08:16:28 crc kubenswrapper[5016]: I1011 08:16:28.065302 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lpplc" event={"ID":"3618a2af-0e01-4e8e-858b-1096d1e36f7c","Type":"ContainerStarted","Data":"03883c464ab2ddda1f9b84967f220d13e07c45c73ef06253708bb0401d0b0049"} Oct 11 08:16:28 crc kubenswrapper[5016]: I1011 08:16:28.089877 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lpplc" podStartSLOduration=1.450629933 podStartE2EDuration="2.089854701s" podCreationTimestamp="2025-10-11 08:16:26 +0000 UTC" firstStartedPulling="2025-10-11 08:16:27.139234642 +0000 UTC m=+2175.039690608" lastFinishedPulling="2025-10-11 08:16:27.77845943 +0000 UTC m=+2175.678915376" observedRunningTime="2025-10-11 08:16:28.088602308 +0000 UTC m=+2175.989058264" watchObservedRunningTime="2025-10-11 08:16:28.089854701 +0000 UTC m=+2175.990310657" Oct 11 08:16:37 crc kubenswrapper[5016]: I1011 08:16:37.134133 5016 scope.go:117] "RemoveContainer" containerID="8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65" Oct 11 08:16:37 crc kubenswrapper[5016]: E1011 08:16:37.136341 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:16:51 crc kubenswrapper[5016]: I1011 08:16:51.140958 5016 scope.go:117] "RemoveContainer" containerID="8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65" Oct 11 08:16:51 crc kubenswrapper[5016]: E1011 08:16:51.142780 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:16:55 crc kubenswrapper[5016]: I1011 08:16:55.340264 5016 generic.go:334] "Generic (PLEG): container finished" podID="3618a2af-0e01-4e8e-858b-1096d1e36f7c" containerID="2ffd115c862304db1347f2c328d4f2c57e7d3ef718eda76c054de8df844d1dd1" exitCode=0 Oct 11 08:16:55 crc kubenswrapper[5016]: I1011 08:16:55.340359 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lpplc" 
event={"ID":"3618a2af-0e01-4e8e-858b-1096d1e36f7c","Type":"ContainerDied","Data":"2ffd115c862304db1347f2c328d4f2c57e7d3ef718eda76c054de8df844d1dd1"} Oct 11 08:16:56 crc kubenswrapper[5016]: I1011 08:16:56.872287 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lpplc" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.002027 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3618a2af-0e01-4e8e-858b-1096d1e36f7c-ssh-key\") pod \"3618a2af-0e01-4e8e-858b-1096d1e36f7c\" (UID: \"3618a2af-0e01-4e8e-858b-1096d1e36f7c\") " Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.002122 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4jcj\" (UniqueName: \"kubernetes.io/projected/3618a2af-0e01-4e8e-858b-1096d1e36f7c-kube-api-access-b4jcj\") pod \"3618a2af-0e01-4e8e-858b-1096d1e36f7c\" (UID: \"3618a2af-0e01-4e8e-858b-1096d1e36f7c\") " Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.002160 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3618a2af-0e01-4e8e-858b-1096d1e36f7c-ceph\") pod \"3618a2af-0e01-4e8e-858b-1096d1e36f7c\" (UID: \"3618a2af-0e01-4e8e-858b-1096d1e36f7c\") " Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.002192 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3618a2af-0e01-4e8e-858b-1096d1e36f7c-inventory\") pod \"3618a2af-0e01-4e8e-858b-1096d1e36f7c\" (UID: \"3618a2af-0e01-4e8e-858b-1096d1e36f7c\") " Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.009120 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3618a2af-0e01-4e8e-858b-1096d1e36f7c-kube-api-access-b4jcj" (OuterVolumeSpecName: "kube-api-access-b4jcj") pod "3618a2af-0e01-4e8e-858b-1096d1e36f7c" (UID: "3618a2af-0e01-4e8e-858b-1096d1e36f7c"). InnerVolumeSpecName "kube-api-access-b4jcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.009365 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3618a2af-0e01-4e8e-858b-1096d1e36f7c-ceph" (OuterVolumeSpecName: "ceph") pod "3618a2af-0e01-4e8e-858b-1096d1e36f7c" (UID: "3618a2af-0e01-4e8e-858b-1096d1e36f7c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.035001 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3618a2af-0e01-4e8e-858b-1096d1e36f7c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3618a2af-0e01-4e8e-858b-1096d1e36f7c" (UID: "3618a2af-0e01-4e8e-858b-1096d1e36f7c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.066731 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3618a2af-0e01-4e8e-858b-1096d1e36f7c-inventory" (OuterVolumeSpecName: "inventory") pod "3618a2af-0e01-4e8e-858b-1096d1e36f7c" (UID: "3618a2af-0e01-4e8e-858b-1096d1e36f7c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.113465 5016 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3618a2af-0e01-4e8e-858b-1096d1e36f7c-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.113510 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4jcj\" (UniqueName: \"kubernetes.io/projected/3618a2af-0e01-4e8e-858b-1096d1e36f7c-kube-api-access-b4jcj\") on node \"crc\" DevicePath \"\"" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.113526 5016 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3618a2af-0e01-4e8e-858b-1096d1e36f7c-ceph\") on node \"crc\" DevicePath \"\"" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.113539 5016 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3618a2af-0e01-4e8e-858b-1096d1e36f7c-inventory\") on node \"crc\" DevicePath \"\"" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.364132 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lpplc" event={"ID":"3618a2af-0e01-4e8e-858b-1096d1e36f7c","Type":"ContainerDied","Data":"03883c464ab2ddda1f9b84967f220d13e07c45c73ef06253708bb0401d0b0049"} Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.364183 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03883c464ab2ddda1f9b84967f220d13e07c45c73ef06253708bb0401d0b0049" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.364261 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lpplc" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.471527 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vktkx"] Oct 11 08:16:57 crc kubenswrapper[5016]: E1011 08:16:57.471974 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3618a2af-0e01-4e8e-858b-1096d1e36f7c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.472000 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="3618a2af-0e01-4e8e-858b-1096d1e36f7c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.472248 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="3618a2af-0e01-4e8e-858b-1096d1e36f7c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.473055 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vktkx" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.475316 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.475512 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.479300 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l8l9k" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.480939 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.481376 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.499801 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vktkx"] Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.521988 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87dcedab-e03e-4507-adc3-90a88862ca5e-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vktkx\" (UID: \"87dcedab-e03e-4507-adc3-90a88862ca5e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vktkx" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.522105 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/87dcedab-e03e-4507-adc3-90a88862ca5e-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vktkx\" (UID: \"87dcedab-e03e-4507-adc3-90a88862ca5e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vktkx" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.522140 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh42w\" (UniqueName: \"kubernetes.io/projected/87dcedab-e03e-4507-adc3-90a88862ca5e-kube-api-access-lh42w\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vktkx\" (UID: \"87dcedab-e03e-4507-adc3-90a88862ca5e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vktkx" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.522296 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87dcedab-e03e-4507-adc3-90a88862ca5e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vktkx\" (UID: \"87dcedab-e03e-4507-adc3-90a88862ca5e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vktkx" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.623401 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/87dcedab-e03e-4507-adc3-90a88862ca5e-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vktkx\" (UID: \"87dcedab-e03e-4507-adc3-90a88862ca5e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vktkx" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.623462 5016 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lh42w\" (UniqueName: \"kubernetes.io/projected/87dcedab-e03e-4507-adc3-90a88862ca5e-kube-api-access-lh42w\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vktkx\" (UID: \"87dcedab-e03e-4507-adc3-90a88862ca5e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vktkx" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.623545 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87dcedab-e03e-4507-adc3-90a88862ca5e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vktkx\" (UID: \"87dcedab-e03e-4507-adc3-90a88862ca5e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vktkx" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.623590 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87dcedab-e03e-4507-adc3-90a88862ca5e-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vktkx\" (UID: \"87dcedab-e03e-4507-adc3-90a88862ca5e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vktkx" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.628876 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87dcedab-e03e-4507-adc3-90a88862ca5e-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vktkx\" (UID: \"87dcedab-e03e-4507-adc3-90a88862ca5e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vktkx" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.629055 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/87dcedab-e03e-4507-adc3-90a88862ca5e-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vktkx\" (UID: \"87dcedab-e03e-4507-adc3-90a88862ca5e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vktkx" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.629944 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87dcedab-e03e-4507-adc3-90a88862ca5e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vktkx\" (UID: \"87dcedab-e03e-4507-adc3-90a88862ca5e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vktkx" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.648369 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh42w\" (UniqueName: \"kubernetes.io/projected/87dcedab-e03e-4507-adc3-90a88862ca5e-kube-api-access-lh42w\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vktkx\" (UID: \"87dcedab-e03e-4507-adc3-90a88862ca5e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vktkx" Oct 11 08:16:57 crc kubenswrapper[5016]: I1011 08:16:57.789539 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vktkx" Oct 11 08:16:58 crc kubenswrapper[5016]: I1011 08:16:58.420508 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vktkx"] Oct 11 08:16:59 crc kubenswrapper[5016]: I1011 08:16:59.383949 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vktkx" event={"ID":"87dcedab-e03e-4507-adc3-90a88862ca5e","Type":"ContainerStarted","Data":"845ae6363c53ca61d790443f759e226c194276072f72b96a2051bd3e17f2f800"} Oct 11 08:16:59 crc kubenswrapper[5016]: I1011 08:16:59.384322 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vktkx" event={"ID":"87dcedab-e03e-4507-adc3-90a88862ca5e","Type":"ContainerStarted","Data":"1874f2a37018e97aa8b1817638425c4cc302674c5819029674eda285de6e691a"} Oct 11 08:16:59 crc kubenswrapper[5016]: I1011 08:16:59.417722 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vktkx" podStartSLOduration=1.9476650229999999 podStartE2EDuration="2.417689942s" podCreationTimestamp="2025-10-11 08:16:57 +0000 UTC" firstStartedPulling="2025-10-11 08:16:58.434139626 +0000 UTC m=+2206.334595572" lastFinishedPulling="2025-10-11 08:16:58.904164545 +0000 UTC m=+2206.804620491" observedRunningTime="2025-10-11 08:16:59.406962279 +0000 UTC m=+2207.307418275" watchObservedRunningTime="2025-10-11 08:16:59.417689942 +0000 UTC m=+2207.318145918" Oct 11 08:17:05 crc kubenswrapper[5016]: I1011 08:17:05.467429 5016 generic.go:334] "Generic (PLEG): container finished" podID="87dcedab-e03e-4507-adc3-90a88862ca5e" containerID="845ae6363c53ca61d790443f759e226c194276072f72b96a2051bd3e17f2f800" exitCode=0 Oct 11 08:17:05 crc kubenswrapper[5016]: I1011 08:17:05.467565 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vktkx" event={"ID":"87dcedab-e03e-4507-adc3-90a88862ca5e","Type":"ContainerDied","Data":"845ae6363c53ca61d790443f759e226c194276072f72b96a2051bd3e17f2f800"} Oct 11 08:17:06 crc kubenswrapper[5016]: I1011 08:17:06.134960 5016 scope.go:117] "RemoveContainer" containerID="8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65" Oct 11 08:17:06 crc kubenswrapper[5016]: E1011 08:17:06.136225 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:17:06 crc kubenswrapper[5016]: I1011 08:17:06.946401 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vktkx" Oct 11 08:17:06 crc kubenswrapper[5016]: I1011 08:17:06.951933 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87dcedab-e03e-4507-adc3-90a88862ca5e-ssh-key\") pod \"87dcedab-e03e-4507-adc3-90a88862ca5e\" (UID: \"87dcedab-e03e-4507-adc3-90a88862ca5e\") " Oct 11 08:17:06 crc kubenswrapper[5016]: I1011 08:17:06.952003 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87dcedab-e03e-4507-adc3-90a88862ca5e-inventory\") pod \"87dcedab-e03e-4507-adc3-90a88862ca5e\" (UID: \"87dcedab-e03e-4507-adc3-90a88862ca5e\") " Oct 11 08:17:06 crc kubenswrapper[5016]: I1011 08:17:06.952175 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh42w\" (UniqueName: \"kubernetes.io/projected/87dcedab-e03e-4507-adc3-90a88862ca5e-kube-api-access-lh42w\") pod \"87dcedab-e03e-4507-adc3-90a88862ca5e\" (UID: \"87dcedab-e03e-4507-adc3-90a88862ca5e\") " Oct 11 08:17:06 crc kubenswrapper[5016]: I1011 08:17:06.952262 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/87dcedab-e03e-4507-adc3-90a88862ca5e-ceph\") pod \"87dcedab-e03e-4507-adc3-90a88862ca5e\" (UID: \"87dcedab-e03e-4507-adc3-90a88862ca5e\") " Oct 11 08:17:06 crc kubenswrapper[5016]: I1011 08:17:06.962867 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87dcedab-e03e-4507-adc3-90a88862ca5e-kube-api-access-lh42w" (OuterVolumeSpecName: "kube-api-access-lh42w") pod "87dcedab-e03e-4507-adc3-90a88862ca5e" (UID: "87dcedab-e03e-4507-adc3-90a88862ca5e"). InnerVolumeSpecName "kube-api-access-lh42w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:17:06 crc kubenswrapper[5016]: I1011 08:17:06.963132 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87dcedab-e03e-4507-adc3-90a88862ca5e-ceph" (OuterVolumeSpecName: "ceph") pod "87dcedab-e03e-4507-adc3-90a88862ca5e" (UID: "87dcedab-e03e-4507-adc3-90a88862ca5e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.010531 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87dcedab-e03e-4507-adc3-90a88862ca5e-inventory" (OuterVolumeSpecName: "inventory") pod "87dcedab-e03e-4507-adc3-90a88862ca5e" (UID: "87dcedab-e03e-4507-adc3-90a88862ca5e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.013783 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87dcedab-e03e-4507-adc3-90a88862ca5e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "87dcedab-e03e-4507-adc3-90a88862ca5e" (UID: "87dcedab-e03e-4507-adc3-90a88862ca5e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.055012 5016 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87dcedab-e03e-4507-adc3-90a88862ca5e-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.055044 5016 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87dcedab-e03e-4507-adc3-90a88862ca5e-inventory\") on node \"crc\" DevicePath \"\"" Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.055056 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh42w\" (UniqueName: \"kubernetes.io/projected/87dcedab-e03e-4507-adc3-90a88862ca5e-kube-api-access-lh42w\") on node \"crc\" DevicePath \"\"" Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.055069 5016 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/87dcedab-e03e-4507-adc3-90a88862ca5e-ceph\") on node \"crc\" DevicePath \"\"" Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.495997 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vktkx" event={"ID":"87dcedab-e03e-4507-adc3-90a88862ca5e","Type":"ContainerDied","Data":"1874f2a37018e97aa8b1817638425c4cc302674c5819029674eda285de6e691a"} Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.496054 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1874f2a37018e97aa8b1817638425c4cc302674c5819029674eda285de6e691a" Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.496109 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vktkx" Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.760916 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cc7wn"] Oct 11 08:17:07 crc kubenswrapper[5016]: E1011 08:17:07.761768 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87dcedab-e03e-4507-adc3-90a88862ca5e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.761804 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="87dcedab-e03e-4507-adc3-90a88862ca5e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.762105 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="87dcedab-e03e-4507-adc3-90a88862ca5e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.765076 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cc7wn" Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.769534 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.769619 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.769951 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.770009 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l8l9k" Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.775260 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.782128 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cc7wn"] Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.873748 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f552b6e7-65bd-47c2-8e62-068c1f04cb3e-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cc7wn\" (UID: \"f552b6e7-65bd-47c2-8e62-068c1f04cb3e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cc7wn" Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.873801 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f552b6e7-65bd-47c2-8e62-068c1f04cb3e-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cc7wn\" (UID: \"f552b6e7-65bd-47c2-8e62-068c1f04cb3e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cc7wn" Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.873840 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bpp2\" (UniqueName: \"kubernetes.io/projected/f552b6e7-65bd-47c2-8e62-068c1f04cb3e-kube-api-access-4bpp2\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cc7wn\" (UID: \"f552b6e7-65bd-47c2-8e62-068c1f04cb3e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cc7wn" Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.873939 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f552b6e7-65bd-47c2-8e62-068c1f04cb3e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cc7wn\" (UID: \"f552b6e7-65bd-47c2-8e62-068c1f04cb3e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cc7wn" Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.976248 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f552b6e7-65bd-47c2-8e62-068c1f04cb3e-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cc7wn\" (UID: \"f552b6e7-65bd-47c2-8e62-068c1f04cb3e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cc7wn" Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.976330 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/f552b6e7-65bd-47c2-8e62-068c1f04cb3e-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cc7wn\" (UID: \"f552b6e7-65bd-47c2-8e62-068c1f04cb3e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cc7wn" Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.976378 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bpp2\" (UniqueName: \"kubernetes.io/projected/f552b6e7-65bd-47c2-8e62-068c1f04cb3e-kube-api-access-4bpp2\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cc7wn\" (UID: \"f552b6e7-65bd-47c2-8e62-068c1f04cb3e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cc7wn" Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.976501 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f552b6e7-65bd-47c2-8e62-068c1f04cb3e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cc7wn\" (UID: \"f552b6e7-65bd-47c2-8e62-068c1f04cb3e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cc7wn" Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.984938 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f552b6e7-65bd-47c2-8e62-068c1f04cb3e-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cc7wn\" (UID: \"f552b6e7-65bd-47c2-8e62-068c1f04cb3e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cc7wn" Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.985435 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f552b6e7-65bd-47c2-8e62-068c1f04cb3e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cc7wn\" (UID: \"f552b6e7-65bd-47c2-8e62-068c1f04cb3e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cc7wn" Oct 11 08:17:07 crc kubenswrapper[5016]: I1011 08:17:07.991266 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f552b6e7-65bd-47c2-8e62-068c1f04cb3e-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cc7wn\" (UID: \"f552b6e7-65bd-47c2-8e62-068c1f04cb3e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cc7wn" Oct 11 08:17:08 crc kubenswrapper[5016]: I1011 08:17:08.005418 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bpp2\" (UniqueName: \"kubernetes.io/projected/f552b6e7-65bd-47c2-8e62-068c1f04cb3e-kube-api-access-4bpp2\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cc7wn\" (UID: \"f552b6e7-65bd-47c2-8e62-068c1f04cb3e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cc7wn" Oct 11 08:17:08 crc kubenswrapper[5016]: I1011 08:17:08.092444 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cc7wn" Oct 11 08:17:08 crc kubenswrapper[5016]: I1011 08:17:08.719996 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cc7wn"] Oct 11 08:17:09 crc kubenswrapper[5016]: I1011 08:17:09.520592 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cc7wn" event={"ID":"f552b6e7-65bd-47c2-8e62-068c1f04cb3e","Type":"ContainerStarted","Data":"aafff316ecf8261f055fce95816430fab6d6be247d86d6892dddb64d2d62704f"} Oct 11 08:17:09 crc kubenswrapper[5016]: I1011 08:17:09.521101 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cc7wn" event={"ID":"f552b6e7-65bd-47c2-8e62-068c1f04cb3e","Type":"ContainerStarted","Data":"3b901444bb5adb08a3494dcf0eb13d17b29aa8641767447ba08303a0856cdb0b"} Oct 11 08:17:09 crc kubenswrapper[5016]: I1011 08:17:09.550470 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cc7wn" podStartSLOduration=2.023532496 podStartE2EDuration="2.550441344s" podCreationTimestamp="2025-10-11 08:17:07 +0000 UTC" firstStartedPulling="2025-10-11 08:17:08.732476619 +0000 UTC m=+2216.632932565" lastFinishedPulling="2025-10-11 08:17:09.259385427 +0000 UTC m=+2217.159841413" observedRunningTime="2025-10-11 08:17:09.5393039 +0000 UTC m=+2217.439759856" watchObservedRunningTime="2025-10-11 08:17:09.550441344 +0000 UTC m=+2217.450897310" Oct 11 08:17:19 crc kubenswrapper[5016]: I1011 08:17:19.133605 5016 scope.go:117] "RemoveContainer" containerID="8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65" Oct 11 08:17:19 crc kubenswrapper[5016]: E1011 08:17:19.134483 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:17:32 crc kubenswrapper[5016]: I1011 08:17:32.134105 5016 scope.go:117] "RemoveContainer" containerID="8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65" Oct 11 08:17:32 crc kubenswrapper[5016]: E1011 08:17:32.135194 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:17:45 crc kubenswrapper[5016]: I1011 08:17:45.134240 5016 scope.go:117] "RemoveContainer" containerID="8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65" Oct 11 08:17:45 crc kubenswrapper[5016]: E1011 08:17:45.135454 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:17:50 crc kubenswrapper[5016]: I1011 08:17:50.990479 5016 generic.go:334] "Generic (PLEG): container finished" podID="f552b6e7-65bd-47c2-8e62-068c1f04cb3e" containerID="aafff316ecf8261f055fce95816430fab6d6be247d86d6892dddb64d2d62704f" exitCode=0 Oct 11 08:17:50 crc kubenswrapper[5016]: I1011 08:17:50.990591 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cc7wn" event={"ID":"f552b6e7-65bd-47c2-8e62-068c1f04cb3e","Type":"ContainerDied","Data":"aafff316ecf8261f055fce95816430fab6d6be247d86d6892dddb64d2d62704f"} Oct 11 08:17:52 crc kubenswrapper[5016]: I1011 08:17:52.520507 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cc7wn" Oct 11 08:17:52 crc kubenswrapper[5016]: I1011 08:17:52.650416 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f552b6e7-65bd-47c2-8e62-068c1f04cb3e-ssh-key\") pod \"f552b6e7-65bd-47c2-8e62-068c1f04cb3e\" (UID: \"f552b6e7-65bd-47c2-8e62-068c1f04cb3e\") " Oct 11 08:17:52 crc kubenswrapper[5016]: I1011 08:17:52.650636 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bpp2\" (UniqueName: \"kubernetes.io/projected/f552b6e7-65bd-47c2-8e62-068c1f04cb3e-kube-api-access-4bpp2\") pod \"f552b6e7-65bd-47c2-8e62-068c1f04cb3e\" (UID: \"f552b6e7-65bd-47c2-8e62-068c1f04cb3e\") " Oct 11 08:17:52 crc kubenswrapper[5016]: I1011 08:17:52.650699 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f552b6e7-65bd-47c2-8e62-068c1f04cb3e-inventory\") pod \"f552b6e7-65bd-47c2-8e62-068c1f04cb3e\" (UID: \"f552b6e7-65bd-47c2-8e62-068c1f04cb3e\") " Oct 11 08:17:52 crc kubenswrapper[5016]: I1011 08:17:52.650953 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f552b6e7-65bd-47c2-8e62-068c1f04cb3e-ceph\") pod \"f552b6e7-65bd-47c2-8e62-068c1f04cb3e\" (UID: \"f552b6e7-65bd-47c2-8e62-068c1f04cb3e\") " Oct 11 08:17:52 crc kubenswrapper[5016]: I1011 08:17:52.660901 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f552b6e7-65bd-47c2-8e62-068c1f04cb3e-kube-api-access-4bpp2" (OuterVolumeSpecName: "kube-api-access-4bpp2") pod "f552b6e7-65bd-47c2-8e62-068c1f04cb3e" (UID: "f552b6e7-65bd-47c2-8e62-068c1f04cb3e"). InnerVolumeSpecName "kube-api-access-4bpp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:17:52 crc kubenswrapper[5016]: I1011 08:17:52.660904 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f552b6e7-65bd-47c2-8e62-068c1f04cb3e-ceph" (OuterVolumeSpecName: "ceph") pod "f552b6e7-65bd-47c2-8e62-068c1f04cb3e" (UID: "f552b6e7-65bd-47c2-8e62-068c1f04cb3e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:17:52 crc kubenswrapper[5016]: I1011 08:17:52.686004 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f552b6e7-65bd-47c2-8e62-068c1f04cb3e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f552b6e7-65bd-47c2-8e62-068c1f04cb3e" (UID: "f552b6e7-65bd-47c2-8e62-068c1f04cb3e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:17:52 crc kubenswrapper[5016]: I1011 08:17:52.701362 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f552b6e7-65bd-47c2-8e62-068c1f04cb3e-inventory" (OuterVolumeSpecName: "inventory") pod "f552b6e7-65bd-47c2-8e62-068c1f04cb3e" (UID: "f552b6e7-65bd-47c2-8e62-068c1f04cb3e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:17:52 crc kubenswrapper[5016]: I1011 08:17:52.755148 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bpp2\" (UniqueName: \"kubernetes.io/projected/f552b6e7-65bd-47c2-8e62-068c1f04cb3e-kube-api-access-4bpp2\") on node \"crc\" DevicePath \"\"" Oct 11 08:17:52 crc kubenswrapper[5016]: I1011 08:17:52.755215 5016 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f552b6e7-65bd-47c2-8e62-068c1f04cb3e-inventory\") on node \"crc\" DevicePath \"\"" Oct 11 08:17:52 crc kubenswrapper[5016]: I1011 08:17:52.755235 5016 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f552b6e7-65bd-47c2-8e62-068c1f04cb3e-ceph\") on node \"crc\" DevicePath \"\"" Oct 11 08:17:52 crc kubenswrapper[5016]: I1011 08:17:52.755254 5016 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f552b6e7-65bd-47c2-8e62-068c1f04cb3e-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 11 08:17:53 crc kubenswrapper[5016]: I1011 08:17:53.027147 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cc7wn" event={"ID":"f552b6e7-65bd-47c2-8e62-068c1f04cb3e","Type":"ContainerDied","Data":"3b901444bb5adb08a3494dcf0eb13d17b29aa8641767447ba08303a0856cdb0b"} Oct 11 08:17:53 crc kubenswrapper[5016]: I1011 08:17:53.027226 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b901444bb5adb08a3494dcf0eb13d17b29aa8641767447ba08303a0856cdb0b" Oct 11 08:17:53 crc kubenswrapper[5016]: I1011 08:17:53.027317 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cc7wn" Oct 11 08:17:53 crc kubenswrapper[5016]: I1011 08:17:53.162728 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2"] Oct 11 08:17:53 crc kubenswrapper[5016]: E1011 08:17:53.163225 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f552b6e7-65bd-47c2-8e62-068c1f04cb3e" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 11 08:17:53 crc kubenswrapper[5016]: I1011 08:17:53.163262 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="f552b6e7-65bd-47c2-8e62-068c1f04cb3e" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 11 08:17:53 crc kubenswrapper[5016]: I1011 08:17:53.163561 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="f552b6e7-65bd-47c2-8e62-068c1f04cb3e" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 11 08:17:53 crc kubenswrapper[5016]: I1011 08:17:53.164500 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2"] Oct 11 08:17:53 crc kubenswrapper[5016]: I1011 08:17:53.164642 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2" Oct 11 08:17:53 crc kubenswrapper[5016]: I1011 08:17:53.176730 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 11 08:17:53 crc kubenswrapper[5016]: I1011 08:17:53.177025 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 08:17:53 crc kubenswrapper[5016]: I1011 08:17:53.177312 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l8l9k" Oct 11 08:17:53 crc kubenswrapper[5016]: I1011 08:17:53.177475 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 08:17:53 crc kubenswrapper[5016]: I1011 08:17:53.177765 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 11 08:17:53 crc kubenswrapper[5016]: I1011 08:17:53.279547 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c7f1dbc5-8326-4481-ac16-2f6737dd82b2-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2\" (UID: \"c7f1dbc5-8326-4481-ac16-2f6737dd82b2\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2" Oct 11 08:17:53 crc kubenswrapper[5016]: I1011 08:17:53.279630 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c7f1dbc5-8326-4481-ac16-2f6737dd82b2-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2\" (UID: \"c7f1dbc5-8326-4481-ac16-2f6737dd82b2\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2" Oct 11 08:17:53 crc kubenswrapper[5016]: I1011 08:17:53.279692 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7f1dbc5-8326-4481-ac16-2f6737dd82b2-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2\" (UID: \"c7f1dbc5-8326-4481-ac16-2f6737dd82b2\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2" Oct 11 08:17:53 crc kubenswrapper[5016]: I1011 08:17:53.279787 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86pgf\" (UniqueName: \"kubernetes.io/projected/c7f1dbc5-8326-4481-ac16-2f6737dd82b2-kube-api-access-86pgf\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2\" (UID: \"c7f1dbc5-8326-4481-ac16-2f6737dd82b2\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2" Oct 11 08:17:53 crc kubenswrapper[5016]: I1011 08:17:53.382013 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c7f1dbc5-8326-4481-ac16-2f6737dd82b2-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2\" (UID: \"c7f1dbc5-8326-4481-ac16-2f6737dd82b2\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2" Oct 11 08:17:53 crc kubenswrapper[5016]: I1011 08:17:53.382071 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c7f1dbc5-8326-4481-ac16-2f6737dd82b2-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2\" (UID: \"c7f1dbc5-8326-4481-ac16-2f6737dd82b2\") " 
pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2" Oct 11 08:17:53 crc kubenswrapper[5016]: I1011 08:17:53.382094 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7f1dbc5-8326-4481-ac16-2f6737dd82b2-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2\" (UID: \"c7f1dbc5-8326-4481-ac16-2f6737dd82b2\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2" Oct 11 08:17:53 crc kubenswrapper[5016]: I1011 08:17:53.382128 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86pgf\" (UniqueName: \"kubernetes.io/projected/c7f1dbc5-8326-4481-ac16-2f6737dd82b2-kube-api-access-86pgf\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2\" (UID: \"c7f1dbc5-8326-4481-ac16-2f6737dd82b2\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2" Oct 11 08:17:53 crc kubenswrapper[5016]: I1011 08:17:53.390750 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7f1dbc5-8326-4481-ac16-2f6737dd82b2-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2\" (UID: \"c7f1dbc5-8326-4481-ac16-2f6737dd82b2\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2" Oct 11 08:17:53 crc kubenswrapper[5016]: I1011 08:17:53.392390 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c7f1dbc5-8326-4481-ac16-2f6737dd82b2-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2\" (UID: \"c7f1dbc5-8326-4481-ac16-2f6737dd82b2\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2" Oct 11 08:17:53 crc kubenswrapper[5016]: I1011 08:17:53.394643 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c7f1dbc5-8326-4481-ac16-2f6737dd82b2-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2\" (UID: \"c7f1dbc5-8326-4481-ac16-2f6737dd82b2\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2" Oct 11 08:17:53 crc kubenswrapper[5016]: I1011 08:17:53.411309 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86pgf\" (UniqueName: \"kubernetes.io/projected/c7f1dbc5-8326-4481-ac16-2f6737dd82b2-kube-api-access-86pgf\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2\" (UID: \"c7f1dbc5-8326-4481-ac16-2f6737dd82b2\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2" Oct 11 08:17:53 crc kubenswrapper[5016]: I1011 08:17:53.507142 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2" Oct 11 08:17:54 crc kubenswrapper[5016]: I1011 08:17:54.131030 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2"] Oct 11 08:17:55 crc kubenswrapper[5016]: I1011 08:17:55.057616 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2" event={"ID":"c7f1dbc5-8326-4481-ac16-2f6737dd82b2","Type":"ContainerStarted","Data":"3b53fdb81ae192d1e2d60d728f9a004c1bcc4a7261d450195da1ced18a60cc91"} Oct 11 08:17:55 crc kubenswrapper[5016]: I1011 08:17:55.058066 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2" event={"ID":"c7f1dbc5-8326-4481-ac16-2f6737dd82b2","Type":"ContainerStarted","Data":"3d372ab3a693ac4cce6b932102c72bbded4ae71f10331520e83ffd01cc72f25a"} Oct 11 08:17:55 crc kubenswrapper[5016]: I1011 08:17:55.091781 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2" podStartSLOduration=1.602260647 podStartE2EDuration="2.09176062s" podCreationTimestamp="2025-10-11 08:17:53 +0000 UTC" firstStartedPulling="2025-10-11 08:17:54.153874247 +0000 UTC m=+2262.054330203" lastFinishedPulling="2025-10-11 08:17:54.64337421 +0000 UTC m=+2262.543830176" observedRunningTime="2025-10-11 08:17:55.085565428 +0000 UTC m=+2262.986021454" watchObservedRunningTime="2025-10-11 08:17:55.09176062 +0000 UTC m=+2262.992216576" Oct 11 08:17:58 crc kubenswrapper[5016]: I1011 08:17:58.136888 5016 scope.go:117] "RemoveContainer" containerID="8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65" Oct 11 08:17:58 crc kubenswrapper[5016]: E1011 08:17:58.138190 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:18:00 crc kubenswrapper[5016]: I1011 08:18:00.119057 5016 generic.go:334] "Generic (PLEG): container finished" podID="c7f1dbc5-8326-4481-ac16-2f6737dd82b2" containerID="3b53fdb81ae192d1e2d60d728f9a004c1bcc4a7261d450195da1ced18a60cc91" exitCode=0 Oct 11 08:18:00 crc kubenswrapper[5016]: I1011 08:18:00.119208 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2" event={"ID":"c7f1dbc5-8326-4481-ac16-2f6737dd82b2","Type":"ContainerDied","Data":"3b53fdb81ae192d1e2d60d728f9a004c1bcc4a7261d450195da1ced18a60cc91"} Oct 11 08:18:01 crc kubenswrapper[5016]: I1011 08:18:01.551925 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2" Oct 11 08:18:01 crc kubenswrapper[5016]: I1011 08:18:01.692060 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7f1dbc5-8326-4481-ac16-2f6737dd82b2-inventory\") pod \"c7f1dbc5-8326-4481-ac16-2f6737dd82b2\" (UID: \"c7f1dbc5-8326-4481-ac16-2f6737dd82b2\") " Oct 11 08:18:01 crc kubenswrapper[5016]: I1011 08:18:01.692331 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86pgf\" (UniqueName: \"kubernetes.io/projected/c7f1dbc5-8326-4481-ac16-2f6737dd82b2-kube-api-access-86pgf\") pod \"c7f1dbc5-8326-4481-ac16-2f6737dd82b2\" (UID: \"c7f1dbc5-8326-4481-ac16-2f6737dd82b2\") " Oct 11 08:18:01 crc kubenswrapper[5016]: I1011 08:18:01.692420 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c7f1dbc5-8326-4481-ac16-2f6737dd82b2-ceph\") pod \"c7f1dbc5-8326-4481-ac16-2f6737dd82b2\" (UID: \"c7f1dbc5-8326-4481-ac16-2f6737dd82b2\") " Oct 11 08:18:01 crc kubenswrapper[5016]: I1011 08:18:01.692459 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c7f1dbc5-8326-4481-ac16-2f6737dd82b2-ssh-key\") pod \"c7f1dbc5-8326-4481-ac16-2f6737dd82b2\" (UID: \"c7f1dbc5-8326-4481-ac16-2f6737dd82b2\") " Oct 11 08:18:01 crc kubenswrapper[5016]: I1011 08:18:01.698872 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7f1dbc5-8326-4481-ac16-2f6737dd82b2-ceph" (OuterVolumeSpecName: "ceph") pod "c7f1dbc5-8326-4481-ac16-2f6737dd82b2" (UID: "c7f1dbc5-8326-4481-ac16-2f6737dd82b2"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:18:01 crc kubenswrapper[5016]: I1011 08:18:01.699139 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7f1dbc5-8326-4481-ac16-2f6737dd82b2-kube-api-access-86pgf" (OuterVolumeSpecName: "kube-api-access-86pgf") pod "c7f1dbc5-8326-4481-ac16-2f6737dd82b2" (UID: "c7f1dbc5-8326-4481-ac16-2f6737dd82b2"). InnerVolumeSpecName "kube-api-access-86pgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:18:01 crc kubenswrapper[5016]: I1011 08:18:01.726784 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7f1dbc5-8326-4481-ac16-2f6737dd82b2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c7f1dbc5-8326-4481-ac16-2f6737dd82b2" (UID: "c7f1dbc5-8326-4481-ac16-2f6737dd82b2"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:18:01 crc kubenswrapper[5016]: I1011 08:18:01.734739 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7f1dbc5-8326-4481-ac16-2f6737dd82b2-inventory" (OuterVolumeSpecName: "inventory") pod "c7f1dbc5-8326-4481-ac16-2f6737dd82b2" (UID: "c7f1dbc5-8326-4481-ac16-2f6737dd82b2"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:18:01 crc kubenswrapper[5016]: I1011 08:18:01.794317 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86pgf\" (UniqueName: \"kubernetes.io/projected/c7f1dbc5-8326-4481-ac16-2f6737dd82b2-kube-api-access-86pgf\") on node \"crc\" DevicePath \"\"" Oct 11 08:18:01 crc kubenswrapper[5016]: I1011 08:18:01.794617 5016 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c7f1dbc5-8326-4481-ac16-2f6737dd82b2-ceph\") on node \"crc\" DevicePath \"\"" Oct 11 08:18:01 crc kubenswrapper[5016]: I1011 08:18:01.794628 5016 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c7f1dbc5-8326-4481-ac16-2f6737dd82b2-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 11 08:18:01 crc kubenswrapper[5016]: I1011 08:18:01.794640 5016 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7f1dbc5-8326-4481-ac16-2f6737dd82b2-inventory\") on node \"crc\" DevicePath \"\"" Oct 11 08:18:02 crc kubenswrapper[5016]: I1011 08:18:02.146638 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2" event={"ID":"c7f1dbc5-8326-4481-ac16-2f6737dd82b2","Type":"ContainerDied","Data":"3d372ab3a693ac4cce6b932102c72bbded4ae71f10331520e83ffd01cc72f25a"} Oct 11 08:18:02 crc kubenswrapper[5016]: I1011 08:18:02.147011 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d372ab3a693ac4cce6b932102c72bbded4ae71f10331520e83ffd01cc72f25a" Oct 11 08:18:02 crc kubenswrapper[5016]: I1011 08:18:02.146728 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2" Oct 11 08:18:02 crc kubenswrapper[5016]: I1011 08:18:02.250361 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9"] Oct 11 08:18:02 crc kubenswrapper[5016]: E1011 08:18:02.250852 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f1dbc5-8326-4481-ac16-2f6737dd82b2" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Oct 11 08:18:02 crc kubenswrapper[5016]: I1011 08:18:02.250877 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f1dbc5-8326-4481-ac16-2f6737dd82b2" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Oct 11 08:18:02 crc kubenswrapper[5016]: I1011 08:18:02.251132 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7f1dbc5-8326-4481-ac16-2f6737dd82b2" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Oct 11 08:18:02 crc kubenswrapper[5016]: I1011 08:18:02.251927 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9" Oct 11 08:18:02 crc kubenswrapper[5016]: I1011 08:18:02.254462 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 11 08:18:02 crc kubenswrapper[5016]: I1011 08:18:02.254827 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 08:18:02 crc kubenswrapper[5016]: I1011 08:18:02.254985 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 11 08:18:02 crc kubenswrapper[5016]: I1011 08:18:02.254995 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l8l9k" Oct 11 08:18:02 crc kubenswrapper[5016]: I1011 08:18:02.257871 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 08:18:02 crc kubenswrapper[5016]: I1011 08:18:02.264517 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9"] Oct 11 08:18:02 crc kubenswrapper[5016]: I1011 08:18:02.405543 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac92dbbc-a41a-4471-b3ac-67bffdc8f342-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9\" (UID: \"ac92dbbc-a41a-4471-b3ac-67bffdc8f342\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9" Oct 11 08:18:02 crc kubenswrapper[5016]: I1011 08:18:02.405592 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac92dbbc-a41a-4471-b3ac-67bffdc8f342-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9\" (UID: \"ac92dbbc-a41a-4471-b3ac-67bffdc8f342\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9" Oct 11 08:18:02 crc kubenswrapper[5016]: I1011 08:18:02.405650 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8xdk\" (UniqueName: \"kubernetes.io/projected/ac92dbbc-a41a-4471-b3ac-67bffdc8f342-kube-api-access-f8xdk\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9\" (UID: \"ac92dbbc-a41a-4471-b3ac-67bffdc8f342\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9" Oct 11 08:18:02 crc kubenswrapper[5016]: I1011 08:18:02.406106 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ac92dbbc-a41a-4471-b3ac-67bffdc8f342-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9\" (UID: \"ac92dbbc-a41a-4471-b3ac-67bffdc8f342\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9" Oct 11 08:18:02 crc kubenswrapper[5016]: I1011 08:18:02.508130 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ac92dbbc-a41a-4471-b3ac-67bffdc8f342-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9\" (UID: \"ac92dbbc-a41a-4471-b3ac-67bffdc8f342\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9" Oct 11 08:18:02 crc kubenswrapper[5016]: I1011 08:18:02.508289 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/ac92dbbc-a41a-4471-b3ac-67bffdc8f342-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9\" (UID: \"ac92dbbc-a41a-4471-b3ac-67bffdc8f342\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9" Oct 11 08:18:02 crc kubenswrapper[5016]: I1011 08:18:02.508327 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac92dbbc-a41a-4471-b3ac-67bffdc8f342-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9\" (UID: \"ac92dbbc-a41a-4471-b3ac-67bffdc8f342\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9" Oct 11 08:18:02 crc kubenswrapper[5016]: I1011 08:18:02.508418 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8xdk\" (UniqueName: \"kubernetes.io/projected/ac92dbbc-a41a-4471-b3ac-67bffdc8f342-kube-api-access-f8xdk\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9\" (UID: \"ac92dbbc-a41a-4471-b3ac-67bffdc8f342\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9" Oct 11 08:18:02 crc kubenswrapper[5016]: I1011 08:18:02.513849 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ac92dbbc-a41a-4471-b3ac-67bffdc8f342-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9\" (UID: \"ac92dbbc-a41a-4471-b3ac-67bffdc8f342\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9" Oct 11 08:18:02 crc kubenswrapper[5016]: I1011 08:18:02.515576 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac92dbbc-a41a-4471-b3ac-67bffdc8f342-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9\" (UID: \"ac92dbbc-a41a-4471-b3ac-67bffdc8f342\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9" Oct 11 08:18:02 crc kubenswrapper[5016]: I1011 08:18:02.518632 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac92dbbc-a41a-4471-b3ac-67bffdc8f342-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9\" (UID: \"ac92dbbc-a41a-4471-b3ac-67bffdc8f342\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9" Oct 11 08:18:02 crc kubenswrapper[5016]: I1011 08:18:02.527216 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8xdk\" (UniqueName: \"kubernetes.io/projected/ac92dbbc-a41a-4471-b3ac-67bffdc8f342-kube-api-access-f8xdk\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9\" (UID: \"ac92dbbc-a41a-4471-b3ac-67bffdc8f342\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9" Oct 11 08:18:02 crc kubenswrapper[5016]: I1011 08:18:02.575590 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9" Oct 11 08:18:03 crc kubenswrapper[5016]: I1011 08:18:03.129772 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9"] Oct 11 08:18:03 crc kubenswrapper[5016]: W1011 08:18:03.140396 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac92dbbc_a41a_4471_b3ac_67bffdc8f342.slice/crio-d19c360eb5348727de0f7153bb9ebdcc79a67b047e07b5b4616c9f04b0a5878b WatchSource:0}: Error finding container d19c360eb5348727de0f7153bb9ebdcc79a67b047e07b5b4616c9f04b0a5878b: Status 404 returned error can't find the container with id d19c360eb5348727de0f7153bb9ebdcc79a67b047e07b5b4616c9f04b0a5878b Oct 11 08:18:03 crc kubenswrapper[5016]: I1011 08:18:03.166394 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9" event={"ID":"ac92dbbc-a41a-4471-b3ac-67bffdc8f342","Type":"ContainerStarted","Data":"d19c360eb5348727de0f7153bb9ebdcc79a67b047e07b5b4616c9f04b0a5878b"} Oct 11 08:18:04 crc kubenswrapper[5016]: I1011 08:18:04.175472 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9" event={"ID":"ac92dbbc-a41a-4471-b3ac-67bffdc8f342","Type":"ContainerStarted","Data":"ae1f33f6db5ab6fafe8868ae56ce31c5d0a8491a938e8b8ba30ad5f7bfc51698"} Oct 11 08:18:04 crc kubenswrapper[5016]: I1011 08:18:04.202331 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9" podStartSLOduration=1.7279522840000001 podStartE2EDuration="2.202308928s" podCreationTimestamp="2025-10-11 08:18:02 +0000 UTC" firstStartedPulling="2025-10-11 08:18:03.143698305 +0000 UTC m=+2271.044154261" lastFinishedPulling="2025-10-11 08:18:03.618054919 +0000 UTC m=+2271.518510905" observedRunningTime="2025-10-11 08:18:04.197710067 +0000 UTC m=+2272.098166023" watchObservedRunningTime="2025-10-11 08:18:04.202308928 +0000 UTC m=+2272.102764874" Oct 11 08:18:09 crc kubenswrapper[5016]: I1011 08:18:09.134942 5016 scope.go:117] "RemoveContainer" containerID="8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65" Oct 11 08:18:09 crc kubenswrapper[5016]: E1011 08:18:09.135799 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:18:24 crc kubenswrapper[5016]: I1011 08:18:24.133583 5016 scope.go:117] "RemoveContainer" containerID="8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65" Oct 11 08:18:24 crc kubenswrapper[5016]: E1011 08:18:24.135193 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:18:37 crc kubenswrapper[5016]: 
I1011 08:18:37.133934 5016 scope.go:117] "RemoveContainer" containerID="8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65" Oct 11 08:18:37 crc kubenswrapper[5016]: E1011 08:18:37.134897 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:18:48 crc kubenswrapper[5016]: I1011 08:18:48.708979 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kt4jw"] Oct 11 08:18:48 crc kubenswrapper[5016]: I1011 08:18:48.711182 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kt4jw" Oct 11 08:18:48 crc kubenswrapper[5016]: I1011 08:18:48.733133 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kt4jw"] Oct 11 08:18:48 crc kubenswrapper[5016]: I1011 08:18:48.784953 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7stb6\" (UniqueName: \"kubernetes.io/projected/35f50b2c-cd40-4799-890b-05f9e2229f7b-kube-api-access-7stb6\") pod \"community-operators-kt4jw\" (UID: \"35f50b2c-cd40-4799-890b-05f9e2229f7b\") " pod="openshift-marketplace/community-operators-kt4jw" Oct 11 08:18:48 crc kubenswrapper[5016]: I1011 08:18:48.785067 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35f50b2c-cd40-4799-890b-05f9e2229f7b-catalog-content\") pod \"community-operators-kt4jw\" (UID: \"35f50b2c-cd40-4799-890b-05f9e2229f7b\") " pod="openshift-marketplace/community-operators-kt4jw" Oct 11 08:18:48 crc kubenswrapper[5016]: I1011 08:18:48.785105 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35f50b2c-cd40-4799-890b-05f9e2229f7b-utilities\") pod \"community-operators-kt4jw\" (UID: \"35f50b2c-cd40-4799-890b-05f9e2229f7b\") " pod="openshift-marketplace/community-operators-kt4jw" Oct 11 08:18:48 crc kubenswrapper[5016]: I1011 08:18:48.887159 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7stb6\" (UniqueName: \"kubernetes.io/projected/35f50b2c-cd40-4799-890b-05f9e2229f7b-kube-api-access-7stb6\") pod \"community-operators-kt4jw\" (UID: \"35f50b2c-cd40-4799-890b-05f9e2229f7b\") " pod="openshift-marketplace/community-operators-kt4jw" Oct 11 08:18:48 crc kubenswrapper[5016]: I1011 08:18:48.887278 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35f50b2c-cd40-4799-890b-05f9e2229f7b-catalog-content\") pod \"community-operators-kt4jw\" (UID: \"35f50b2c-cd40-4799-890b-05f9e2229f7b\") " pod="openshift-marketplace/community-operators-kt4jw" Oct 11 08:18:48 crc kubenswrapper[5016]: I1011 08:18:48.887313 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35f50b2c-cd40-4799-890b-05f9e2229f7b-utilities\") pod \"community-operators-kt4jw\" (UID: \"35f50b2c-cd40-4799-890b-05f9e2229f7b\") " 
pod="openshift-marketplace/community-operators-kt4jw" Oct 11 08:18:48 crc kubenswrapper[5016]: I1011 08:18:48.887778 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35f50b2c-cd40-4799-890b-05f9e2229f7b-catalog-content\") pod \"community-operators-kt4jw\" (UID: \"35f50b2c-cd40-4799-890b-05f9e2229f7b\") " pod="openshift-marketplace/community-operators-kt4jw" Oct 11 08:18:48 crc kubenswrapper[5016]: I1011 08:18:48.887811 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35f50b2c-cd40-4799-890b-05f9e2229f7b-utilities\") pod \"community-operators-kt4jw\" (UID: \"35f50b2c-cd40-4799-890b-05f9e2229f7b\") " pod="openshift-marketplace/community-operators-kt4jw" Oct 11 08:18:48 crc kubenswrapper[5016]: I1011 08:18:48.915402 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7stb6\" (UniqueName: \"kubernetes.io/projected/35f50b2c-cd40-4799-890b-05f9e2229f7b-kube-api-access-7stb6\") pod \"community-operators-kt4jw\" (UID: \"35f50b2c-cd40-4799-890b-05f9e2229f7b\") " pod="openshift-marketplace/community-operators-kt4jw" Oct 11 08:18:49 crc kubenswrapper[5016]: I1011 08:18:49.031834 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kt4jw" Oct 11 08:18:49 crc kubenswrapper[5016]: I1011 08:18:49.135450 5016 scope.go:117] "RemoveContainer" containerID="8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65" Oct 11 08:18:49 crc kubenswrapper[5016]: E1011 08:18:49.135728 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:18:49 crc kubenswrapper[5016]: I1011 08:18:49.557518 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kt4jw"] Oct 11 08:18:49 crc kubenswrapper[5016]: I1011 08:18:49.670831 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kt4jw" event={"ID":"35f50b2c-cd40-4799-890b-05f9e2229f7b","Type":"ContainerStarted","Data":"76cd32076d8dc012c1b59d633f92c2cafb00a93534bbeb6b6caaf1f379ab860c"} Oct 11 08:18:50 crc kubenswrapper[5016]: I1011 08:18:50.691642 5016 generic.go:334] "Generic (PLEG): container finished" podID="35f50b2c-cd40-4799-890b-05f9e2229f7b" containerID="97cdfcbe5ebd65f6adbe614b10f5faeb6424366715f32c5b718af2d6b2bc1298" exitCode=0 Oct 11 08:18:50 crc kubenswrapper[5016]: I1011 08:18:50.691740 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kt4jw" event={"ID":"35f50b2c-cd40-4799-890b-05f9e2229f7b","Type":"ContainerDied","Data":"97cdfcbe5ebd65f6adbe614b10f5faeb6424366715f32c5b718af2d6b2bc1298"} Oct 11 08:18:51 crc kubenswrapper[5016]: I1011 08:18:51.728508 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kt4jw" event={"ID":"35f50b2c-cd40-4799-890b-05f9e2229f7b","Type":"ContainerStarted","Data":"e31eb60b9c0903d4a7a4b14476abe570d9ab183d148547affe5b19979a762280"} Oct 11 08:18:52 crc kubenswrapper[5016]: I1011 08:18:52.745616 5016 
generic.go:334] "Generic (PLEG): container finished" podID="35f50b2c-cd40-4799-890b-05f9e2229f7b" containerID="e31eb60b9c0903d4a7a4b14476abe570d9ab183d148547affe5b19979a762280" exitCode=0 Oct 11 08:18:52 crc kubenswrapper[5016]: I1011 08:18:52.746002 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kt4jw" event={"ID":"35f50b2c-cd40-4799-890b-05f9e2229f7b","Type":"ContainerDied","Data":"e31eb60b9c0903d4a7a4b14476abe570d9ab183d148547affe5b19979a762280"} Oct 11 08:18:53 crc kubenswrapper[5016]: I1011 08:18:53.776480 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kt4jw" event={"ID":"35f50b2c-cd40-4799-890b-05f9e2229f7b","Type":"ContainerStarted","Data":"9b7d829065f16d7c665961ae97476191f6beb4885a7a7bacc5ba9c72265ac243"} Oct 11 08:18:53 crc kubenswrapper[5016]: I1011 08:18:53.806242 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kt4jw" podStartSLOduration=3.348478277 podStartE2EDuration="5.806221162s" podCreationTimestamp="2025-10-11 08:18:48 +0000 UTC" firstStartedPulling="2025-10-11 08:18:50.694864741 +0000 UTC m=+2318.595320687" lastFinishedPulling="2025-10-11 08:18:53.152607626 +0000 UTC m=+2321.053063572" observedRunningTime="2025-10-11 08:18:53.794476742 +0000 UTC m=+2321.694932688" watchObservedRunningTime="2025-10-11 08:18:53.806221162 +0000 UTC m=+2321.706677098" Oct 11 08:18:57 crc kubenswrapper[5016]: I1011 08:18:57.820405 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9" event={"ID":"ac92dbbc-a41a-4471-b3ac-67bffdc8f342","Type":"ContainerDied","Data":"ae1f33f6db5ab6fafe8868ae56ce31c5d0a8491a938e8b8ba30ad5f7bfc51698"} Oct 11 08:18:57 crc kubenswrapper[5016]: I1011 08:18:57.820296 5016 generic.go:334] "Generic (PLEG): container finished" podID="ac92dbbc-a41a-4471-b3ac-67bffdc8f342" containerID="ae1f33f6db5ab6fafe8868ae56ce31c5d0a8491a938e8b8ba30ad5f7bfc51698" exitCode=0 Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.032856 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kt4jw" Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.033421 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kt4jw" Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.105691 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kt4jw" Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.288776 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9" Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.356868 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac92dbbc-a41a-4471-b3ac-67bffdc8f342-inventory\") pod \"ac92dbbc-a41a-4471-b3ac-67bffdc8f342\" (UID: \"ac92dbbc-a41a-4471-b3ac-67bffdc8f342\") " Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.357112 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ac92dbbc-a41a-4471-b3ac-67bffdc8f342-ceph\") pod \"ac92dbbc-a41a-4471-b3ac-67bffdc8f342\" (UID: \"ac92dbbc-a41a-4471-b3ac-67bffdc8f342\") " Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.357148 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8xdk\" (UniqueName: \"kubernetes.io/projected/ac92dbbc-a41a-4471-b3ac-67bffdc8f342-kube-api-access-f8xdk\") pod \"ac92dbbc-a41a-4471-b3ac-67bffdc8f342\" (UID: \"ac92dbbc-a41a-4471-b3ac-67bffdc8f342\") " Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.357273 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac92dbbc-a41a-4471-b3ac-67bffdc8f342-ssh-key\") pod \"ac92dbbc-a41a-4471-b3ac-67bffdc8f342\" (UID: \"ac92dbbc-a41a-4471-b3ac-67bffdc8f342\") " Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.369887 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac92dbbc-a41a-4471-b3ac-67bffdc8f342-ceph" (OuterVolumeSpecName: "ceph") pod "ac92dbbc-a41a-4471-b3ac-67bffdc8f342" (UID: "ac92dbbc-a41a-4471-b3ac-67bffdc8f342"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.372727 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac92dbbc-a41a-4471-b3ac-67bffdc8f342-kube-api-access-f8xdk" (OuterVolumeSpecName: "kube-api-access-f8xdk") pod "ac92dbbc-a41a-4471-b3ac-67bffdc8f342" (UID: "ac92dbbc-a41a-4471-b3ac-67bffdc8f342"). InnerVolumeSpecName "kube-api-access-f8xdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.389879 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac92dbbc-a41a-4471-b3ac-67bffdc8f342-inventory" (OuterVolumeSpecName: "inventory") pod "ac92dbbc-a41a-4471-b3ac-67bffdc8f342" (UID: "ac92dbbc-a41a-4471-b3ac-67bffdc8f342"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.414101 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac92dbbc-a41a-4471-b3ac-67bffdc8f342-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ac92dbbc-a41a-4471-b3ac-67bffdc8f342" (UID: "ac92dbbc-a41a-4471-b3ac-67bffdc8f342"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.459950 5016 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac92dbbc-a41a-4471-b3ac-67bffdc8f342-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.459994 5016 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac92dbbc-a41a-4471-b3ac-67bffdc8f342-inventory\") on node \"crc\" DevicePath \"\"" Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.460004 5016 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ac92dbbc-a41a-4471-b3ac-67bffdc8f342-ceph\") on node \"crc\" DevicePath \"\"" Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.460014 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8xdk\" (UniqueName: \"kubernetes.io/projected/ac92dbbc-a41a-4471-b3ac-67bffdc8f342-kube-api-access-f8xdk\") on node \"crc\" DevicePath \"\"" Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.845178 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9" event={"ID":"ac92dbbc-a41a-4471-b3ac-67bffdc8f342","Type":"ContainerDied","Data":"d19c360eb5348727de0f7153bb9ebdcc79a67b047e07b5b4616c9f04b0a5878b"} Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.845255 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d19c360eb5348727de0f7153bb9ebdcc79a67b047e07b5b4616c9f04b0a5878b" Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.845214 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9" Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.945557 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-sv6h6"] Oct 11 08:18:59 crc kubenswrapper[5016]: E1011 08:18:59.946184 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac92dbbc-a41a-4471-b3ac-67bffdc8f342" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.946206 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac92dbbc-a41a-4471-b3ac-67bffdc8f342" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.946426 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac92dbbc-a41a-4471-b3ac-67bffdc8f342" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.947459 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sv6h6" Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.954063 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.954322 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kt4jw" Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.954400 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.954457 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.954586 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l8l9k" Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.954909 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.973605 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/40e86602-48de-424b-a248-ce46f60b770d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-sv6h6\" (UID: \"40e86602-48de-424b-a248-ce46f60b770d\") " pod="openstack/ssh-known-hosts-edpm-deployment-sv6h6" Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.973710 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/40e86602-48de-424b-a248-ce46f60b770d-ceph\") pod \"ssh-known-hosts-edpm-deployment-sv6h6\" (UID: \"40e86602-48de-424b-a248-ce46f60b770d\") " pod="openstack/ssh-known-hosts-edpm-deployment-sv6h6" Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.973744 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtk5z\" (UniqueName: \"kubernetes.io/projected/40e86602-48de-424b-a248-ce46f60b770d-kube-api-access-jtk5z\") pod \"ssh-known-hosts-edpm-deployment-sv6h6\" (UID: \"40e86602-48de-424b-a248-ce46f60b770d\") " pod="openstack/ssh-known-hosts-edpm-deployment-sv6h6" Oct 11 08:18:59 crc kubenswrapper[5016]: I1011 08:18:59.973790 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/40e86602-48de-424b-a248-ce46f60b770d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-sv6h6\" (UID: \"40e86602-48de-424b-a248-ce46f60b770d\") " pod="openstack/ssh-known-hosts-edpm-deployment-sv6h6" Oct 11 08:19:00 crc kubenswrapper[5016]: I1011 08:19:00.007514 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-sv6h6"] Oct 11 08:19:00 crc kubenswrapper[5016]: I1011 08:19:00.060144 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kt4jw"] Oct 11 08:19:00 crc kubenswrapper[5016]: I1011 08:19:00.077372 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/40e86602-48de-424b-a248-ce46f60b770d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-sv6h6\" (UID: \"40e86602-48de-424b-a248-ce46f60b770d\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-sv6h6" Oct 11 08:19:00 crc kubenswrapper[5016]: I1011 08:19:00.077605 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/40e86602-48de-424b-a248-ce46f60b770d-ceph\") pod \"ssh-known-hosts-edpm-deployment-sv6h6\" (UID: \"40e86602-48de-424b-a248-ce46f60b770d\") " pod="openstack/ssh-known-hosts-edpm-deployment-sv6h6" Oct 11 08:19:00 crc kubenswrapper[5016]: I1011 08:19:00.077643 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtk5z\" (UniqueName: \"kubernetes.io/projected/40e86602-48de-424b-a248-ce46f60b770d-kube-api-access-jtk5z\") pod \"ssh-known-hosts-edpm-deployment-sv6h6\" (UID: \"40e86602-48de-424b-a248-ce46f60b770d\") " pod="openstack/ssh-known-hosts-edpm-deployment-sv6h6" Oct 11 08:19:00 crc kubenswrapper[5016]: I1011 08:19:00.077706 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/40e86602-48de-424b-a248-ce46f60b770d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-sv6h6\" (UID: \"40e86602-48de-424b-a248-ce46f60b770d\") " pod="openstack/ssh-known-hosts-edpm-deployment-sv6h6" Oct 11 08:19:00 crc kubenswrapper[5016]: I1011 08:19:00.081927 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/40e86602-48de-424b-a248-ce46f60b770d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-sv6h6\" (UID: \"40e86602-48de-424b-a248-ce46f60b770d\") " pod="openstack/ssh-known-hosts-edpm-deployment-sv6h6" Oct 11 08:19:00 crc kubenswrapper[5016]: I1011 08:19:00.082039 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/40e86602-48de-424b-a248-ce46f60b770d-ceph\") pod \"ssh-known-hosts-edpm-deployment-sv6h6\" (UID: \"40e86602-48de-424b-a248-ce46f60b770d\") " pod="openstack/ssh-known-hosts-edpm-deployment-sv6h6" Oct 11 08:19:00 crc kubenswrapper[5016]: I1011 08:19:00.083236 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/40e86602-48de-424b-a248-ce46f60b770d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-sv6h6\" (UID: \"40e86602-48de-424b-a248-ce46f60b770d\") " pod="openstack/ssh-known-hosts-edpm-deployment-sv6h6" Oct 11 08:19:00 crc kubenswrapper[5016]: I1011 08:19:00.098497 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtk5z\" (UniqueName: \"kubernetes.io/projected/40e86602-48de-424b-a248-ce46f60b770d-kube-api-access-jtk5z\") pod \"ssh-known-hosts-edpm-deployment-sv6h6\" (UID: \"40e86602-48de-424b-a248-ce46f60b770d\") " pod="openstack/ssh-known-hosts-edpm-deployment-sv6h6" Oct 11 08:19:00 crc kubenswrapper[5016]: I1011 08:19:00.267245 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sv6h6" Oct 11 08:19:00 crc kubenswrapper[5016]: I1011 08:19:00.960087 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-sv6h6"] Oct 11 08:19:01 crc kubenswrapper[5016]: I1011 08:19:01.133580 5016 scope.go:117] "RemoveContainer" containerID="8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65" Oct 11 08:19:01 crc kubenswrapper[5016]: E1011 08:19:01.133993 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:19:01 crc kubenswrapper[5016]: I1011 08:19:01.865591 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sv6h6" event={"ID":"40e86602-48de-424b-a248-ce46f60b770d","Type":"ContainerStarted","Data":"9fb78ad35d4dcc85f80e350d078133d821f988fcb5eb6bb87c4d8f609d145bd5"} Oct 11 08:19:01 crc kubenswrapper[5016]: I1011 08:19:01.866538 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sv6h6" event={"ID":"40e86602-48de-424b-a248-ce46f60b770d","Type":"ContainerStarted","Data":"d00cb04bd59f6688ee2ea5e025e5be3a522843641f5c3b91bcb764f8d06b7429"} Oct 11 08:19:01 crc kubenswrapper[5016]: I1011 08:19:01.865892 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kt4jw" podUID="35f50b2c-cd40-4799-890b-05f9e2229f7b" containerName="registry-server" containerID="cri-o://9b7d829065f16d7c665961ae97476191f6beb4885a7a7bacc5ba9c72265ac243" gracePeriod=2 Oct 11 08:19:01 crc kubenswrapper[5016]: I1011 08:19:01.894236 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-sv6h6" podStartSLOduration=2.355867356 podStartE2EDuration="2.894214486s" podCreationTimestamp="2025-10-11 08:18:59 +0000 UTC" firstStartedPulling="2025-10-11 08:19:00.965537135 +0000 UTC m=+2328.865993081" lastFinishedPulling="2025-10-11 08:19:01.503884255 +0000 UTC m=+2329.404340211" observedRunningTime="2025-10-11 08:19:01.888031653 +0000 UTC m=+2329.788487639" watchObservedRunningTime="2025-10-11 08:19:01.894214486 +0000 UTC m=+2329.794670442" Oct 11 08:19:02 crc kubenswrapper[5016]: I1011 08:19:02.343884 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kt4jw" Oct 11 08:19:02 crc kubenswrapper[5016]: I1011 08:19:02.427711 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35f50b2c-cd40-4799-890b-05f9e2229f7b-utilities\") pod \"35f50b2c-cd40-4799-890b-05f9e2229f7b\" (UID: \"35f50b2c-cd40-4799-890b-05f9e2229f7b\") " Oct 11 08:19:02 crc kubenswrapper[5016]: I1011 08:19:02.428282 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35f50b2c-cd40-4799-890b-05f9e2229f7b-catalog-content\") pod \"35f50b2c-cd40-4799-890b-05f9e2229f7b\" (UID: \"35f50b2c-cd40-4799-890b-05f9e2229f7b\") " Oct 11 08:19:02 crc kubenswrapper[5016]: I1011 08:19:02.428321 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7stb6\" (UniqueName: \"kubernetes.io/projected/35f50b2c-cd40-4799-890b-05f9e2229f7b-kube-api-access-7stb6\") pod \"35f50b2c-cd40-4799-890b-05f9e2229f7b\" (UID: \"35f50b2c-cd40-4799-890b-05f9e2229f7b\") " Oct 11 08:19:02 crc kubenswrapper[5016]: I1011 08:19:02.428849 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35f50b2c-cd40-4799-890b-05f9e2229f7b-utilities" (OuterVolumeSpecName: "utilities") pod "35f50b2c-cd40-4799-890b-05f9e2229f7b" (UID: "35f50b2c-cd40-4799-890b-05f9e2229f7b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:19:02 crc kubenswrapper[5016]: I1011 08:19:02.434164 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35f50b2c-cd40-4799-890b-05f9e2229f7b-kube-api-access-7stb6" (OuterVolumeSpecName: "kube-api-access-7stb6") pod "35f50b2c-cd40-4799-890b-05f9e2229f7b" (UID: "35f50b2c-cd40-4799-890b-05f9e2229f7b"). InnerVolumeSpecName "kube-api-access-7stb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:19:02 crc kubenswrapper[5016]: I1011 08:19:02.479020 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35f50b2c-cd40-4799-890b-05f9e2229f7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35f50b2c-cd40-4799-890b-05f9e2229f7b" (UID: "35f50b2c-cd40-4799-890b-05f9e2229f7b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:19:02 crc kubenswrapper[5016]: I1011 08:19:02.545270 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35f50b2c-cd40-4799-890b-05f9e2229f7b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 08:19:02 crc kubenswrapper[5016]: I1011 08:19:02.545814 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7stb6\" (UniqueName: \"kubernetes.io/projected/35f50b2c-cd40-4799-890b-05f9e2229f7b-kube-api-access-7stb6\") on node \"crc\" DevicePath \"\"" Oct 11 08:19:02 crc kubenswrapper[5016]: I1011 08:19:02.545921 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35f50b2c-cd40-4799-890b-05f9e2229f7b-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 08:19:02 crc kubenswrapper[5016]: I1011 08:19:02.875640 5016 generic.go:334] "Generic (PLEG): container finished" podID="35f50b2c-cd40-4799-890b-05f9e2229f7b" containerID="9b7d829065f16d7c665961ae97476191f6beb4885a7a7bacc5ba9c72265ac243" exitCode=0 Oct 11 08:19:02 crc kubenswrapper[5016]: I1011 08:19:02.875691 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kt4jw" event={"ID":"35f50b2c-cd40-4799-890b-05f9e2229f7b","Type":"ContainerDied","Data":"9b7d829065f16d7c665961ae97476191f6beb4885a7a7bacc5ba9c72265ac243"} Oct 11 08:19:02 crc kubenswrapper[5016]: I1011 08:19:02.875750 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kt4jw" event={"ID":"35f50b2c-cd40-4799-890b-05f9e2229f7b","Type":"ContainerDied","Data":"76cd32076d8dc012c1b59d633f92c2cafb00a93534bbeb6b6caaf1f379ab860c"} Oct 11 08:19:02 crc kubenswrapper[5016]: I1011 08:19:02.875770 5016 scope.go:117] "RemoveContainer" containerID="9b7d829065f16d7c665961ae97476191f6beb4885a7a7bacc5ba9c72265ac243" Oct 11 08:19:02 crc kubenswrapper[5016]: I1011 08:19:02.875802 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kt4jw" Oct 11 08:19:02 crc kubenswrapper[5016]: I1011 08:19:02.903331 5016 scope.go:117] "RemoveContainer" containerID="e31eb60b9c0903d4a7a4b14476abe570d9ab183d148547affe5b19979a762280" Oct 11 08:19:02 crc kubenswrapper[5016]: I1011 08:19:02.928960 5016 scope.go:117] "RemoveContainer" containerID="97cdfcbe5ebd65f6adbe614b10f5faeb6424366715f32c5b718af2d6b2bc1298" Oct 11 08:19:02 crc kubenswrapper[5016]: I1011 08:19:02.939268 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kt4jw"] Oct 11 08:19:02 crc kubenswrapper[5016]: I1011 08:19:02.950043 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kt4jw"] Oct 11 08:19:02 crc kubenswrapper[5016]: I1011 08:19:02.991188 5016 scope.go:117] "RemoveContainer" containerID="9b7d829065f16d7c665961ae97476191f6beb4885a7a7bacc5ba9c72265ac243" Oct 11 08:19:02 crc kubenswrapper[5016]: E1011 08:19:02.992056 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b7d829065f16d7c665961ae97476191f6beb4885a7a7bacc5ba9c72265ac243\": container with ID starting with 9b7d829065f16d7c665961ae97476191f6beb4885a7a7bacc5ba9c72265ac243 not found: ID does not exist" containerID="9b7d829065f16d7c665961ae97476191f6beb4885a7a7bacc5ba9c72265ac243" Oct 11 08:19:02 crc kubenswrapper[5016]: I1011 08:19:02.992157 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b7d829065f16d7c665961ae97476191f6beb4885a7a7bacc5ba9c72265ac243"} err="failed to get container status \"9b7d829065f16d7c665961ae97476191f6beb4885a7a7bacc5ba9c72265ac243\": rpc error: code = NotFound desc = could not find container \"9b7d829065f16d7c665961ae97476191f6beb4885a7a7bacc5ba9c72265ac243\": container with ID starting with 9b7d829065f16d7c665961ae97476191f6beb4885a7a7bacc5ba9c72265ac243 not found: ID does not exist" Oct 11 08:19:02 crc kubenswrapper[5016]: I1011 08:19:02.992214 5016 scope.go:117] "RemoveContainer" containerID="e31eb60b9c0903d4a7a4b14476abe570d9ab183d148547affe5b19979a762280" Oct 11 08:19:02 crc kubenswrapper[5016]: E1011 08:19:02.992942 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e31eb60b9c0903d4a7a4b14476abe570d9ab183d148547affe5b19979a762280\": container with ID starting with e31eb60b9c0903d4a7a4b14476abe570d9ab183d148547affe5b19979a762280 not found: ID does not exist" containerID="e31eb60b9c0903d4a7a4b14476abe570d9ab183d148547affe5b19979a762280" Oct 11 08:19:02 crc kubenswrapper[5016]: I1011 08:19:02.993075 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e31eb60b9c0903d4a7a4b14476abe570d9ab183d148547affe5b19979a762280"} err="failed to get container status \"e31eb60b9c0903d4a7a4b14476abe570d9ab183d148547affe5b19979a762280\": rpc error: code = NotFound desc = could not find container \"e31eb60b9c0903d4a7a4b14476abe570d9ab183d148547affe5b19979a762280\": container with ID starting with e31eb60b9c0903d4a7a4b14476abe570d9ab183d148547affe5b19979a762280 not found: ID does not exist" Oct 11 08:19:02 crc kubenswrapper[5016]: I1011 08:19:02.993159 5016 scope.go:117] "RemoveContainer" containerID="97cdfcbe5ebd65f6adbe614b10f5faeb6424366715f32c5b718af2d6b2bc1298" Oct 11 08:19:02 crc kubenswrapper[5016]: E1011 08:19:02.993707 5016 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"97cdfcbe5ebd65f6adbe614b10f5faeb6424366715f32c5b718af2d6b2bc1298\": container with ID starting with 97cdfcbe5ebd65f6adbe614b10f5faeb6424366715f32c5b718af2d6b2bc1298 not found: ID does not exist" containerID="97cdfcbe5ebd65f6adbe614b10f5faeb6424366715f32c5b718af2d6b2bc1298" Oct 11 08:19:02 crc kubenswrapper[5016]: I1011 08:19:02.993745 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97cdfcbe5ebd65f6adbe614b10f5faeb6424366715f32c5b718af2d6b2bc1298"} err="failed to get container status \"97cdfcbe5ebd65f6adbe614b10f5faeb6424366715f32c5b718af2d6b2bc1298\": rpc error: code = NotFound desc = could not find container \"97cdfcbe5ebd65f6adbe614b10f5faeb6424366715f32c5b718af2d6b2bc1298\": container with ID starting with 97cdfcbe5ebd65f6adbe614b10f5faeb6424366715f32c5b718af2d6b2bc1298 not found: ID does not exist" Oct 11 08:19:03 crc kubenswrapper[5016]: I1011 08:19:03.150208 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35f50b2c-cd40-4799-890b-05f9e2229f7b" path="/var/lib/kubelet/pods/35f50b2c-cd40-4799-890b-05f9e2229f7b/volumes" Oct 11 08:19:13 crc kubenswrapper[5016]: I1011 08:19:13.007168 5016 generic.go:334] "Generic (PLEG): container finished" podID="40e86602-48de-424b-a248-ce46f60b770d" containerID="9fb78ad35d4dcc85f80e350d078133d821f988fcb5eb6bb87c4d8f609d145bd5" exitCode=0 Oct 11 08:19:13 crc kubenswrapper[5016]: I1011 08:19:13.007303 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sv6h6" event={"ID":"40e86602-48de-424b-a248-ce46f60b770d","Type":"ContainerDied","Data":"9fb78ad35d4dcc85f80e350d078133d821f988fcb5eb6bb87c4d8f609d145bd5"} Oct 11 08:19:13 crc kubenswrapper[5016]: I1011 08:19:13.151424 5016 scope.go:117] "RemoveContainer" containerID="8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65" Oct 11 08:19:13 crc kubenswrapper[5016]: E1011 08:19:13.152003 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:19:14 crc kubenswrapper[5016]: I1011 08:19:14.438920 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sv6h6" Oct 11 08:19:14 crc kubenswrapper[5016]: I1011 08:19:14.553639 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/40e86602-48de-424b-a248-ce46f60b770d-inventory-0\") pod \"40e86602-48de-424b-a248-ce46f60b770d\" (UID: \"40e86602-48de-424b-a248-ce46f60b770d\") " Oct 11 08:19:14 crc kubenswrapper[5016]: I1011 08:19:14.553777 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/40e86602-48de-424b-a248-ce46f60b770d-ceph\") pod \"40e86602-48de-424b-a248-ce46f60b770d\" (UID: \"40e86602-48de-424b-a248-ce46f60b770d\") " Oct 11 08:19:14 crc kubenswrapper[5016]: I1011 08:19:14.553881 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtk5z\" (UniqueName: \"kubernetes.io/projected/40e86602-48de-424b-a248-ce46f60b770d-kube-api-access-jtk5z\") pod \"40e86602-48de-424b-a248-ce46f60b770d\" (UID: \"40e86602-48de-424b-a248-ce46f60b770d\") " Oct 11 08:19:14 crc kubenswrapper[5016]: I1011 08:19:14.553961 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/40e86602-48de-424b-a248-ce46f60b770d-ssh-key-openstack-edpm-ipam\") pod \"40e86602-48de-424b-a248-ce46f60b770d\" (UID: \"40e86602-48de-424b-a248-ce46f60b770d\") " Oct 11 08:19:14 crc kubenswrapper[5016]: I1011 08:19:14.561865 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40e86602-48de-424b-a248-ce46f60b770d-kube-api-access-jtk5z" (OuterVolumeSpecName: "kube-api-access-jtk5z") pod "40e86602-48de-424b-a248-ce46f60b770d" (UID: "40e86602-48de-424b-a248-ce46f60b770d"). InnerVolumeSpecName "kube-api-access-jtk5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:19:14 crc kubenswrapper[5016]: I1011 08:19:14.563931 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40e86602-48de-424b-a248-ce46f60b770d-ceph" (OuterVolumeSpecName: "ceph") pod "40e86602-48de-424b-a248-ce46f60b770d" (UID: "40e86602-48de-424b-a248-ce46f60b770d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:19:14 crc kubenswrapper[5016]: I1011 08:19:14.589852 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40e86602-48de-424b-a248-ce46f60b770d-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "40e86602-48de-424b-a248-ce46f60b770d" (UID: "40e86602-48de-424b-a248-ce46f60b770d"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:19:14 crc kubenswrapper[5016]: I1011 08:19:14.604592 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40e86602-48de-424b-a248-ce46f60b770d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "40e86602-48de-424b-a248-ce46f60b770d" (UID: "40e86602-48de-424b-a248-ce46f60b770d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:19:14 crc kubenswrapper[5016]: I1011 08:19:14.655889 5016 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/40e86602-48de-424b-a248-ce46f60b770d-inventory-0\") on node \"crc\" DevicePath \"\"" Oct 11 08:19:14 crc kubenswrapper[5016]: I1011 08:19:14.656364 5016 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/40e86602-48de-424b-a248-ce46f60b770d-ceph\") on node \"crc\" DevicePath \"\"" Oct 11 08:19:14 crc kubenswrapper[5016]: I1011 08:19:14.656378 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtk5z\" (UniqueName: \"kubernetes.io/projected/40e86602-48de-424b-a248-ce46f60b770d-kube-api-access-jtk5z\") on node \"crc\" DevicePath \"\"" Oct 11 08:19:14 crc kubenswrapper[5016]: I1011 08:19:14.656392 5016 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/40e86602-48de-424b-a248-ce46f60b770d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.037066 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sv6h6" event={"ID":"40e86602-48de-424b-a248-ce46f60b770d","Type":"ContainerDied","Data":"d00cb04bd59f6688ee2ea5e025e5be3a522843641f5c3b91bcb764f8d06b7429"} Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.037130 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d00cb04bd59f6688ee2ea5e025e5be3a522843641f5c3b91bcb764f8d06b7429" Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.037199 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sv6h6" Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.147062 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-wc526"] Oct 11 08:19:15 crc kubenswrapper[5016]: E1011 08:19:15.147505 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f50b2c-cd40-4799-890b-05f9e2229f7b" containerName="extract-utilities" Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.147526 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f50b2c-cd40-4799-890b-05f9e2229f7b" containerName="extract-utilities" Oct 11 08:19:15 crc kubenswrapper[5016]: E1011 08:19:15.147535 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f50b2c-cd40-4799-890b-05f9e2229f7b" containerName="extract-content" Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.147544 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f50b2c-cd40-4799-890b-05f9e2229f7b" containerName="extract-content" Oct 11 08:19:15 crc kubenswrapper[5016]: E1011 08:19:15.147576 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40e86602-48de-424b-a248-ce46f60b770d" containerName="ssh-known-hosts-edpm-deployment" Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.147584 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e86602-48de-424b-a248-ce46f60b770d" containerName="ssh-known-hosts-edpm-deployment" Oct 11 08:19:15 crc kubenswrapper[5016]: E1011 08:19:15.147617 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f50b2c-cd40-4799-890b-05f9e2229f7b" containerName="registry-server" Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.147624 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f50b2c-cd40-4799-890b-05f9e2229f7b" containerName="registry-server" Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.147847 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f50b2c-cd40-4799-890b-05f9e2229f7b" containerName="registry-server" Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.147870 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="40e86602-48de-424b-a248-ce46f60b770d" containerName="ssh-known-hosts-edpm-deployment" Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.148767 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wc526" Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.152680 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.152892 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.153037 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.153249 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.153230 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-wc526"] Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.153565 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l8l9k" Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.288296 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcpv4\" (UniqueName: \"kubernetes.io/projected/9c4708ad-365b-46c1-a1ad-5945ff855420-kube-api-access-wcpv4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wc526\" (UID: \"9c4708ad-365b-46c1-a1ad-5945ff855420\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wc526" Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.288614 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9c4708ad-365b-46c1-a1ad-5945ff855420-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wc526\" (UID: \"9c4708ad-365b-46c1-a1ad-5945ff855420\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wc526" Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.288786 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c4708ad-365b-46c1-a1ad-5945ff855420-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wc526\" (UID: \"9c4708ad-365b-46c1-a1ad-5945ff855420\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wc526" Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.288827 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9c4708ad-365b-46c1-a1ad-5945ff855420-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wc526\" (UID: \"9c4708ad-365b-46c1-a1ad-5945ff855420\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wc526" Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.391082 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9c4708ad-365b-46c1-a1ad-5945ff855420-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wc526\" (UID: \"9c4708ad-365b-46c1-a1ad-5945ff855420\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wc526" Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.391236 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/9c4708ad-365b-46c1-a1ad-5945ff855420-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wc526\" (UID: \"9c4708ad-365b-46c1-a1ad-5945ff855420\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wc526" Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.391284 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9c4708ad-365b-46c1-a1ad-5945ff855420-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wc526\" (UID: \"9c4708ad-365b-46c1-a1ad-5945ff855420\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wc526" Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.391426 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcpv4\" (UniqueName: \"kubernetes.io/projected/9c4708ad-365b-46c1-a1ad-5945ff855420-kube-api-access-wcpv4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wc526\" (UID: \"9c4708ad-365b-46c1-a1ad-5945ff855420\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wc526" Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.398639 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c4708ad-365b-46c1-a1ad-5945ff855420-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wc526\" (UID: \"9c4708ad-365b-46c1-a1ad-5945ff855420\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wc526" Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.399943 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9c4708ad-365b-46c1-a1ad-5945ff855420-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wc526\" (UID: \"9c4708ad-365b-46c1-a1ad-5945ff855420\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wc526" Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.407151 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9c4708ad-365b-46c1-a1ad-5945ff855420-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wc526\" (UID: \"9c4708ad-365b-46c1-a1ad-5945ff855420\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wc526" Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.423194 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcpv4\" (UniqueName: \"kubernetes.io/projected/9c4708ad-365b-46c1-a1ad-5945ff855420-kube-api-access-wcpv4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-wc526\" (UID: \"9c4708ad-365b-46c1-a1ad-5945ff855420\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wc526" Oct 11 08:19:15 crc kubenswrapper[5016]: I1011 08:19:15.512282 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wc526" Oct 11 08:19:16 crc kubenswrapper[5016]: I1011 08:19:16.088346 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-wc526"] Oct 11 08:19:17 crc kubenswrapper[5016]: I1011 08:19:17.068184 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wc526" event={"ID":"9c4708ad-365b-46c1-a1ad-5945ff855420","Type":"ContainerStarted","Data":"1ebb0ad570dcf6be768765b96342d60574f5925a25ee629bb6a67b39de19d7a0"} Oct 11 08:19:17 crc kubenswrapper[5016]: I1011 08:19:17.068635 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wc526" event={"ID":"9c4708ad-365b-46c1-a1ad-5945ff855420","Type":"ContainerStarted","Data":"7fb1c1fa1ca6edd4a889385ac903f788c7051ced1f514c3b9c4a2a985540ac76"} Oct 11 08:19:17 crc kubenswrapper[5016]: I1011 08:19:17.097207 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wc526" podStartSLOduration=1.467578952 podStartE2EDuration="2.097154603s" podCreationTimestamp="2025-10-11 08:19:15 +0000 UTC" firstStartedPulling="2025-10-11 08:19:16.10552333 +0000 UTC m=+2344.005979276" lastFinishedPulling="2025-10-11 08:19:16.735098941 +0000 UTC m=+2344.635554927" observedRunningTime="2025-10-11 08:19:17.093538768 +0000 UTC m=+2344.993994734" watchObservedRunningTime="2025-10-11 08:19:17.097154603 +0000 UTC m=+2344.997610549" Oct 11 08:19:26 crc kubenswrapper[5016]: I1011 08:19:26.133814 5016 scope.go:117] "RemoveContainer" containerID="8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65" Oct 11 08:19:26 crc kubenswrapper[5016]: E1011 08:19:26.134986 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:19:26 crc kubenswrapper[5016]: I1011 08:19:26.175457 5016 generic.go:334] "Generic (PLEG): container finished" podID="9c4708ad-365b-46c1-a1ad-5945ff855420" containerID="1ebb0ad570dcf6be768765b96342d60574f5925a25ee629bb6a67b39de19d7a0" exitCode=0 Oct 11 08:19:26 crc kubenswrapper[5016]: I1011 08:19:26.175522 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wc526" event={"ID":"9c4708ad-365b-46c1-a1ad-5945ff855420","Type":"ContainerDied","Data":"1ebb0ad570dcf6be768765b96342d60574f5925a25ee629bb6a67b39de19d7a0"} Oct 11 08:19:27 crc kubenswrapper[5016]: I1011 08:19:27.659982 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wc526" Oct 11 08:19:27 crc kubenswrapper[5016]: I1011 08:19:27.757741 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c4708ad-365b-46c1-a1ad-5945ff855420-inventory\") pod \"9c4708ad-365b-46c1-a1ad-5945ff855420\" (UID: \"9c4708ad-365b-46c1-a1ad-5945ff855420\") " Oct 11 08:19:27 crc kubenswrapper[5016]: I1011 08:19:27.757960 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcpv4\" (UniqueName: \"kubernetes.io/projected/9c4708ad-365b-46c1-a1ad-5945ff855420-kube-api-access-wcpv4\") pod \"9c4708ad-365b-46c1-a1ad-5945ff855420\" (UID: \"9c4708ad-365b-46c1-a1ad-5945ff855420\") " Oct 11 08:19:27 crc kubenswrapper[5016]: I1011 08:19:27.758078 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9c4708ad-365b-46c1-a1ad-5945ff855420-ceph\") pod \"9c4708ad-365b-46c1-a1ad-5945ff855420\" (UID: \"9c4708ad-365b-46c1-a1ad-5945ff855420\") " Oct 11 08:19:27 crc kubenswrapper[5016]: I1011 08:19:27.758211 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9c4708ad-365b-46c1-a1ad-5945ff855420-ssh-key\") pod \"9c4708ad-365b-46c1-a1ad-5945ff855420\" (UID: \"9c4708ad-365b-46c1-a1ad-5945ff855420\") " Oct 11 08:19:27 crc kubenswrapper[5016]: I1011 08:19:27.783057 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c4708ad-365b-46c1-a1ad-5945ff855420-kube-api-access-wcpv4" (OuterVolumeSpecName: "kube-api-access-wcpv4") pod "9c4708ad-365b-46c1-a1ad-5945ff855420" (UID: "9c4708ad-365b-46c1-a1ad-5945ff855420"). InnerVolumeSpecName "kube-api-access-wcpv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:19:27 crc kubenswrapper[5016]: I1011 08:19:27.783262 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c4708ad-365b-46c1-a1ad-5945ff855420-ceph" (OuterVolumeSpecName: "ceph") pod "9c4708ad-365b-46c1-a1ad-5945ff855420" (UID: "9c4708ad-365b-46c1-a1ad-5945ff855420"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:19:27 crc kubenswrapper[5016]: I1011 08:19:27.833227 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c4708ad-365b-46c1-a1ad-5945ff855420-inventory" (OuterVolumeSpecName: "inventory") pod "9c4708ad-365b-46c1-a1ad-5945ff855420" (UID: "9c4708ad-365b-46c1-a1ad-5945ff855420"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:19:27 crc kubenswrapper[5016]: I1011 08:19:27.833891 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c4708ad-365b-46c1-a1ad-5945ff855420-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9c4708ad-365b-46c1-a1ad-5945ff855420" (UID: "9c4708ad-365b-46c1-a1ad-5945ff855420"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:19:27 crc kubenswrapper[5016]: I1011 08:19:27.862251 5016 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c4708ad-365b-46c1-a1ad-5945ff855420-inventory\") on node \"crc\" DevicePath \"\"" Oct 11 08:19:27 crc kubenswrapper[5016]: I1011 08:19:27.862303 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcpv4\" (UniqueName: \"kubernetes.io/projected/9c4708ad-365b-46c1-a1ad-5945ff855420-kube-api-access-wcpv4\") on node \"crc\" DevicePath \"\"" Oct 11 08:19:27 crc kubenswrapper[5016]: I1011 08:19:27.862324 5016 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9c4708ad-365b-46c1-a1ad-5945ff855420-ceph\") on node \"crc\" DevicePath \"\"" Oct 11 08:19:27 crc kubenswrapper[5016]: I1011 08:19:27.862342 5016 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9c4708ad-365b-46c1-a1ad-5945ff855420-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 11 08:19:28 crc kubenswrapper[5016]: I1011 08:19:28.198526 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wc526" event={"ID":"9c4708ad-365b-46c1-a1ad-5945ff855420","Type":"ContainerDied","Data":"7fb1c1fa1ca6edd4a889385ac903f788c7051ced1f514c3b9c4a2a985540ac76"} Oct 11 08:19:28 crc kubenswrapper[5016]: I1011 08:19:28.198610 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fb1c1fa1ca6edd4a889385ac903f788c7051ced1f514c3b9c4a2a985540ac76" Oct 11 08:19:28 crc kubenswrapper[5016]: I1011 08:19:28.198693 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-wc526" Oct 11 08:19:28 crc kubenswrapper[5016]: I1011 08:19:28.348558 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6"] Oct 11 08:19:28 crc kubenswrapper[5016]: E1011 08:19:28.348928 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c4708ad-365b-46c1-a1ad-5945ff855420" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 11 08:19:28 crc kubenswrapper[5016]: I1011 08:19:28.348945 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c4708ad-365b-46c1-a1ad-5945ff855420" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 11 08:19:28 crc kubenswrapper[5016]: I1011 08:19:28.349118 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c4708ad-365b-46c1-a1ad-5945ff855420" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 11 08:19:28 crc kubenswrapper[5016]: I1011 08:19:28.349712 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6" Oct 11 08:19:28 crc kubenswrapper[5016]: I1011 08:19:28.352379 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 11 08:19:28 crc kubenswrapper[5016]: I1011 08:19:28.352549 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 08:19:28 crc kubenswrapper[5016]: I1011 08:19:28.353251 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 08:19:28 crc kubenswrapper[5016]: I1011 08:19:28.353357 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 11 08:19:28 crc kubenswrapper[5016]: I1011 08:19:28.353963 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l8l9k" Oct 11 08:19:28 crc kubenswrapper[5016]: I1011 08:19:28.363592 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6"] Oct 11 08:19:28 crc kubenswrapper[5016]: I1011 08:19:28.489185 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f8062331-8483-42c3-a3a9-7bc28a3b2d44-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6\" (UID: \"f8062331-8483-42c3-a3a9-7bc28a3b2d44\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6" Oct 11 08:19:28 crc kubenswrapper[5016]: I1011 08:19:28.489711 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f8062331-8483-42c3-a3a9-7bc28a3b2d44-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6\" (UID: \"f8062331-8483-42c3-a3a9-7bc28a3b2d44\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6" Oct 11 08:19:28 crc kubenswrapper[5016]: I1011 08:19:28.489763 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtwh7\" (UniqueName: \"kubernetes.io/projected/f8062331-8483-42c3-a3a9-7bc28a3b2d44-kube-api-access-gtwh7\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6\" (UID: \"f8062331-8483-42c3-a3a9-7bc28a3b2d44\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6" Oct 11 08:19:28 crc kubenswrapper[5016]: I1011 08:19:28.489869 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8062331-8483-42c3-a3a9-7bc28a3b2d44-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6\" (UID: \"f8062331-8483-42c3-a3a9-7bc28a3b2d44\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6" Oct 11 08:19:28 crc kubenswrapper[5016]: I1011 08:19:28.591936 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtwh7\" (UniqueName: \"kubernetes.io/projected/f8062331-8483-42c3-a3a9-7bc28a3b2d44-kube-api-access-gtwh7\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6\" (UID: \"f8062331-8483-42c3-a3a9-7bc28a3b2d44\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6" Oct 11 08:19:28 crc kubenswrapper[5016]: I1011 08:19:28.592040 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/f8062331-8483-42c3-a3a9-7bc28a3b2d44-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6\" (UID: \"f8062331-8483-42c3-a3a9-7bc28a3b2d44\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6" Oct 11 08:19:28 crc kubenswrapper[5016]: I1011 08:19:28.592087 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f8062331-8483-42c3-a3a9-7bc28a3b2d44-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6\" (UID: \"f8062331-8483-42c3-a3a9-7bc28a3b2d44\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6" Oct 11 08:19:28 crc kubenswrapper[5016]: I1011 08:19:28.592146 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f8062331-8483-42c3-a3a9-7bc28a3b2d44-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6\" (UID: \"f8062331-8483-42c3-a3a9-7bc28a3b2d44\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6" Oct 11 08:19:28 crc kubenswrapper[5016]: I1011 08:19:28.597032 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8062331-8483-42c3-a3a9-7bc28a3b2d44-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6\" (UID: \"f8062331-8483-42c3-a3a9-7bc28a3b2d44\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6" Oct 11 08:19:28 crc kubenswrapper[5016]: I1011 08:19:28.597149 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f8062331-8483-42c3-a3a9-7bc28a3b2d44-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6\" (UID: \"f8062331-8483-42c3-a3a9-7bc28a3b2d44\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6" Oct 11 08:19:28 crc kubenswrapper[5016]: I1011 08:19:28.598415 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f8062331-8483-42c3-a3a9-7bc28a3b2d44-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6\" (UID: \"f8062331-8483-42c3-a3a9-7bc28a3b2d44\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6" Oct 11 08:19:28 crc kubenswrapper[5016]: I1011 08:19:28.616550 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtwh7\" (UniqueName: \"kubernetes.io/projected/f8062331-8483-42c3-a3a9-7bc28a3b2d44-kube-api-access-gtwh7\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6\" (UID: \"f8062331-8483-42c3-a3a9-7bc28a3b2d44\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6" Oct 11 08:19:28 crc kubenswrapper[5016]: I1011 08:19:28.671137 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6" Oct 11 08:19:29 crc kubenswrapper[5016]: I1011 08:19:29.281279 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6"] Oct 11 08:19:29 crc kubenswrapper[5016]: I1011 08:19:29.293249 5016 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 08:19:30 crc kubenswrapper[5016]: I1011 08:19:30.223060 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6" event={"ID":"f8062331-8483-42c3-a3a9-7bc28a3b2d44","Type":"ContainerStarted","Data":"e105affacf054a61c8bcb16bc86d190da638fd912a4464c988f480eddaacf50a"} Oct 11 08:19:30 crc kubenswrapper[5016]: I1011 08:19:30.223595 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6" event={"ID":"f8062331-8483-42c3-a3a9-7bc28a3b2d44","Type":"ContainerStarted","Data":"0c86e5bda8521d6267d793c773f09312f720e9e162bab690659f1787ac2620ed"} Oct 11 08:19:30 crc kubenswrapper[5016]: I1011 08:19:30.277112 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6" podStartSLOduration=1.842759605 podStartE2EDuration="2.277066852s" podCreationTimestamp="2025-10-11 08:19:28 +0000 UTC" firstStartedPulling="2025-10-11 08:19:29.292955728 +0000 UTC m=+2357.193411684" lastFinishedPulling="2025-10-11 08:19:29.727262985 +0000 UTC m=+2357.627718931" observedRunningTime="2025-10-11 08:19:30.271613497 +0000 UTC m=+2358.172069453" watchObservedRunningTime="2025-10-11 08:19:30.277066852 +0000 UTC m=+2358.177522808" Oct 11 08:19:37 crc kubenswrapper[5016]: I1011 08:19:37.133437 5016 scope.go:117] "RemoveContainer" containerID="8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65" Oct 11 08:19:37 crc kubenswrapper[5016]: E1011 08:19:37.134571 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:19:41 crc kubenswrapper[5016]: I1011 08:19:41.352970 5016 generic.go:334] "Generic (PLEG): container finished" podID="f8062331-8483-42c3-a3a9-7bc28a3b2d44" containerID="e105affacf054a61c8bcb16bc86d190da638fd912a4464c988f480eddaacf50a" exitCode=0 Oct 11 08:19:41 crc kubenswrapper[5016]: I1011 08:19:41.353088 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6" event={"ID":"f8062331-8483-42c3-a3a9-7bc28a3b2d44","Type":"ContainerDied","Data":"e105affacf054a61c8bcb16bc86d190da638fd912a4464c988f480eddaacf50a"} Oct 11 08:19:42 crc kubenswrapper[5016]: I1011 08:19:42.837728 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6" Oct 11 08:19:42 crc kubenswrapper[5016]: I1011 08:19:42.951824 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f8062331-8483-42c3-a3a9-7bc28a3b2d44-ceph\") pod \"f8062331-8483-42c3-a3a9-7bc28a3b2d44\" (UID: \"f8062331-8483-42c3-a3a9-7bc28a3b2d44\") " Oct 11 08:19:42 crc kubenswrapper[5016]: I1011 08:19:42.952021 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtwh7\" (UniqueName: \"kubernetes.io/projected/f8062331-8483-42c3-a3a9-7bc28a3b2d44-kube-api-access-gtwh7\") pod \"f8062331-8483-42c3-a3a9-7bc28a3b2d44\" (UID: \"f8062331-8483-42c3-a3a9-7bc28a3b2d44\") " Oct 11 08:19:42 crc kubenswrapper[5016]: I1011 08:19:42.952129 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f8062331-8483-42c3-a3a9-7bc28a3b2d44-ssh-key\") pod \"f8062331-8483-42c3-a3a9-7bc28a3b2d44\" (UID: \"f8062331-8483-42c3-a3a9-7bc28a3b2d44\") " Oct 11 08:19:42 crc kubenswrapper[5016]: I1011 08:19:42.952164 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8062331-8483-42c3-a3a9-7bc28a3b2d44-inventory\") pod \"f8062331-8483-42c3-a3a9-7bc28a3b2d44\" (UID: \"f8062331-8483-42c3-a3a9-7bc28a3b2d44\") " Oct 11 08:19:42 crc kubenswrapper[5016]: I1011 08:19:42.960140 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8062331-8483-42c3-a3a9-7bc28a3b2d44-kube-api-access-gtwh7" (OuterVolumeSpecName: "kube-api-access-gtwh7") pod "f8062331-8483-42c3-a3a9-7bc28a3b2d44" (UID: "f8062331-8483-42c3-a3a9-7bc28a3b2d44"). InnerVolumeSpecName "kube-api-access-gtwh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:19:42 crc kubenswrapper[5016]: I1011 08:19:42.960996 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8062331-8483-42c3-a3a9-7bc28a3b2d44-ceph" (OuterVolumeSpecName: "ceph") pod "f8062331-8483-42c3-a3a9-7bc28a3b2d44" (UID: "f8062331-8483-42c3-a3a9-7bc28a3b2d44"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:19:42 crc kubenswrapper[5016]: I1011 08:19:42.981085 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8062331-8483-42c3-a3a9-7bc28a3b2d44-inventory" (OuterVolumeSpecName: "inventory") pod "f8062331-8483-42c3-a3a9-7bc28a3b2d44" (UID: "f8062331-8483-42c3-a3a9-7bc28a3b2d44"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:19:42 crc kubenswrapper[5016]: I1011 08:19:42.998579 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8062331-8483-42c3-a3a9-7bc28a3b2d44-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f8062331-8483-42c3-a3a9-7bc28a3b2d44" (UID: "f8062331-8483-42c3-a3a9-7bc28a3b2d44"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.054136 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtwh7\" (UniqueName: \"kubernetes.io/projected/f8062331-8483-42c3-a3a9-7bc28a3b2d44-kube-api-access-gtwh7\") on node \"crc\" DevicePath \"\"" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.054165 5016 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f8062331-8483-42c3-a3a9-7bc28a3b2d44-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.054175 5016 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f8062331-8483-42c3-a3a9-7bc28a3b2d44-inventory\") on node \"crc\" DevicePath \"\"" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.054186 5016 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f8062331-8483-42c3-a3a9-7bc28a3b2d44-ceph\") on node \"crc\" DevicePath \"\"" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.380899 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6" event={"ID":"f8062331-8483-42c3-a3a9-7bc28a3b2d44","Type":"ContainerDied","Data":"0c86e5bda8521d6267d793c773f09312f720e9e162bab690659f1787ac2620ed"} Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.380983 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c86e5bda8521d6267d793c773f09312f720e9e162bab690659f1787ac2620ed" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.380994 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.519174 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv"] Oct 11 08:19:43 crc kubenswrapper[5016]: E1011 08:19:43.519672 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8062331-8483-42c3-a3a9-7bc28a3b2d44" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.519690 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8062331-8483-42c3-a3a9-7bc28a3b2d44" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.519909 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8062331-8483-42c3-a3a9-7bc28a3b2d44" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.520604 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.525022 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.525699 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.525710 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l8l9k" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.525936 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.526110 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.526194 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.526239 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.526548 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.548302 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv"] Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.664343 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b801ca03-9cd3-4ac0-9012-2116bd01f414-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.665101 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.665178 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.665232 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b801ca03-9cd3-4ac0-9012-2116bd01f414-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: 
\"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.665460 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh8zv\" (UniqueName: \"kubernetes.io/projected/b801ca03-9cd3-4ac0-9012-2116bd01f414-kube-api-access-rh8zv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.665815 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.665935 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.666383 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.666426 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.666470 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.666540 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.666600 5016 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b801ca03-9cd3-4ac0-9012-2116bd01f414-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.666730 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.768219 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.768263 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.768282 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.768309 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.768329 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b801ca03-9cd3-4ac0-9012-2116bd01f414-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.768352 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-libvirt-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.768383 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b801ca03-9cd3-4ac0-9012-2116bd01f414-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.768442 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.768470 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.768492 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b801ca03-9cd3-4ac0-9012-2116bd01f414-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.768512 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh8zv\" (UniqueName: \"kubernetes.io/projected/b801ca03-9cd3-4ac0-9012-2116bd01f414-kube-api-access-rh8zv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.768555 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.768581 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.775241 5016 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.775293 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.775864 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.776337 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b801ca03-9cd3-4ac0-9012-2116bd01f414-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.776742 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b801ca03-9cd3-4ac0-9012-2116bd01f414-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.776885 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.777302 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.780938 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc 
kubenswrapper[5016]: I1011 08:19:43.781454 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.782811 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.783469 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b801ca03-9cd3-4ac0-9012-2116bd01f414-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.787910 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.803278 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh8zv\" (UniqueName: \"kubernetes.io/projected/b801ca03-9cd3-4ac0-9012-2116bd01f414-kube-api-access-rh8zv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:43 crc kubenswrapper[5016]: I1011 08:19:43.849793 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:19:44 crc kubenswrapper[5016]: I1011 08:19:44.239720 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv"] Oct 11 08:19:44 crc kubenswrapper[5016]: I1011 08:19:44.396701 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" event={"ID":"b801ca03-9cd3-4ac0-9012-2116bd01f414","Type":"ContainerStarted","Data":"51eb01b29366dddd70dfa0ad08517a63fea74d8556b2f511e2b1a738326a9ad7"} Oct 11 08:19:45 crc kubenswrapper[5016]: I1011 08:19:45.410966 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" event={"ID":"b801ca03-9cd3-4ac0-9012-2116bd01f414","Type":"ContainerStarted","Data":"0d50afda7bccfecdb5f741a9329705f12d244df43c551603669994bfe82066d1"} Oct 11 08:19:45 crc kubenswrapper[5016]: I1011 08:19:45.447804 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" podStartSLOduration=1.976786293 podStartE2EDuration="2.447777373s" podCreationTimestamp="2025-10-11 08:19:43 +0000 UTC" firstStartedPulling="2025-10-11 08:19:44.242950041 +0000 UTC m=+2372.143405987" lastFinishedPulling="2025-10-11 08:19:44.713941121 +0000 UTC m=+2372.614397067" observedRunningTime="2025-10-11 08:19:45.438843026 +0000 UTC m=+2373.339298972" watchObservedRunningTime="2025-10-11 08:19:45.447777373 +0000 UTC m=+2373.348233319" Oct 11 08:19:50 crc kubenswrapper[5016]: I1011 08:19:50.133545 5016 scope.go:117] "RemoveContainer" containerID="8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65" Oct 11 08:19:50 crc kubenswrapper[5016]: E1011 08:19:50.134091 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:20:03 crc kubenswrapper[5016]: I1011 08:20:03.145292 5016 scope.go:117] "RemoveContainer" containerID="8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65" Oct 11 08:20:03 crc kubenswrapper[5016]: E1011 08:20:03.147003 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:20:14 crc kubenswrapper[5016]: I1011 08:20:14.134077 5016 scope.go:117] "RemoveContainer" containerID="8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65" Oct 11 08:20:14 crc kubenswrapper[5016]: E1011 08:20:14.136054 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:20:24 crc kubenswrapper[5016]: I1011 08:20:24.879445 5016 generic.go:334] "Generic (PLEG): container finished" podID="b801ca03-9cd3-4ac0-9012-2116bd01f414" containerID="0d50afda7bccfecdb5f741a9329705f12d244df43c551603669994bfe82066d1" exitCode=0 Oct 11 08:20:24 crc kubenswrapper[5016]: I1011 08:20:24.879548 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" event={"ID":"b801ca03-9cd3-4ac0-9012-2116bd01f414","Type":"ContainerDied","Data":"0d50afda7bccfecdb5f741a9329705f12d244df43c551603669994bfe82066d1"} Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.421010 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.494448 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b801ca03-9cd3-4ac0-9012-2116bd01f414-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"b801ca03-9cd3-4ac0-9012-2116bd01f414\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.494884 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-ceph\") pod \"b801ca03-9cd3-4ac0-9012-2116bd01f414\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.495081 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-bootstrap-combined-ca-bundle\") pod \"b801ca03-9cd3-4ac0-9012-2116bd01f414\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.495250 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-ovn-combined-ca-bundle\") pod \"b801ca03-9cd3-4ac0-9012-2116bd01f414\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.495489 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-neutron-metadata-combined-ca-bundle\") pod \"b801ca03-9cd3-4ac0-9012-2116bd01f414\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.495724 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b801ca03-9cd3-4ac0-9012-2116bd01f414-openstack-edpm-ipam-ovn-default-certs-0\") pod \"b801ca03-9cd3-4ac0-9012-2116bd01f414\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.495923 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-ssh-key\") pod \"b801ca03-9cd3-4ac0-9012-2116bd01f414\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " Oct 
11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.496132 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rh8zv\" (UniqueName: \"kubernetes.io/projected/b801ca03-9cd3-4ac0-9012-2116bd01f414-kube-api-access-rh8zv\") pod \"b801ca03-9cd3-4ac0-9012-2116bd01f414\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.496292 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-libvirt-combined-ca-bundle\") pod \"b801ca03-9cd3-4ac0-9012-2116bd01f414\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.496464 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-repo-setup-combined-ca-bundle\") pod \"b801ca03-9cd3-4ac0-9012-2116bd01f414\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.496638 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-inventory\") pod \"b801ca03-9cd3-4ac0-9012-2116bd01f414\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.496834 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b801ca03-9cd3-4ac0-9012-2116bd01f414-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"b801ca03-9cd3-4ac0-9012-2116bd01f414\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.496988 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-nova-combined-ca-bundle\") pod \"b801ca03-9cd3-4ac0-9012-2116bd01f414\" (UID: \"b801ca03-9cd3-4ac0-9012-2116bd01f414\") " Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.510752 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b801ca03-9cd3-4ac0-9012-2116bd01f414-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "b801ca03-9cd3-4ac0-9012-2116bd01f414" (UID: "b801ca03-9cd3-4ac0-9012-2116bd01f414"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.516275 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "b801ca03-9cd3-4ac0-9012-2116bd01f414" (UID: "b801ca03-9cd3-4ac0-9012-2116bd01f414"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.516303 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "b801ca03-9cd3-4ac0-9012-2116bd01f414" (UID: "b801ca03-9cd3-4ac0-9012-2116bd01f414"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.516358 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b801ca03-9cd3-4ac0-9012-2116bd01f414-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "b801ca03-9cd3-4ac0-9012-2116bd01f414" (UID: "b801ca03-9cd3-4ac0-9012-2116bd01f414"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.516412 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b801ca03-9cd3-4ac0-9012-2116bd01f414-kube-api-access-rh8zv" (OuterVolumeSpecName: "kube-api-access-rh8zv") pod "b801ca03-9cd3-4ac0-9012-2116bd01f414" (UID: "b801ca03-9cd3-4ac0-9012-2116bd01f414"). InnerVolumeSpecName "kube-api-access-rh8zv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.516422 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "b801ca03-9cd3-4ac0-9012-2116bd01f414" (UID: "b801ca03-9cd3-4ac0-9012-2116bd01f414"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.516481 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "b801ca03-9cd3-4ac0-9012-2116bd01f414" (UID: "b801ca03-9cd3-4ac0-9012-2116bd01f414"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.516465 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "b801ca03-9cd3-4ac0-9012-2116bd01f414" (UID: "b801ca03-9cd3-4ac0-9012-2116bd01f414"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.516555 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b801ca03-9cd3-4ac0-9012-2116bd01f414-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "b801ca03-9cd3-4ac0-9012-2116bd01f414" (UID: "b801ca03-9cd3-4ac0-9012-2116bd01f414"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.517157 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "b801ca03-9cd3-4ac0-9012-2116bd01f414" (UID: "b801ca03-9cd3-4ac0-9012-2116bd01f414"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.518870 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-ceph" (OuterVolumeSpecName: "ceph") pod "b801ca03-9cd3-4ac0-9012-2116bd01f414" (UID: "b801ca03-9cd3-4ac0-9012-2116bd01f414"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.547767 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b801ca03-9cd3-4ac0-9012-2116bd01f414" (UID: "b801ca03-9cd3-4ac0-9012-2116bd01f414"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.568786 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-inventory" (OuterVolumeSpecName: "inventory") pod "b801ca03-9cd3-4ac0-9012-2116bd01f414" (UID: "b801ca03-9cd3-4ac0-9012-2116bd01f414"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.599139 5016 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.599183 5016 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.599195 5016 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.599208 5016 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b801ca03-9cd3-4ac0-9012-2116bd01f414-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.599222 5016 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.599233 5016 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 08:20:26 crc kubenswrapper[5016]: 
I1011 08:20:26.599244 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rh8zv\" (UniqueName: \"kubernetes.io/projected/b801ca03-9cd3-4ac0-9012-2116bd01f414-kube-api-access-rh8zv\") on node \"crc\" DevicePath \"\"" Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.599255 5016 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.599265 5016 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-inventory\") on node \"crc\" DevicePath \"\"" Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.599277 5016 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b801ca03-9cd3-4ac0-9012-2116bd01f414-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.599291 5016 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.599301 5016 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b801ca03-9cd3-4ac0-9012-2116bd01f414-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.599312 5016 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b801ca03-9cd3-4ac0-9012-2116bd01f414-ceph\") on node \"crc\" DevicePath \"\"" Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.919859 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" event={"ID":"b801ca03-9cd3-4ac0-9012-2116bd01f414","Type":"ContainerDied","Data":"51eb01b29366dddd70dfa0ad08517a63fea74d8556b2f511e2b1a738326a9ad7"} Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.920330 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51eb01b29366dddd70dfa0ad08517a63fea74d8556b2f511e2b1a738326a9ad7" Oct 11 08:20:26 crc kubenswrapper[5016]: I1011 08:20:26.919985 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv" Oct 11 08:20:27 crc kubenswrapper[5016]: I1011 08:20:27.060241 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6"] Oct 11 08:20:27 crc kubenswrapper[5016]: E1011 08:20:27.061176 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b801ca03-9cd3-4ac0-9012-2116bd01f414" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 11 08:20:27 crc kubenswrapper[5016]: I1011 08:20:27.061218 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="b801ca03-9cd3-4ac0-9012-2116bd01f414" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 11 08:20:27 crc kubenswrapper[5016]: I1011 08:20:27.061607 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="b801ca03-9cd3-4ac0-9012-2116bd01f414" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 11 08:20:27 crc kubenswrapper[5016]: I1011 08:20:27.063045 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6" Oct 11 08:20:27 crc kubenswrapper[5016]: I1011 08:20:27.066684 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 08:20:27 crc kubenswrapper[5016]: I1011 08:20:27.068484 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 08:20:27 crc kubenswrapper[5016]: I1011 08:20:27.068503 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l8l9k" Oct 11 08:20:27 crc kubenswrapper[5016]: I1011 08:20:27.068617 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 11 08:20:27 crc kubenswrapper[5016]: I1011 08:20:27.068785 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 11 08:20:27 crc kubenswrapper[5016]: I1011 08:20:27.072396 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6"] Oct 11 08:20:27 crc kubenswrapper[5016]: I1011 08:20:27.109841 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/67bcbe14-936f-4ec2-bcf4-3d3cf876245d-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6\" (UID: \"67bcbe14-936f-4ec2-bcf4-3d3cf876245d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6" Oct 11 08:20:27 crc kubenswrapper[5016]: I1011 08:20:27.110161 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r649k\" (UniqueName: \"kubernetes.io/projected/67bcbe14-936f-4ec2-bcf4-3d3cf876245d-kube-api-access-r649k\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6\" (UID: \"67bcbe14-936f-4ec2-bcf4-3d3cf876245d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6" Oct 11 08:20:27 crc kubenswrapper[5016]: I1011 08:20:27.110263 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67bcbe14-936f-4ec2-bcf4-3d3cf876245d-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6\" (UID: \"67bcbe14-936f-4ec2-bcf4-3d3cf876245d\") " 
pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6" Oct 11 08:20:27 crc kubenswrapper[5016]: I1011 08:20:27.110483 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67bcbe14-936f-4ec2-bcf4-3d3cf876245d-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6\" (UID: \"67bcbe14-936f-4ec2-bcf4-3d3cf876245d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6" Oct 11 08:20:27 crc kubenswrapper[5016]: I1011 08:20:27.138279 5016 scope.go:117] "RemoveContainer" containerID="8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65" Oct 11 08:20:27 crc kubenswrapper[5016]: E1011 08:20:27.138967 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:20:27 crc kubenswrapper[5016]: I1011 08:20:27.213819 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/67bcbe14-936f-4ec2-bcf4-3d3cf876245d-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6\" (UID: \"67bcbe14-936f-4ec2-bcf4-3d3cf876245d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6" Oct 11 08:20:27 crc kubenswrapper[5016]: I1011 08:20:27.213963 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r649k\" (UniqueName: \"kubernetes.io/projected/67bcbe14-936f-4ec2-bcf4-3d3cf876245d-kube-api-access-r649k\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6\" (UID: \"67bcbe14-936f-4ec2-bcf4-3d3cf876245d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6" Oct 11 08:20:27 crc kubenswrapper[5016]: I1011 08:20:27.214020 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67bcbe14-936f-4ec2-bcf4-3d3cf876245d-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6\" (UID: \"67bcbe14-936f-4ec2-bcf4-3d3cf876245d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6" Oct 11 08:20:27 crc kubenswrapper[5016]: I1011 08:20:27.214095 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67bcbe14-936f-4ec2-bcf4-3d3cf876245d-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6\" (UID: \"67bcbe14-936f-4ec2-bcf4-3d3cf876245d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6" Oct 11 08:20:27 crc kubenswrapper[5016]: I1011 08:20:27.220810 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67bcbe14-936f-4ec2-bcf4-3d3cf876245d-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6\" (UID: \"67bcbe14-936f-4ec2-bcf4-3d3cf876245d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6" Oct 11 08:20:27 crc kubenswrapper[5016]: I1011 08:20:27.222805 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/67bcbe14-936f-4ec2-bcf4-3d3cf876245d-ceph\") pod 
\"ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6\" (UID: \"67bcbe14-936f-4ec2-bcf4-3d3cf876245d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6" Oct 11 08:20:27 crc kubenswrapper[5016]: I1011 08:20:27.223689 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67bcbe14-936f-4ec2-bcf4-3d3cf876245d-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6\" (UID: \"67bcbe14-936f-4ec2-bcf4-3d3cf876245d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6" Oct 11 08:20:27 crc kubenswrapper[5016]: I1011 08:20:27.247830 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r649k\" (UniqueName: \"kubernetes.io/projected/67bcbe14-936f-4ec2-bcf4-3d3cf876245d-kube-api-access-r649k\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6\" (UID: \"67bcbe14-936f-4ec2-bcf4-3d3cf876245d\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6" Oct 11 08:20:27 crc kubenswrapper[5016]: I1011 08:20:27.421886 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6" Oct 11 08:20:28 crc kubenswrapper[5016]: I1011 08:20:28.072947 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6"] Oct 11 08:20:28 crc kubenswrapper[5016]: I1011 08:20:28.946876 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6" event={"ID":"67bcbe14-936f-4ec2-bcf4-3d3cf876245d","Type":"ContainerStarted","Data":"f785345cafd3a0aec6d7ce0aa4c9200ec6c57a52d8067a2aa84a3c24dda68a4f"} Oct 11 08:20:28 crc kubenswrapper[5016]: I1011 08:20:28.947637 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6" event={"ID":"67bcbe14-936f-4ec2-bcf4-3d3cf876245d","Type":"ContainerStarted","Data":"7a941b8d86678383523e2a0f189a142ec7a236962fd32534c66e4ffabcea81d2"} Oct 11 08:20:28 crc kubenswrapper[5016]: I1011 08:20:28.977624 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6" podStartSLOduration=1.420281412 podStartE2EDuration="1.977601728s" podCreationTimestamp="2025-10-11 08:20:27 +0000 UTC" firstStartedPulling="2025-10-11 08:20:28.082322828 +0000 UTC m=+2415.982778774" lastFinishedPulling="2025-10-11 08:20:28.639643104 +0000 UTC m=+2416.540099090" observedRunningTime="2025-10-11 08:20:28.97313348 +0000 UTC m=+2416.873589466" watchObservedRunningTime="2025-10-11 08:20:28.977601728 +0000 UTC m=+2416.878057684" Oct 11 08:20:36 crc kubenswrapper[5016]: I1011 08:20:36.036289 5016 generic.go:334] "Generic (PLEG): container finished" podID="67bcbe14-936f-4ec2-bcf4-3d3cf876245d" containerID="f785345cafd3a0aec6d7ce0aa4c9200ec6c57a52d8067a2aa84a3c24dda68a4f" exitCode=0 Oct 11 08:20:36 crc kubenswrapper[5016]: I1011 08:20:36.036426 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6" event={"ID":"67bcbe14-936f-4ec2-bcf4-3d3cf876245d","Type":"ContainerDied","Data":"f785345cafd3a0aec6d7ce0aa4c9200ec6c57a52d8067a2aa84a3c24dda68a4f"} Oct 11 08:20:37 crc kubenswrapper[5016]: I1011 08:20:37.517620 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6" Oct 11 08:20:37 crc kubenswrapper[5016]: I1011 08:20:37.592530 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67bcbe14-936f-4ec2-bcf4-3d3cf876245d-ssh-key\") pod \"67bcbe14-936f-4ec2-bcf4-3d3cf876245d\" (UID: \"67bcbe14-936f-4ec2-bcf4-3d3cf876245d\") " Oct 11 08:20:37 crc kubenswrapper[5016]: I1011 08:20:37.593281 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67bcbe14-936f-4ec2-bcf4-3d3cf876245d-inventory\") pod \"67bcbe14-936f-4ec2-bcf4-3d3cf876245d\" (UID: \"67bcbe14-936f-4ec2-bcf4-3d3cf876245d\") " Oct 11 08:20:37 crc kubenswrapper[5016]: I1011 08:20:37.593399 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/67bcbe14-936f-4ec2-bcf4-3d3cf876245d-ceph\") pod \"67bcbe14-936f-4ec2-bcf4-3d3cf876245d\" (UID: \"67bcbe14-936f-4ec2-bcf4-3d3cf876245d\") " Oct 11 08:20:37 crc kubenswrapper[5016]: I1011 08:20:37.593488 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r649k\" (UniqueName: \"kubernetes.io/projected/67bcbe14-936f-4ec2-bcf4-3d3cf876245d-kube-api-access-r649k\") pod \"67bcbe14-936f-4ec2-bcf4-3d3cf876245d\" (UID: \"67bcbe14-936f-4ec2-bcf4-3d3cf876245d\") " Oct 11 08:20:37 crc kubenswrapper[5016]: I1011 08:20:37.604357 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67bcbe14-936f-4ec2-bcf4-3d3cf876245d-ceph" (OuterVolumeSpecName: "ceph") pod "67bcbe14-936f-4ec2-bcf4-3d3cf876245d" (UID: "67bcbe14-936f-4ec2-bcf4-3d3cf876245d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:20:37 crc kubenswrapper[5016]: I1011 08:20:37.604858 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67bcbe14-936f-4ec2-bcf4-3d3cf876245d-kube-api-access-r649k" (OuterVolumeSpecName: "kube-api-access-r649k") pod "67bcbe14-936f-4ec2-bcf4-3d3cf876245d" (UID: "67bcbe14-936f-4ec2-bcf4-3d3cf876245d"). InnerVolumeSpecName "kube-api-access-r649k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:20:37 crc kubenswrapper[5016]: I1011 08:20:37.640971 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67bcbe14-936f-4ec2-bcf4-3d3cf876245d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "67bcbe14-936f-4ec2-bcf4-3d3cf876245d" (UID: "67bcbe14-936f-4ec2-bcf4-3d3cf876245d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:20:37 crc kubenswrapper[5016]: I1011 08:20:37.643326 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67bcbe14-936f-4ec2-bcf4-3d3cf876245d-inventory" (OuterVolumeSpecName: "inventory") pod "67bcbe14-936f-4ec2-bcf4-3d3cf876245d" (UID: "67bcbe14-936f-4ec2-bcf4-3d3cf876245d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:20:37 crc kubenswrapper[5016]: I1011 08:20:37.696232 5016 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67bcbe14-936f-4ec2-bcf4-3d3cf876245d-inventory\") on node \"crc\" DevicePath \"\"" Oct 11 08:20:37 crc kubenswrapper[5016]: I1011 08:20:37.696288 5016 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/67bcbe14-936f-4ec2-bcf4-3d3cf876245d-ceph\") on node \"crc\" DevicePath \"\"" Oct 11 08:20:37 crc kubenswrapper[5016]: I1011 08:20:37.696310 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r649k\" (UniqueName: \"kubernetes.io/projected/67bcbe14-936f-4ec2-bcf4-3d3cf876245d-kube-api-access-r649k\") on node \"crc\" DevicePath \"\"" Oct 11 08:20:37 crc kubenswrapper[5016]: I1011 08:20:37.696334 5016 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67bcbe14-936f-4ec2-bcf4-3d3cf876245d-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.063410 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6" event={"ID":"67bcbe14-936f-4ec2-bcf4-3d3cf876245d","Type":"ContainerDied","Data":"7a941b8d86678383523e2a0f189a142ec7a236962fd32534c66e4ffabcea81d2"} Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.063466 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a941b8d86678383523e2a0f189a142ec7a236962fd32534c66e4ffabcea81d2" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.063525 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.172626 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld"] Oct 11 08:20:38 crc kubenswrapper[5016]: E1011 08:20:38.173492 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67bcbe14-936f-4ec2-bcf4-3d3cf876245d" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.173619 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="67bcbe14-936f-4ec2-bcf4-3d3cf876245d" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.173962 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="67bcbe14-936f-4ec2-bcf4-3d3cf876245d" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.174745 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.177995 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.178388 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.179096 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.179627 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l8l9k" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.180501 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.182198 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.189387 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld"] Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.236063 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vpxld\" (UID: \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.236337 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vpxld\" (UID: \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.236523 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vpxld\" (UID: \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.236616 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vpxld\" (UID: \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.236710 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9cp8\" (UniqueName: \"kubernetes.io/projected/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-kube-api-access-f9cp8\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vpxld\" (UID: \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 
08:20:38.236757 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vpxld\" (UID: \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.338950 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vpxld\" (UID: \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.339468 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vpxld\" (UID: \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.339617 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9cp8\" (UniqueName: \"kubernetes.io/projected/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-kube-api-access-f9cp8\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vpxld\" (UID: \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.339750 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vpxld\" (UID: \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.339846 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vpxld\" (UID: \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.339924 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vpxld\" (UID: \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.341287 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vpxld\" (UID: \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.344948 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vpxld\" (UID: \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.345098 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vpxld\" (UID: \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.349529 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vpxld\" (UID: \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.357423 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vpxld\" (UID: \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.365474 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9cp8\" (UniqueName: \"kubernetes.io/projected/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-kube-api-access-f9cp8\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vpxld\" (UID: \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld" Oct 11 08:20:38 crc kubenswrapper[5016]: I1011 08:20:38.547719 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld" Oct 11 08:20:39 crc kubenswrapper[5016]: I1011 08:20:39.184081 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld"] Oct 11 08:20:40 crc kubenswrapper[5016]: I1011 08:20:40.085685 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld" event={"ID":"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b","Type":"ContainerStarted","Data":"a7f92c905633c08c61a8aae303be61cfdb352e8e63418984d674419d0a1861e3"} Oct 11 08:20:40 crc kubenswrapper[5016]: I1011 08:20:40.086168 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld" event={"ID":"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b","Type":"ContainerStarted","Data":"c0650778eb3bcb0b92dd5e5a8da37090fec64bf579870e678fde21aeb235f5b0"} Oct 11 08:20:40 crc kubenswrapper[5016]: I1011 08:20:40.112852 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld" podStartSLOduration=1.637424426 podStartE2EDuration="2.112821772s" podCreationTimestamp="2025-10-11 08:20:38 +0000 UTC" firstStartedPulling="2025-10-11 08:20:39.19862411 +0000 UTC m=+2427.099080056" lastFinishedPulling="2025-10-11 08:20:39.674021416 +0000 UTC m=+2427.574477402" observedRunningTime="2025-10-11 08:20:40.110339456 +0000 UTC m=+2428.010795412" watchObservedRunningTime="2025-10-11 08:20:40.112821772 +0000 UTC m=+2428.013277718" Oct 11 08:20:42 crc kubenswrapper[5016]: I1011 08:20:42.134882 5016 scope.go:117] "RemoveContainer" containerID="8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65" Oct 11 08:20:42 crc kubenswrapper[5016]: E1011 08:20:42.135823 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:20:57 crc kubenswrapper[5016]: I1011 08:20:57.135108 5016 scope.go:117] "RemoveContainer" containerID="8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65" Oct 11 08:20:57 crc kubenswrapper[5016]: E1011 08:20:57.153519 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:21:10 crc kubenswrapper[5016]: I1011 08:21:10.134177 5016 scope.go:117] "RemoveContainer" containerID="8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65" Oct 11 08:21:10 crc kubenswrapper[5016]: I1011 08:21:10.481933 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"a1917a4e4704e002239e88121fbc4c5074fe869243f23c61e0f6e8ef2e29a073"} Oct 11 08:21:13 crc kubenswrapper[5016]: I1011 08:21:13.334422 5016 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-rczs8"] Oct 11 08:21:13 crc kubenswrapper[5016]: I1011 08:21:13.337753 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rczs8" Oct 11 08:21:13 crc kubenswrapper[5016]: I1011 08:21:13.353207 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rczs8"] Oct 11 08:21:13 crc kubenswrapper[5016]: I1011 08:21:13.432055 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6eeaac1-1d42-4297-b8a8-e30fe22b698e-catalog-content\") pod \"certified-operators-rczs8\" (UID: \"e6eeaac1-1d42-4297-b8a8-e30fe22b698e\") " pod="openshift-marketplace/certified-operators-rczs8" Oct 11 08:21:13 crc kubenswrapper[5016]: I1011 08:21:13.432197 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zczwb\" (UniqueName: \"kubernetes.io/projected/e6eeaac1-1d42-4297-b8a8-e30fe22b698e-kube-api-access-zczwb\") pod \"certified-operators-rczs8\" (UID: \"e6eeaac1-1d42-4297-b8a8-e30fe22b698e\") " pod="openshift-marketplace/certified-operators-rczs8" Oct 11 08:21:13 crc kubenswrapper[5016]: I1011 08:21:13.432301 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6eeaac1-1d42-4297-b8a8-e30fe22b698e-utilities\") pod \"certified-operators-rczs8\" (UID: \"e6eeaac1-1d42-4297-b8a8-e30fe22b698e\") " pod="openshift-marketplace/certified-operators-rczs8" Oct 11 08:21:13 crc kubenswrapper[5016]: I1011 08:21:13.534853 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6eeaac1-1d42-4297-b8a8-e30fe22b698e-catalog-content\") pod \"certified-operators-rczs8\" (UID: \"e6eeaac1-1d42-4297-b8a8-e30fe22b698e\") " pod="openshift-marketplace/certified-operators-rczs8" Oct 11 08:21:13 crc kubenswrapper[5016]: I1011 08:21:13.534943 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zczwb\" (UniqueName: \"kubernetes.io/projected/e6eeaac1-1d42-4297-b8a8-e30fe22b698e-kube-api-access-zczwb\") pod \"certified-operators-rczs8\" (UID: \"e6eeaac1-1d42-4297-b8a8-e30fe22b698e\") " pod="openshift-marketplace/certified-operators-rczs8" Oct 11 08:21:13 crc kubenswrapper[5016]: I1011 08:21:13.535010 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6eeaac1-1d42-4297-b8a8-e30fe22b698e-utilities\") pod \"certified-operators-rczs8\" (UID: \"e6eeaac1-1d42-4297-b8a8-e30fe22b698e\") " pod="openshift-marketplace/certified-operators-rczs8" Oct 11 08:21:13 crc kubenswrapper[5016]: I1011 08:21:13.535589 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6eeaac1-1d42-4297-b8a8-e30fe22b698e-catalog-content\") pod \"certified-operators-rczs8\" (UID: \"e6eeaac1-1d42-4297-b8a8-e30fe22b698e\") " pod="openshift-marketplace/certified-operators-rczs8" Oct 11 08:21:13 crc kubenswrapper[5016]: I1011 08:21:13.535609 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6eeaac1-1d42-4297-b8a8-e30fe22b698e-utilities\") pod \"certified-operators-rczs8\" (UID: 
\"e6eeaac1-1d42-4297-b8a8-e30fe22b698e\") " pod="openshift-marketplace/certified-operators-rczs8" Oct 11 08:21:13 crc kubenswrapper[5016]: I1011 08:21:13.571619 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zczwb\" (UniqueName: \"kubernetes.io/projected/e6eeaac1-1d42-4297-b8a8-e30fe22b698e-kube-api-access-zczwb\") pod \"certified-operators-rczs8\" (UID: \"e6eeaac1-1d42-4297-b8a8-e30fe22b698e\") " pod="openshift-marketplace/certified-operators-rczs8" Oct 11 08:21:13 crc kubenswrapper[5016]: I1011 08:21:13.688322 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rczs8" Oct 11 08:21:14 crc kubenswrapper[5016]: I1011 08:21:14.250610 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rczs8"] Oct 11 08:21:14 crc kubenswrapper[5016]: I1011 08:21:14.521076 5016 generic.go:334] "Generic (PLEG): container finished" podID="e6eeaac1-1d42-4297-b8a8-e30fe22b698e" containerID="750720590d227d0b5b763d038baaa97be7463aa530d0f70eadb3b47b03540352" exitCode=0 Oct 11 08:21:14 crc kubenswrapper[5016]: I1011 08:21:14.521325 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rczs8" event={"ID":"e6eeaac1-1d42-4297-b8a8-e30fe22b698e","Type":"ContainerDied","Data":"750720590d227d0b5b763d038baaa97be7463aa530d0f70eadb3b47b03540352"} Oct 11 08:21:14 crc kubenswrapper[5016]: I1011 08:21:14.521498 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rczs8" event={"ID":"e6eeaac1-1d42-4297-b8a8-e30fe22b698e","Type":"ContainerStarted","Data":"d3fa110bcf316ce5ec8ba86c206035ab52fd409e56aa2d50c96e24525b422b6a"} Oct 11 08:21:15 crc kubenswrapper[5016]: I1011 08:21:15.535958 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rczs8" event={"ID":"e6eeaac1-1d42-4297-b8a8-e30fe22b698e","Type":"ContainerStarted","Data":"eae21a4fb06c862df76d3b05b18453f89b1991b9b478af441166332adb83185a"} Oct 11 08:21:15 crc kubenswrapper[5016]: I1011 08:21:15.725847 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vqcq8"] Oct 11 08:21:15 crc kubenswrapper[5016]: I1011 08:21:15.730809 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vqcq8" Oct 11 08:21:15 crc kubenswrapper[5016]: I1011 08:21:15.744094 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vqcq8"] Oct 11 08:21:15 crc kubenswrapper[5016]: I1011 08:21:15.890016 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6053a0a1-2e89-4f38-93d7-d270a290d2d4-catalog-content\") pod \"redhat-marketplace-vqcq8\" (UID: \"6053a0a1-2e89-4f38-93d7-d270a290d2d4\") " pod="openshift-marketplace/redhat-marketplace-vqcq8" Oct 11 08:21:15 crc kubenswrapper[5016]: I1011 08:21:15.890079 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkmr9\" (UniqueName: \"kubernetes.io/projected/6053a0a1-2e89-4f38-93d7-d270a290d2d4-kube-api-access-bkmr9\") pod \"redhat-marketplace-vqcq8\" (UID: \"6053a0a1-2e89-4f38-93d7-d270a290d2d4\") " pod="openshift-marketplace/redhat-marketplace-vqcq8" Oct 11 08:21:15 crc kubenswrapper[5016]: I1011 08:21:15.890202 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6053a0a1-2e89-4f38-93d7-d270a290d2d4-utilities\") pod \"redhat-marketplace-vqcq8\" (UID: \"6053a0a1-2e89-4f38-93d7-d270a290d2d4\") " pod="openshift-marketplace/redhat-marketplace-vqcq8" Oct 11 08:21:15 crc kubenswrapper[5016]: I1011 08:21:15.992849 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6053a0a1-2e89-4f38-93d7-d270a290d2d4-catalog-content\") pod \"redhat-marketplace-vqcq8\" (UID: \"6053a0a1-2e89-4f38-93d7-d270a290d2d4\") " pod="openshift-marketplace/redhat-marketplace-vqcq8" Oct 11 08:21:15 crc kubenswrapper[5016]: I1011 08:21:15.992951 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkmr9\" (UniqueName: \"kubernetes.io/projected/6053a0a1-2e89-4f38-93d7-d270a290d2d4-kube-api-access-bkmr9\") pod \"redhat-marketplace-vqcq8\" (UID: \"6053a0a1-2e89-4f38-93d7-d270a290d2d4\") " pod="openshift-marketplace/redhat-marketplace-vqcq8" Oct 11 08:21:15 crc kubenswrapper[5016]: I1011 08:21:15.993000 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6053a0a1-2e89-4f38-93d7-d270a290d2d4-utilities\") pod \"redhat-marketplace-vqcq8\" (UID: \"6053a0a1-2e89-4f38-93d7-d270a290d2d4\") " pod="openshift-marketplace/redhat-marketplace-vqcq8" Oct 11 08:21:15 crc kubenswrapper[5016]: I1011 08:21:15.994140 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6053a0a1-2e89-4f38-93d7-d270a290d2d4-utilities\") pod \"redhat-marketplace-vqcq8\" (UID: \"6053a0a1-2e89-4f38-93d7-d270a290d2d4\") " pod="openshift-marketplace/redhat-marketplace-vqcq8" Oct 11 08:21:15 crc kubenswrapper[5016]: I1011 08:21:15.994692 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6053a0a1-2e89-4f38-93d7-d270a290d2d4-catalog-content\") pod \"redhat-marketplace-vqcq8\" (UID: \"6053a0a1-2e89-4f38-93d7-d270a290d2d4\") " pod="openshift-marketplace/redhat-marketplace-vqcq8" Oct 11 08:21:16 crc kubenswrapper[5016]: I1011 08:21:16.020378 5016 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bkmr9\" (UniqueName: \"kubernetes.io/projected/6053a0a1-2e89-4f38-93d7-d270a290d2d4-kube-api-access-bkmr9\") pod \"redhat-marketplace-vqcq8\" (UID: \"6053a0a1-2e89-4f38-93d7-d270a290d2d4\") " pod="openshift-marketplace/redhat-marketplace-vqcq8" Oct 11 08:21:16 crc kubenswrapper[5016]: I1011 08:21:16.098037 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vqcq8" Oct 11 08:21:16 crc kubenswrapper[5016]: I1011 08:21:16.547815 5016 generic.go:334] "Generic (PLEG): container finished" podID="e6eeaac1-1d42-4297-b8a8-e30fe22b698e" containerID="eae21a4fb06c862df76d3b05b18453f89b1991b9b478af441166332adb83185a" exitCode=0 Oct 11 08:21:16 crc kubenswrapper[5016]: I1011 08:21:16.547967 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rczs8" event={"ID":"e6eeaac1-1d42-4297-b8a8-e30fe22b698e","Type":"ContainerDied","Data":"eae21a4fb06c862df76d3b05b18453f89b1991b9b478af441166332adb83185a"} Oct 11 08:21:16 crc kubenswrapper[5016]: W1011 08:21:16.616876 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6053a0a1_2e89_4f38_93d7_d270a290d2d4.slice/crio-09fa51c730d611179cc89f8b40a8d6404413d9d86150a3238b23b95014b109f5 WatchSource:0}: Error finding container 09fa51c730d611179cc89f8b40a8d6404413d9d86150a3238b23b95014b109f5: Status 404 returned error can't find the container with id 09fa51c730d611179cc89f8b40a8d6404413d9d86150a3238b23b95014b109f5 Oct 11 08:21:16 crc kubenswrapper[5016]: I1011 08:21:16.617861 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vqcq8"] Oct 11 08:21:17 crc kubenswrapper[5016]: I1011 08:21:17.563685 5016 generic.go:334] "Generic (PLEG): container finished" podID="6053a0a1-2e89-4f38-93d7-d270a290d2d4" containerID="9abe35c0a7baf822e4844c3475ed1cdd3dbf0424733074dcfb6e324186de87f6" exitCode=0 Oct 11 08:21:17 crc kubenswrapper[5016]: I1011 08:21:17.563790 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vqcq8" event={"ID":"6053a0a1-2e89-4f38-93d7-d270a290d2d4","Type":"ContainerDied","Data":"9abe35c0a7baf822e4844c3475ed1cdd3dbf0424733074dcfb6e324186de87f6"} Oct 11 08:21:17 crc kubenswrapper[5016]: I1011 08:21:17.564207 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vqcq8" event={"ID":"6053a0a1-2e89-4f38-93d7-d270a290d2d4","Type":"ContainerStarted","Data":"09fa51c730d611179cc89f8b40a8d6404413d9d86150a3238b23b95014b109f5"} Oct 11 08:21:17 crc kubenswrapper[5016]: I1011 08:21:17.569369 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rczs8" event={"ID":"e6eeaac1-1d42-4297-b8a8-e30fe22b698e","Type":"ContainerStarted","Data":"d0329e7143b14a2acfb51bd9a2f6a6bfdee752f1fab3efecc5013d34cc293dcd"} Oct 11 08:21:17 crc kubenswrapper[5016]: I1011 08:21:17.630176 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rczs8" podStartSLOduration=2.116574764 podStartE2EDuration="4.63014679s" podCreationTimestamp="2025-10-11 08:21:13 +0000 UTC" firstStartedPulling="2025-10-11 08:21:14.523561092 +0000 UTC m=+2462.424017038" lastFinishedPulling="2025-10-11 08:21:17.037133118 +0000 UTC m=+2464.937589064" observedRunningTime="2025-10-11 08:21:17.620247108 +0000 UTC m=+2465.520703074" 
watchObservedRunningTime="2025-10-11 08:21:17.63014679 +0000 UTC m=+2465.530602746" Oct 11 08:21:18 crc kubenswrapper[5016]: I1011 08:21:18.581521 5016 generic.go:334] "Generic (PLEG): container finished" podID="6053a0a1-2e89-4f38-93d7-d270a290d2d4" containerID="10e1b3d28269141ae7cc86415cc6aa62414c3d7b0cee50fe6cedd9b54b5a24f5" exitCode=0 Oct 11 08:21:18 crc kubenswrapper[5016]: I1011 08:21:18.581587 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vqcq8" event={"ID":"6053a0a1-2e89-4f38-93d7-d270a290d2d4","Type":"ContainerDied","Data":"10e1b3d28269141ae7cc86415cc6aa62414c3d7b0cee50fe6cedd9b54b5a24f5"} Oct 11 08:21:19 crc kubenswrapper[5016]: I1011 08:21:19.593361 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vqcq8" event={"ID":"6053a0a1-2e89-4f38-93d7-d270a290d2d4","Type":"ContainerStarted","Data":"c12cf278fca6fd219cb34fb1c2e6a61c3fd103dbc75feda349d8780f40578945"} Oct 11 08:21:19 crc kubenswrapper[5016]: I1011 08:21:19.653227 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vqcq8" podStartSLOduration=3.217199013 podStartE2EDuration="4.65319129s" podCreationTimestamp="2025-10-11 08:21:15 +0000 UTC" firstStartedPulling="2025-10-11 08:21:17.568143677 +0000 UTC m=+2465.468599633" lastFinishedPulling="2025-10-11 08:21:19.004135954 +0000 UTC m=+2466.904591910" observedRunningTime="2025-10-11 08:21:19.642263492 +0000 UTC m=+2467.542719448" watchObservedRunningTime="2025-10-11 08:21:19.65319129 +0000 UTC m=+2467.553647246" Oct 11 08:21:23 crc kubenswrapper[5016]: I1011 08:21:23.689206 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rczs8" Oct 11 08:21:23 crc kubenswrapper[5016]: I1011 08:21:23.690111 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rczs8" Oct 11 08:21:23 crc kubenswrapper[5016]: I1011 08:21:23.768950 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rczs8" Oct 11 08:21:24 crc kubenswrapper[5016]: I1011 08:21:24.758690 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rczs8" Oct 11 08:21:24 crc kubenswrapper[5016]: I1011 08:21:24.822589 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rczs8"] Oct 11 08:21:26 crc kubenswrapper[5016]: I1011 08:21:26.098189 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vqcq8" Oct 11 08:21:26 crc kubenswrapper[5016]: I1011 08:21:26.098248 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vqcq8" Oct 11 08:21:26 crc kubenswrapper[5016]: I1011 08:21:26.160454 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vqcq8" Oct 11 08:21:26 crc kubenswrapper[5016]: I1011 08:21:26.720879 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rczs8" podUID="e6eeaac1-1d42-4297-b8a8-e30fe22b698e" containerName="registry-server" containerID="cri-o://d0329e7143b14a2acfb51bd9a2f6a6bfdee752f1fab3efecc5013d34cc293dcd" gracePeriod=2 Oct 11 08:21:26 crc kubenswrapper[5016]: I1011 08:21:26.802823 5016 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vqcq8" Oct 11 08:21:27 crc kubenswrapper[5016]: I1011 08:21:27.295270 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rczs8" Oct 11 08:21:27 crc kubenswrapper[5016]: I1011 08:21:27.394445 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zczwb\" (UniqueName: \"kubernetes.io/projected/e6eeaac1-1d42-4297-b8a8-e30fe22b698e-kube-api-access-zczwb\") pod \"e6eeaac1-1d42-4297-b8a8-e30fe22b698e\" (UID: \"e6eeaac1-1d42-4297-b8a8-e30fe22b698e\") " Oct 11 08:21:27 crc kubenswrapper[5016]: I1011 08:21:27.394559 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6eeaac1-1d42-4297-b8a8-e30fe22b698e-utilities\") pod \"e6eeaac1-1d42-4297-b8a8-e30fe22b698e\" (UID: \"e6eeaac1-1d42-4297-b8a8-e30fe22b698e\") " Oct 11 08:21:27 crc kubenswrapper[5016]: I1011 08:21:27.394761 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6eeaac1-1d42-4297-b8a8-e30fe22b698e-catalog-content\") pod \"e6eeaac1-1d42-4297-b8a8-e30fe22b698e\" (UID: \"e6eeaac1-1d42-4297-b8a8-e30fe22b698e\") " Oct 11 08:21:27 crc kubenswrapper[5016]: I1011 08:21:27.395779 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6eeaac1-1d42-4297-b8a8-e30fe22b698e-utilities" (OuterVolumeSpecName: "utilities") pod "e6eeaac1-1d42-4297-b8a8-e30fe22b698e" (UID: "e6eeaac1-1d42-4297-b8a8-e30fe22b698e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:21:27 crc kubenswrapper[5016]: I1011 08:21:27.404917 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6eeaac1-1d42-4297-b8a8-e30fe22b698e-kube-api-access-zczwb" (OuterVolumeSpecName: "kube-api-access-zczwb") pod "e6eeaac1-1d42-4297-b8a8-e30fe22b698e" (UID: "e6eeaac1-1d42-4297-b8a8-e30fe22b698e"). InnerVolumeSpecName "kube-api-access-zczwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:21:27 crc kubenswrapper[5016]: I1011 08:21:27.448164 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6eeaac1-1d42-4297-b8a8-e30fe22b698e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6eeaac1-1d42-4297-b8a8-e30fe22b698e" (UID: "e6eeaac1-1d42-4297-b8a8-e30fe22b698e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:21:27 crc kubenswrapper[5016]: I1011 08:21:27.497171 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6eeaac1-1d42-4297-b8a8-e30fe22b698e-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 08:21:27 crc kubenswrapper[5016]: I1011 08:21:27.497213 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zczwb\" (UniqueName: \"kubernetes.io/projected/e6eeaac1-1d42-4297-b8a8-e30fe22b698e-kube-api-access-zczwb\") on node \"crc\" DevicePath \"\"" Oct 11 08:21:27 crc kubenswrapper[5016]: I1011 08:21:27.497230 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6eeaac1-1d42-4297-b8a8-e30fe22b698e-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 08:21:27 crc kubenswrapper[5016]: I1011 08:21:27.502093 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vqcq8"] Oct 11 08:21:27 crc kubenswrapper[5016]: I1011 08:21:27.736914 5016 generic.go:334] "Generic (PLEG): container finished" podID="e6eeaac1-1d42-4297-b8a8-e30fe22b698e" containerID="d0329e7143b14a2acfb51bd9a2f6a6bfdee752f1fab3efecc5013d34cc293dcd" exitCode=0 Oct 11 08:21:27 crc kubenswrapper[5016]: I1011 08:21:27.737340 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rczs8" Oct 11 08:21:27 crc kubenswrapper[5016]: I1011 08:21:27.737353 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rczs8" event={"ID":"e6eeaac1-1d42-4297-b8a8-e30fe22b698e","Type":"ContainerDied","Data":"d0329e7143b14a2acfb51bd9a2f6a6bfdee752f1fab3efecc5013d34cc293dcd"} Oct 11 08:21:27 crc kubenswrapper[5016]: I1011 08:21:27.737445 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rczs8" event={"ID":"e6eeaac1-1d42-4297-b8a8-e30fe22b698e","Type":"ContainerDied","Data":"d3fa110bcf316ce5ec8ba86c206035ab52fd409e56aa2d50c96e24525b422b6a"} Oct 11 08:21:27 crc kubenswrapper[5016]: I1011 08:21:27.737481 5016 scope.go:117] "RemoveContainer" containerID="d0329e7143b14a2acfb51bd9a2f6a6bfdee752f1fab3efecc5013d34cc293dcd" Oct 11 08:21:27 crc kubenswrapper[5016]: I1011 08:21:27.780219 5016 scope.go:117] "RemoveContainer" containerID="eae21a4fb06c862df76d3b05b18453f89b1991b9b478af441166332adb83185a" Oct 11 08:21:27 crc kubenswrapper[5016]: I1011 08:21:27.785893 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rczs8"] Oct 11 08:21:27 crc kubenswrapper[5016]: I1011 08:21:27.792130 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rczs8"] Oct 11 08:21:27 crc kubenswrapper[5016]: I1011 08:21:27.810158 5016 scope.go:117] "RemoveContainer" containerID="750720590d227d0b5b763d038baaa97be7463aa530d0f70eadb3b47b03540352" Oct 11 08:21:27 crc kubenswrapper[5016]: I1011 08:21:27.862183 5016 scope.go:117] "RemoveContainer" containerID="d0329e7143b14a2acfb51bd9a2f6a6bfdee752f1fab3efecc5013d34cc293dcd" Oct 11 08:21:27 crc kubenswrapper[5016]: E1011 08:21:27.871940 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0329e7143b14a2acfb51bd9a2f6a6bfdee752f1fab3efecc5013d34cc293dcd\": container with ID starting with d0329e7143b14a2acfb51bd9a2f6a6bfdee752f1fab3efecc5013d34cc293dcd 
not found: ID does not exist" containerID="d0329e7143b14a2acfb51bd9a2f6a6bfdee752f1fab3efecc5013d34cc293dcd" Oct 11 08:21:27 crc kubenswrapper[5016]: I1011 08:21:27.872063 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0329e7143b14a2acfb51bd9a2f6a6bfdee752f1fab3efecc5013d34cc293dcd"} err="failed to get container status \"d0329e7143b14a2acfb51bd9a2f6a6bfdee752f1fab3efecc5013d34cc293dcd\": rpc error: code = NotFound desc = could not find container \"d0329e7143b14a2acfb51bd9a2f6a6bfdee752f1fab3efecc5013d34cc293dcd\": container with ID starting with d0329e7143b14a2acfb51bd9a2f6a6bfdee752f1fab3efecc5013d34cc293dcd not found: ID does not exist" Oct 11 08:21:27 crc kubenswrapper[5016]: I1011 08:21:27.872145 5016 scope.go:117] "RemoveContainer" containerID="eae21a4fb06c862df76d3b05b18453f89b1991b9b478af441166332adb83185a" Oct 11 08:21:27 crc kubenswrapper[5016]: E1011 08:21:27.872972 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eae21a4fb06c862df76d3b05b18453f89b1991b9b478af441166332adb83185a\": container with ID starting with eae21a4fb06c862df76d3b05b18453f89b1991b9b478af441166332adb83185a not found: ID does not exist" containerID="eae21a4fb06c862df76d3b05b18453f89b1991b9b478af441166332adb83185a" Oct 11 08:21:27 crc kubenswrapper[5016]: I1011 08:21:27.873065 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eae21a4fb06c862df76d3b05b18453f89b1991b9b478af441166332adb83185a"} err="failed to get container status \"eae21a4fb06c862df76d3b05b18453f89b1991b9b478af441166332adb83185a\": rpc error: code = NotFound desc = could not find container \"eae21a4fb06c862df76d3b05b18453f89b1991b9b478af441166332adb83185a\": container with ID starting with eae21a4fb06c862df76d3b05b18453f89b1991b9b478af441166332adb83185a not found: ID does not exist" Oct 11 08:21:27 crc kubenswrapper[5016]: I1011 08:21:27.873299 5016 scope.go:117] "RemoveContainer" containerID="750720590d227d0b5b763d038baaa97be7463aa530d0f70eadb3b47b03540352" Oct 11 08:21:27 crc kubenswrapper[5016]: E1011 08:21:27.873856 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"750720590d227d0b5b763d038baaa97be7463aa530d0f70eadb3b47b03540352\": container with ID starting with 750720590d227d0b5b763d038baaa97be7463aa530d0f70eadb3b47b03540352 not found: ID does not exist" containerID="750720590d227d0b5b763d038baaa97be7463aa530d0f70eadb3b47b03540352" Oct 11 08:21:27 crc kubenswrapper[5016]: I1011 08:21:27.873945 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"750720590d227d0b5b763d038baaa97be7463aa530d0f70eadb3b47b03540352"} err="failed to get container status \"750720590d227d0b5b763d038baaa97be7463aa530d0f70eadb3b47b03540352\": rpc error: code = NotFound desc = could not find container \"750720590d227d0b5b763d038baaa97be7463aa530d0f70eadb3b47b03540352\": container with ID starting with 750720590d227d0b5b763d038baaa97be7463aa530d0f70eadb3b47b03540352 not found: ID does not exist" Oct 11 08:21:28 crc kubenswrapper[5016]: I1011 08:21:28.747813 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vqcq8" podUID="6053a0a1-2e89-4f38-93d7-d270a290d2d4" containerName="registry-server" containerID="cri-o://c12cf278fca6fd219cb34fb1c2e6a61c3fd103dbc75feda349d8780f40578945" gracePeriod=2 Oct 11 
Oct 11 08:21:29 crc kubenswrapper[5016]: I1011 08:21:29.151871 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6eeaac1-1d42-4297-b8a8-e30fe22b698e" path="/var/lib/kubelet/pods/e6eeaac1-1d42-4297-b8a8-e30fe22b698e/volumes"
Oct 11 08:21:29 crc kubenswrapper[5016]: I1011 08:21:29.323199 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vqcq8"
Oct 11 08:21:29 crc kubenswrapper[5016]: I1011 08:21:29.491171 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6053a0a1-2e89-4f38-93d7-d270a290d2d4-utilities\") pod \"6053a0a1-2e89-4f38-93d7-d270a290d2d4\" (UID: \"6053a0a1-2e89-4f38-93d7-d270a290d2d4\") "
Oct 11 08:21:29 crc kubenswrapper[5016]: I1011 08:21:29.491552 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6053a0a1-2e89-4f38-93d7-d270a290d2d4-catalog-content\") pod \"6053a0a1-2e89-4f38-93d7-d270a290d2d4\" (UID: \"6053a0a1-2e89-4f38-93d7-d270a290d2d4\") "
Oct 11 08:21:29 crc kubenswrapper[5016]: I1011 08:21:29.491832 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkmr9\" (UniqueName: \"kubernetes.io/projected/6053a0a1-2e89-4f38-93d7-d270a290d2d4-kube-api-access-bkmr9\") pod \"6053a0a1-2e89-4f38-93d7-d270a290d2d4\" (UID: \"6053a0a1-2e89-4f38-93d7-d270a290d2d4\") "
Oct 11 08:21:29 crc kubenswrapper[5016]: I1011 08:21:29.492811 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6053a0a1-2e89-4f38-93d7-d270a290d2d4-utilities" (OuterVolumeSpecName: "utilities") pod "6053a0a1-2e89-4f38-93d7-d270a290d2d4" (UID: "6053a0a1-2e89-4f38-93d7-d270a290d2d4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 08:21:29 crc kubenswrapper[5016]: I1011 08:21:29.500253 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6053a0a1-2e89-4f38-93d7-d270a290d2d4-kube-api-access-bkmr9" (OuterVolumeSpecName: "kube-api-access-bkmr9") pod "6053a0a1-2e89-4f38-93d7-d270a290d2d4" (UID: "6053a0a1-2e89-4f38-93d7-d270a290d2d4"). InnerVolumeSpecName "kube-api-access-bkmr9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 08:21:29 crc kubenswrapper[5016]: I1011 08:21:29.511327 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6053a0a1-2e89-4f38-93d7-d270a290d2d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6053a0a1-2e89-4f38-93d7-d270a290d2d4" (UID: "6053a0a1-2e89-4f38-93d7-d270a290d2d4"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:21:29 crc kubenswrapper[5016]: I1011 08:21:29.594545 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6053a0a1-2e89-4f38-93d7-d270a290d2d4-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 08:21:29 crc kubenswrapper[5016]: I1011 08:21:29.594948 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6053a0a1-2e89-4f38-93d7-d270a290d2d4-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 08:21:29 crc kubenswrapper[5016]: I1011 08:21:29.595084 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkmr9\" (UniqueName: \"kubernetes.io/projected/6053a0a1-2e89-4f38-93d7-d270a290d2d4-kube-api-access-bkmr9\") on node \"crc\" DevicePath \"\"" Oct 11 08:21:29 crc kubenswrapper[5016]: I1011 08:21:29.761888 5016 generic.go:334] "Generic (PLEG): container finished" podID="6053a0a1-2e89-4f38-93d7-d270a290d2d4" containerID="c12cf278fca6fd219cb34fb1c2e6a61c3fd103dbc75feda349d8780f40578945" exitCode=0 Oct 11 08:21:29 crc kubenswrapper[5016]: I1011 08:21:29.761961 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vqcq8" event={"ID":"6053a0a1-2e89-4f38-93d7-d270a290d2d4","Type":"ContainerDied","Data":"c12cf278fca6fd219cb34fb1c2e6a61c3fd103dbc75feda349d8780f40578945"} Oct 11 08:21:29 crc kubenswrapper[5016]: I1011 08:21:29.762003 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vqcq8" event={"ID":"6053a0a1-2e89-4f38-93d7-d270a290d2d4","Type":"ContainerDied","Data":"09fa51c730d611179cc89f8b40a8d6404413d9d86150a3238b23b95014b109f5"} Oct 11 08:21:29 crc kubenswrapper[5016]: I1011 08:21:29.762030 5016 scope.go:117] "RemoveContainer" containerID="c12cf278fca6fd219cb34fb1c2e6a61c3fd103dbc75feda349d8780f40578945" Oct 11 08:21:29 crc kubenswrapper[5016]: I1011 08:21:29.762222 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vqcq8" Oct 11 08:21:29 crc kubenswrapper[5016]: I1011 08:21:29.803009 5016 scope.go:117] "RemoveContainer" containerID="10e1b3d28269141ae7cc86415cc6aa62414c3d7b0cee50fe6cedd9b54b5a24f5" Oct 11 08:21:29 crc kubenswrapper[5016]: I1011 08:21:29.808099 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vqcq8"] Oct 11 08:21:29 crc kubenswrapper[5016]: I1011 08:21:29.819680 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vqcq8"] Oct 11 08:21:29 crc kubenswrapper[5016]: I1011 08:21:29.826343 5016 scope.go:117] "RemoveContainer" containerID="9abe35c0a7baf822e4844c3475ed1cdd3dbf0424733074dcfb6e324186de87f6" Oct 11 08:21:29 crc kubenswrapper[5016]: I1011 08:21:29.867690 5016 scope.go:117] "RemoveContainer" containerID="c12cf278fca6fd219cb34fb1c2e6a61c3fd103dbc75feda349d8780f40578945" Oct 11 08:21:29 crc kubenswrapper[5016]: E1011 08:21:29.868192 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c12cf278fca6fd219cb34fb1c2e6a61c3fd103dbc75feda349d8780f40578945\": container with ID starting with c12cf278fca6fd219cb34fb1c2e6a61c3fd103dbc75feda349d8780f40578945 not found: ID does not exist" containerID="c12cf278fca6fd219cb34fb1c2e6a61c3fd103dbc75feda349d8780f40578945" Oct 11 08:21:29 crc kubenswrapper[5016]: I1011 08:21:29.868274 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c12cf278fca6fd219cb34fb1c2e6a61c3fd103dbc75feda349d8780f40578945"} err="failed to get container status \"c12cf278fca6fd219cb34fb1c2e6a61c3fd103dbc75feda349d8780f40578945\": rpc error: code = NotFound desc = could not find container \"c12cf278fca6fd219cb34fb1c2e6a61c3fd103dbc75feda349d8780f40578945\": container with ID starting with c12cf278fca6fd219cb34fb1c2e6a61c3fd103dbc75feda349d8780f40578945 not found: ID does not exist" Oct 11 08:21:29 crc kubenswrapper[5016]: I1011 08:21:29.868312 5016 scope.go:117] "RemoveContainer" containerID="10e1b3d28269141ae7cc86415cc6aa62414c3d7b0cee50fe6cedd9b54b5a24f5" Oct 11 08:21:29 crc kubenswrapper[5016]: E1011 08:21:29.868587 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10e1b3d28269141ae7cc86415cc6aa62414c3d7b0cee50fe6cedd9b54b5a24f5\": container with ID starting with 10e1b3d28269141ae7cc86415cc6aa62414c3d7b0cee50fe6cedd9b54b5a24f5 not found: ID does not exist" containerID="10e1b3d28269141ae7cc86415cc6aa62414c3d7b0cee50fe6cedd9b54b5a24f5" Oct 11 08:21:29 crc kubenswrapper[5016]: I1011 08:21:29.868610 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10e1b3d28269141ae7cc86415cc6aa62414c3d7b0cee50fe6cedd9b54b5a24f5"} err="failed to get container status \"10e1b3d28269141ae7cc86415cc6aa62414c3d7b0cee50fe6cedd9b54b5a24f5\": rpc error: code = NotFound desc = could not find container \"10e1b3d28269141ae7cc86415cc6aa62414c3d7b0cee50fe6cedd9b54b5a24f5\": container with ID starting with 10e1b3d28269141ae7cc86415cc6aa62414c3d7b0cee50fe6cedd9b54b5a24f5 not found: ID does not exist" Oct 11 08:21:29 crc kubenswrapper[5016]: I1011 08:21:29.868626 5016 scope.go:117] "RemoveContainer" containerID="9abe35c0a7baf822e4844c3475ed1cdd3dbf0424733074dcfb6e324186de87f6" Oct 11 08:21:29 crc kubenswrapper[5016]: E1011 08:21:29.869043 5016 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9abe35c0a7baf822e4844c3475ed1cdd3dbf0424733074dcfb6e324186de87f6\": container with ID starting with 9abe35c0a7baf822e4844c3475ed1cdd3dbf0424733074dcfb6e324186de87f6 not found: ID does not exist" containerID="9abe35c0a7baf822e4844c3475ed1cdd3dbf0424733074dcfb6e324186de87f6" Oct 11 08:21:29 crc kubenswrapper[5016]: I1011 08:21:29.869071 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9abe35c0a7baf822e4844c3475ed1cdd3dbf0424733074dcfb6e324186de87f6"} err="failed to get container status \"9abe35c0a7baf822e4844c3475ed1cdd3dbf0424733074dcfb6e324186de87f6\": rpc error: code = NotFound desc = could not find container \"9abe35c0a7baf822e4844c3475ed1cdd3dbf0424733074dcfb6e324186de87f6\": container with ID starting with 9abe35c0a7baf822e4844c3475ed1cdd3dbf0424733074dcfb6e324186de87f6 not found: ID does not exist" Oct 11 08:21:31 crc kubenswrapper[5016]: I1011 08:21:31.153012 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6053a0a1-2e89-4f38-93d7-d270a290d2d4" path="/var/lib/kubelet/pods/6053a0a1-2e89-4f38-93d7-d270a290d2d4/volumes" Oct 11 08:22:02 crc kubenswrapper[5016]: I1011 08:22:02.092311 5016 generic.go:334] "Generic (PLEG): container finished" podID="5f8096c1-6a47-4cd2-828a-4d091b6c7f5b" containerID="a7f92c905633c08c61a8aae303be61cfdb352e8e63418984d674419d0a1861e3" exitCode=0 Oct 11 08:22:02 crc kubenswrapper[5016]: I1011 08:22:02.092427 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld" event={"ID":"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b","Type":"ContainerDied","Data":"a7f92c905633c08c61a8aae303be61cfdb352e8e63418984d674419d0a1861e3"} Oct 11 08:22:03 crc kubenswrapper[5016]: I1011 08:22:03.632533 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld" Oct 11 08:22:03 crc kubenswrapper[5016]: I1011 08:22:03.746549 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-ovncontroller-config-0\") pod \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\" (UID: \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\") " Oct 11 08:22:03 crc kubenswrapper[5016]: I1011 08:22:03.746707 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9cp8\" (UniqueName: \"kubernetes.io/projected/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-kube-api-access-f9cp8\") pod \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\" (UID: \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\") " Oct 11 08:22:03 crc kubenswrapper[5016]: I1011 08:22:03.746762 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-ovn-combined-ca-bundle\") pod \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\" (UID: \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\") " Oct 11 08:22:03 crc kubenswrapper[5016]: I1011 08:22:03.746810 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-inventory\") pod \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\" (UID: \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\") " Oct 11 08:22:03 crc kubenswrapper[5016]: I1011 08:22:03.746907 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-ssh-key\") pod \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\" (UID: \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\") " Oct 11 08:22:03 crc kubenswrapper[5016]: I1011 08:22:03.746924 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-ceph\") pod \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\" (UID: \"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b\") " Oct 11 08:22:03 crc kubenswrapper[5016]: I1011 08:22:03.753025 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-ceph" (OuterVolumeSpecName: "ceph") pod "5f8096c1-6a47-4cd2-828a-4d091b6c7f5b" (UID: "5f8096c1-6a47-4cd2-828a-4d091b6c7f5b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:22:03 crc kubenswrapper[5016]: I1011 08:22:03.757192 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-kube-api-access-f9cp8" (OuterVolumeSpecName: "kube-api-access-f9cp8") pod "5f8096c1-6a47-4cd2-828a-4d091b6c7f5b" (UID: "5f8096c1-6a47-4cd2-828a-4d091b6c7f5b"). InnerVolumeSpecName "kube-api-access-f9cp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:22:03 crc kubenswrapper[5016]: I1011 08:22:03.755131 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "5f8096c1-6a47-4cd2-828a-4d091b6c7f5b" (UID: "5f8096c1-6a47-4cd2-828a-4d091b6c7f5b"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:22:03 crc kubenswrapper[5016]: I1011 08:22:03.774671 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "5f8096c1-6a47-4cd2-828a-4d091b6c7f5b" (UID: "5f8096c1-6a47-4cd2-828a-4d091b6c7f5b"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 08:22:03 crc kubenswrapper[5016]: I1011 08:22:03.777739 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-inventory" (OuterVolumeSpecName: "inventory") pod "5f8096c1-6a47-4cd2-828a-4d091b6c7f5b" (UID: "5f8096c1-6a47-4cd2-828a-4d091b6c7f5b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:22:03 crc kubenswrapper[5016]: I1011 08:22:03.798622 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5f8096c1-6a47-4cd2-828a-4d091b6c7f5b" (UID: "5f8096c1-6a47-4cd2-828a-4d091b6c7f5b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:22:03 crc kubenswrapper[5016]: I1011 08:22:03.848633 5016 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 11 08:22:03 crc kubenswrapper[5016]: I1011 08:22:03.848688 5016 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-ceph\") on node \"crc\" DevicePath \"\"" Oct 11 08:22:03 crc kubenswrapper[5016]: I1011 08:22:03.848703 5016 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Oct 11 08:22:03 crc kubenswrapper[5016]: I1011 08:22:03.848714 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9cp8\" (UniqueName: \"kubernetes.io/projected/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-kube-api-access-f9cp8\") on node \"crc\" DevicePath \"\"" Oct 11 08:22:03 crc kubenswrapper[5016]: I1011 08:22:03.848726 5016 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 08:22:03 crc kubenswrapper[5016]: I1011 08:22:03.848735 5016 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5f8096c1-6a47-4cd2-828a-4d091b6c7f5b-inventory\") on node \"crc\" DevicePath \"\"" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.116256 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld" event={"ID":"5f8096c1-6a47-4cd2-828a-4d091b6c7f5b","Type":"ContainerDied","Data":"c0650778eb3bcb0b92dd5e5a8da37090fec64bf579870e678fde21aeb235f5b0"} Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.116314 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vpxld" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.116326 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0650778eb3bcb0b92dd5e5a8da37090fec64bf579870e678fde21aeb235f5b0" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.259618 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s"] Oct 11 08:22:04 crc kubenswrapper[5016]: E1011 08:22:04.260068 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6eeaac1-1d42-4297-b8a8-e30fe22b698e" containerName="registry-server" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.260094 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6eeaac1-1d42-4297-b8a8-e30fe22b698e" containerName="registry-server" Oct 11 08:22:04 crc kubenswrapper[5016]: E1011 08:22:04.260115 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6053a0a1-2e89-4f38-93d7-d270a290d2d4" containerName="registry-server" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.260124 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="6053a0a1-2e89-4f38-93d7-d270a290d2d4" containerName="registry-server" Oct 11 08:22:04 crc kubenswrapper[5016]: E1011 08:22:04.260146 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f8096c1-6a47-4cd2-828a-4d091b6c7f5b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.260156 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f8096c1-6a47-4cd2-828a-4d091b6c7f5b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 11 08:22:04 crc kubenswrapper[5016]: E1011 08:22:04.260180 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6053a0a1-2e89-4f38-93d7-d270a290d2d4" containerName="extract-utilities" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.260188 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="6053a0a1-2e89-4f38-93d7-d270a290d2d4" containerName="extract-utilities" Oct 11 08:22:04 crc kubenswrapper[5016]: E1011 08:22:04.260198 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6eeaac1-1d42-4297-b8a8-e30fe22b698e" containerName="extract-utilities" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.260209 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6eeaac1-1d42-4297-b8a8-e30fe22b698e" containerName="extract-utilities" Oct 11 08:22:04 crc kubenswrapper[5016]: E1011 08:22:04.260223 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6eeaac1-1d42-4297-b8a8-e30fe22b698e" containerName="extract-content" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.260231 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6eeaac1-1d42-4297-b8a8-e30fe22b698e" containerName="extract-content" Oct 11 08:22:04 crc kubenswrapper[5016]: E1011 08:22:04.260240 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6053a0a1-2e89-4f38-93d7-d270a290d2d4" containerName="extract-content" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.260246 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="6053a0a1-2e89-4f38-93d7-d270a290d2d4" containerName="extract-content" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.260408 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="6053a0a1-2e89-4f38-93d7-d270a290d2d4" containerName="registry-server" Oct 11 08:22:04 crc 
kubenswrapper[5016]: I1011 08:22:04.260430 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f8096c1-6a47-4cd2-828a-4d091b6c7f5b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.260439 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6eeaac1-1d42-4297-b8a8-e30fe22b698e" containerName="registry-server" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.261048 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.263988 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.264082 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.264390 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.264967 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.265400 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l8l9k" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.266705 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.270547 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.306102 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s"] Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.358926 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn2vd\" (UniqueName: \"kubernetes.io/projected/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-kube-api-access-tn2vd\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s\" (UID: \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.359027 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s\" (UID: \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.359084 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s\" (UID: \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" Oct 11 
08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.359289 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s\" (UID: \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.359456 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s\" (UID: \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.359731 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s\" (UID: \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.359864 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s\" (UID: \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.463224 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn2vd\" (UniqueName: \"kubernetes.io/projected/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-kube-api-access-tn2vd\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s\" (UID: \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.463350 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s\" (UID: \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.463428 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s\" (UID: \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.463503 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-ceph\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s\" (UID: \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.463567 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s\" (UID: \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.463627 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s\" (UID: \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.463684 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s\" (UID: \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.468151 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s\" (UID: \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.468249 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s\" (UID: \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.469429 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s\" (UID: \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.469923 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s\" (UID: \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.469997 5016 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s\" (UID: \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.470508 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s\" (UID: \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.481752 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn2vd\" (UniqueName: \"kubernetes.io/projected/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-kube-api-access-tn2vd\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s\" (UID: \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" Oct 11 08:22:04 crc kubenswrapper[5016]: I1011 08:22:04.593927 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" Oct 11 08:22:05 crc kubenswrapper[5016]: I1011 08:22:05.218055 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s"] Oct 11 08:22:05 crc kubenswrapper[5016]: W1011 08:22:05.218465 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod862bdbf2_3427_4d44_90c0_fa61d1a9b3ba.slice/crio-f831a4226725ebf0a1fb62f05727675263b4427ec9d36d9195b5c2e0746c035e WatchSource:0}: Error finding container f831a4226725ebf0a1fb62f05727675263b4427ec9d36d9195b5c2e0746c035e: Status 404 returned error can't find the container with id f831a4226725ebf0a1fb62f05727675263b4427ec9d36d9195b5c2e0746c035e Oct 11 08:22:06 crc kubenswrapper[5016]: I1011 08:22:06.138565 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" event={"ID":"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba","Type":"ContainerStarted","Data":"988443297f0da9d3cdc6eaaf839f72106abf48216ccf4c5847e598d557fe1e68"} Oct 11 08:22:06 crc kubenswrapper[5016]: I1011 08:22:06.139086 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" event={"ID":"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba","Type":"ContainerStarted","Data":"f831a4226725ebf0a1fb62f05727675263b4427ec9d36d9195b5c2e0746c035e"} Oct 11 08:22:06 crc kubenswrapper[5016]: I1011 08:22:06.158684 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" podStartSLOduration=1.6801676300000001 podStartE2EDuration="2.158642246s" podCreationTimestamp="2025-10-11 08:22:04 +0000 UTC" firstStartedPulling="2025-10-11 08:22:05.223880431 +0000 UTC m=+2513.124336407" lastFinishedPulling="2025-10-11 08:22:05.702355077 +0000 UTC m=+2513.602811023" observedRunningTime="2025-10-11 08:22:06.154776724 +0000 UTC m=+2514.055232710" watchObservedRunningTime="2025-10-11 08:22:06.158642246 +0000 UTC m=+2514.059098202" Oct 11 08:23:14 crc kubenswrapper[5016]: I1011 08:23:14.043714 5016 
generic.go:334] "Generic (PLEG): container finished" podID="862bdbf2-3427-4d44-90c0-fa61d1a9b3ba" containerID="988443297f0da9d3cdc6eaaf839f72106abf48216ccf4c5847e598d557fe1e68" exitCode=0 Oct 11 08:23:14 crc kubenswrapper[5016]: I1011 08:23:14.043811 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" event={"ID":"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba","Type":"ContainerDied","Data":"988443297f0da9d3cdc6eaaf839f72106abf48216ccf4c5847e598d557fe1e68"} Oct 11 08:23:15 crc kubenswrapper[5016]: I1011 08:23:15.710144 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" Oct 11 08:23:15 crc kubenswrapper[5016]: I1011 08:23:15.752637 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-inventory\") pod \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\" (UID: \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\") " Oct 11 08:23:15 crc kubenswrapper[5016]: I1011 08:23:15.752805 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-ssh-key\") pod \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\" (UID: \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\") " Oct 11 08:23:15 crc kubenswrapper[5016]: I1011 08:23:15.753049 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-neutron-metadata-combined-ca-bundle\") pod \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\" (UID: \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\") " Oct 11 08:23:15 crc kubenswrapper[5016]: I1011 08:23:15.753120 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn2vd\" (UniqueName: \"kubernetes.io/projected/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-kube-api-access-tn2vd\") pod \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\" (UID: \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\") " Oct 11 08:23:15 crc kubenswrapper[5016]: I1011 08:23:15.753307 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-ceph\") pod \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\" (UID: \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\") " Oct 11 08:23:15 crc kubenswrapper[5016]: I1011 08:23:15.753362 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-nova-metadata-neutron-config-0\") pod \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\" (UID: \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\") " Oct 11 08:23:15 crc kubenswrapper[5016]: I1011 08:23:15.753438 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-neutron-ovn-metadata-agent-neutron-config-0\") pod \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\" (UID: \"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba\") " Oct 11 08:23:15 crc kubenswrapper[5016]: I1011 08:23:15.768336 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-neutron-metadata-combined-ca-bundle" 
(OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "862bdbf2-3427-4d44-90c0-fa61d1a9b3ba" (UID: "862bdbf2-3427-4d44-90c0-fa61d1a9b3ba"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:23:15 crc kubenswrapper[5016]: I1011 08:23:15.769690 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-kube-api-access-tn2vd" (OuterVolumeSpecName: "kube-api-access-tn2vd") pod "862bdbf2-3427-4d44-90c0-fa61d1a9b3ba" (UID: "862bdbf2-3427-4d44-90c0-fa61d1a9b3ba"). InnerVolumeSpecName "kube-api-access-tn2vd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:23:15 crc kubenswrapper[5016]: I1011 08:23:15.792650 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-ceph" (OuterVolumeSpecName: "ceph") pod "862bdbf2-3427-4d44-90c0-fa61d1a9b3ba" (UID: "862bdbf2-3427-4d44-90c0-fa61d1a9b3ba"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:23:15 crc kubenswrapper[5016]: I1011 08:23:15.811893 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "862bdbf2-3427-4d44-90c0-fa61d1a9b3ba" (UID: "862bdbf2-3427-4d44-90c0-fa61d1a9b3ba"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:23:15 crc kubenswrapper[5016]: I1011 08:23:15.812408 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "862bdbf2-3427-4d44-90c0-fa61d1a9b3ba" (UID: "862bdbf2-3427-4d44-90c0-fa61d1a9b3ba"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:23:15 crc kubenswrapper[5016]: I1011 08:23:15.813416 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "862bdbf2-3427-4d44-90c0-fa61d1a9b3ba" (UID: "862bdbf2-3427-4d44-90c0-fa61d1a9b3ba"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:23:15 crc kubenswrapper[5016]: I1011 08:23:15.823158 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-inventory" (OuterVolumeSpecName: "inventory") pod "862bdbf2-3427-4d44-90c0-fa61d1a9b3ba" (UID: "862bdbf2-3427-4d44-90c0-fa61d1a9b3ba"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:23:15 crc kubenswrapper[5016]: I1011 08:23:15.856492 5016 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-ceph\") on node \"crc\" DevicePath \"\"" Oct 11 08:23:15 crc kubenswrapper[5016]: I1011 08:23:15.856957 5016 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 11 08:23:15 crc kubenswrapper[5016]: I1011 08:23:15.857146 5016 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 11 08:23:15 crc kubenswrapper[5016]: I1011 08:23:15.857377 5016 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-inventory\") on node \"crc\" DevicePath \"\"" Oct 11 08:23:15 crc kubenswrapper[5016]: I1011 08:23:15.858724 5016 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 11 08:23:15 crc kubenswrapper[5016]: I1011 08:23:15.858898 5016 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 08:23:15 crc kubenswrapper[5016]: I1011 08:23:15.859052 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tn2vd\" (UniqueName: \"kubernetes.io/projected/862bdbf2-3427-4d44-90c0-fa61d1a9b3ba-kube-api-access-tn2vd\") on node \"crc\" DevicePath \"\"" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.084491 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" event={"ID":"862bdbf2-3427-4d44-90c0-fa61d1a9b3ba","Type":"ContainerDied","Data":"f831a4226725ebf0a1fb62f05727675263b4427ec9d36d9195b5c2e0746c035e"} Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.084640 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f831a4226725ebf0a1fb62f05727675263b4427ec9d36d9195b5c2e0746c035e" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.084890 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.195638 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s"] Oct 11 08:23:16 crc kubenswrapper[5016]: E1011 08:23:16.196940 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="862bdbf2-3427-4d44-90c0-fa61d1a9b3ba" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.196976 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="862bdbf2-3427-4d44-90c0-fa61d1a9b3ba" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.197297 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="862bdbf2-3427-4d44-90c0-fa61d1a9b3ba" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.198254 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.205409 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.205696 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.205871 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.206023 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.206380 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l8l9k" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.215815 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.220005 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s"] Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.274093 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcqn4\" (UniqueName: \"kubernetes.io/projected/05ad8521-c18a-40bb-bb25-8a981f9009b4-kube-api-access-wcqn4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s\" (UID: \"05ad8521-c18a-40bb-bb25-8a981f9009b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.276792 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s\" (UID: \"05ad8521-c18a-40bb-bb25-8a981f9009b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.277025 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s\" (UID: \"05ad8521-c18a-40bb-bb25-8a981f9009b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.277264 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s\" (UID: \"05ad8521-c18a-40bb-bb25-8a981f9009b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.277349 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s\" (UID: \"05ad8521-c18a-40bb-bb25-8a981f9009b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.277405 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s\" (UID: \"05ad8521-c18a-40bb-bb25-8a981f9009b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.379376 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcqn4\" (UniqueName: \"kubernetes.io/projected/05ad8521-c18a-40bb-bb25-8a981f9009b4-kube-api-access-wcqn4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s\" (UID: \"05ad8521-c18a-40bb-bb25-8a981f9009b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.379465 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s\" (UID: \"05ad8521-c18a-40bb-bb25-8a981f9009b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.379526 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s\" (UID: \"05ad8521-c18a-40bb-bb25-8a981f9009b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.379572 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s\" (UID: \"05ad8521-c18a-40bb-bb25-8a981f9009b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.379599 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s\" (UID: \"05ad8521-c18a-40bb-bb25-8a981f9009b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.379636 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s\" (UID: \"05ad8521-c18a-40bb-bb25-8a981f9009b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.384320 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s\" (UID: \"05ad8521-c18a-40bb-bb25-8a981f9009b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.386463 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s\" (UID: \"05ad8521-c18a-40bb-bb25-8a981f9009b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.386498 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s\" (UID: \"05ad8521-c18a-40bb-bb25-8a981f9009b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.388485 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s\" (UID: \"05ad8521-c18a-40bb-bb25-8a981f9009b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.388635 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s\" (UID: \"05ad8521-c18a-40bb-bb25-8a981f9009b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.406611 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcqn4\" (UniqueName: \"kubernetes.io/projected/05ad8521-c18a-40bb-bb25-8a981f9009b4-kube-api-access-wcqn4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s\" (UID: \"05ad8521-c18a-40bb-bb25-8a981f9009b4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.521313 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s" Oct 11 08:23:16 crc kubenswrapper[5016]: I1011 08:23:16.969007 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s"] Oct 11 08:23:17 crc kubenswrapper[5016]: I1011 08:23:17.098708 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s" event={"ID":"05ad8521-c18a-40bb-bb25-8a981f9009b4","Type":"ContainerStarted","Data":"3ed5cb0ff1d35601fbb07838b4340fff33f1d04ebeedfa29363a0af9a7648870"} Oct 11 08:23:18 crc kubenswrapper[5016]: I1011 08:23:18.112553 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s" event={"ID":"05ad8521-c18a-40bb-bb25-8a981f9009b4","Type":"ContainerStarted","Data":"987084d33a4955d77ec94d7f283e89649da0c35e177d910b89e9118fb7d72f4c"} Oct 11 08:23:18 crc kubenswrapper[5016]: I1011 08:23:18.151598 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s" podStartSLOduration=1.588288028 podStartE2EDuration="2.151571782s" podCreationTimestamp="2025-10-11 08:23:16 +0000 UTC" firstStartedPulling="2025-10-11 08:23:16.97805543 +0000 UTC m=+2584.878511386" lastFinishedPulling="2025-10-11 08:23:17.541339194 +0000 UTC m=+2585.441795140" observedRunningTime="2025-10-11 08:23:18.145423489 +0000 UTC m=+2586.045879435" watchObservedRunningTime="2025-10-11 08:23:18.151571782 +0000 UTC m=+2586.052027738" Oct 11 08:23:37 crc kubenswrapper[5016]: I1011 08:23:37.122577 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 08:23:37 crc kubenswrapper[5016]: I1011 08:23:37.123734 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 08:24:07 crc kubenswrapper[5016]: I1011 08:24:07.123057 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 08:24:07 crc kubenswrapper[5016]: I1011 08:24:07.123927 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 08:24:37 crc kubenswrapper[5016]: I1011 08:24:37.122894 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 08:24:37 crc kubenswrapper[5016]: I1011 08:24:37.124057 5016 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 08:24:37 crc kubenswrapper[5016]: I1011 08:24:37.124157 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 08:24:37 crc kubenswrapper[5016]: I1011 08:24:37.125558 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a1917a4e4704e002239e88121fbc4c5074fe869243f23c61e0f6e8ef2e29a073"} pod="openshift-machine-config-operator/machine-config-daemon-49bvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 11 08:24:37 crc kubenswrapper[5016]: I1011 08:24:37.125762 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" containerID="cri-o://a1917a4e4704e002239e88121fbc4c5074fe869243f23c61e0f6e8ef2e29a073" gracePeriod=600 Oct 11 08:24:38 crc kubenswrapper[5016]: I1011 08:24:38.229258 5016 generic.go:334] "Generic (PLEG): container finished" podID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerID="a1917a4e4704e002239e88121fbc4c5074fe869243f23c61e0f6e8ef2e29a073" exitCode=0 Oct 11 08:24:38 crc kubenswrapper[5016]: I1011 08:24:38.229335 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerDied","Data":"a1917a4e4704e002239e88121fbc4c5074fe869243f23c61e0f6e8ef2e29a073"} Oct 11 08:24:38 crc kubenswrapper[5016]: I1011 08:24:38.232232 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72"} Oct 11 08:24:38 crc kubenswrapper[5016]: I1011 08:24:38.232305 5016 scope.go:117] "RemoveContainer" containerID="8ad965838f64a65c7540a078f594825d0f1d5ba56391d354a51afb9af339aa65" Oct 11 08:26:37 crc kubenswrapper[5016]: I1011 08:26:37.122396 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 08:26:37 crc kubenswrapper[5016]: I1011 08:26:37.123230 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 08:27:07 crc kubenswrapper[5016]: I1011 08:27:07.122858 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 08:27:07 
crc kubenswrapper[5016]: I1011 08:27:07.123931 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 08:27:37 crc kubenswrapper[5016]: I1011 08:27:37.122586 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 08:27:37 crc kubenswrapper[5016]: I1011 08:27:37.123838 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 08:27:37 crc kubenswrapper[5016]: I1011 08:27:37.123933 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 08:27:37 crc kubenswrapper[5016]: I1011 08:27:37.125463 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72"} pod="openshift-machine-config-operator/machine-config-daemon-49bvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 11 08:27:37 crc kubenswrapper[5016]: I1011 08:27:37.125633 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" containerID="cri-o://58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72" gracePeriod=600 Oct 11 08:27:37 crc kubenswrapper[5016]: E1011 08:27:37.269395 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:27:37 crc kubenswrapper[5016]: I1011 08:27:37.326901 5016 generic.go:334] "Generic (PLEG): container finished" podID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerID="58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72" exitCode=0 Oct 11 08:27:37 crc kubenswrapper[5016]: I1011 08:27:37.326961 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerDied","Data":"58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72"} Oct 11 08:27:37 crc kubenswrapper[5016]: I1011 08:27:37.328595 5016 scope.go:117] "RemoveContainer" containerID="a1917a4e4704e002239e88121fbc4c5074fe869243f23c61e0f6e8ef2e29a073" Oct 11 08:27:37 crc kubenswrapper[5016]: I1011 08:27:37.329784 5016 scope.go:117] "RemoveContainer" 
containerID="58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72" Oct 11 08:27:37 crc kubenswrapper[5016]: E1011 08:27:37.330315 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:27:51 crc kubenswrapper[5016]: I1011 08:27:51.139387 5016 scope.go:117] "RemoveContainer" containerID="58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72" Oct 11 08:27:51 crc kubenswrapper[5016]: E1011 08:27:51.140609 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:28:02 crc kubenswrapper[5016]: I1011 08:28:02.134418 5016 scope.go:117] "RemoveContainer" containerID="58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72" Oct 11 08:28:02 crc kubenswrapper[5016]: E1011 08:28:02.135550 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:28:14 crc kubenswrapper[5016]: I1011 08:28:14.134319 5016 scope.go:117] "RemoveContainer" containerID="58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72" Oct 11 08:28:14 crc kubenswrapper[5016]: E1011 08:28:14.135886 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:28:25 crc kubenswrapper[5016]: I1011 08:28:25.913481 5016 generic.go:334] "Generic (PLEG): container finished" podID="05ad8521-c18a-40bb-bb25-8a981f9009b4" containerID="987084d33a4955d77ec94d7f283e89649da0c35e177d910b89e9118fb7d72f4c" exitCode=0 Oct 11 08:28:25 crc kubenswrapper[5016]: I1011 08:28:25.913594 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s" event={"ID":"05ad8521-c18a-40bb-bb25-8a981f9009b4","Type":"ContainerDied","Data":"987084d33a4955d77ec94d7f283e89649da0c35e177d910b89e9118fb7d72f4c"} Oct 11 08:28:27 crc kubenswrapper[5016]: I1011 08:28:27.133742 5016 scope.go:117] "RemoveContainer" containerID="58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72" Oct 11 08:28:27 crc kubenswrapper[5016]: E1011 08:28:27.134728 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:28:27 crc kubenswrapper[5016]: I1011 08:28:27.442338 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s" Oct 11 08:28:27 crc kubenswrapper[5016]: I1011 08:28:27.500809 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-inventory\") pod \"05ad8521-c18a-40bb-bb25-8a981f9009b4\" (UID: \"05ad8521-c18a-40bb-bb25-8a981f9009b4\") " Oct 11 08:28:27 crc kubenswrapper[5016]: I1011 08:28:27.500992 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-libvirt-combined-ca-bundle\") pod \"05ad8521-c18a-40bb-bb25-8a981f9009b4\" (UID: \"05ad8521-c18a-40bb-bb25-8a981f9009b4\") " Oct 11 08:28:27 crc kubenswrapper[5016]: I1011 08:28:27.501158 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcqn4\" (UniqueName: \"kubernetes.io/projected/05ad8521-c18a-40bb-bb25-8a981f9009b4-kube-api-access-wcqn4\") pod \"05ad8521-c18a-40bb-bb25-8a981f9009b4\" (UID: \"05ad8521-c18a-40bb-bb25-8a981f9009b4\") " Oct 11 08:28:27 crc kubenswrapper[5016]: I1011 08:28:27.501301 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-libvirt-secret-0\") pod \"05ad8521-c18a-40bb-bb25-8a981f9009b4\" (UID: \"05ad8521-c18a-40bb-bb25-8a981f9009b4\") " Oct 11 08:28:27 crc kubenswrapper[5016]: I1011 08:28:27.501872 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-ceph\") pod \"05ad8521-c18a-40bb-bb25-8a981f9009b4\" (UID: \"05ad8521-c18a-40bb-bb25-8a981f9009b4\") " Oct 11 08:28:27 crc kubenswrapper[5016]: I1011 08:28:27.501912 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-ssh-key\") pod \"05ad8521-c18a-40bb-bb25-8a981f9009b4\" (UID: \"05ad8521-c18a-40bb-bb25-8a981f9009b4\") " Oct 11 08:28:27 crc kubenswrapper[5016]: I1011 08:28:27.509216 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05ad8521-c18a-40bb-bb25-8a981f9009b4-kube-api-access-wcqn4" (OuterVolumeSpecName: "kube-api-access-wcqn4") pod "05ad8521-c18a-40bb-bb25-8a981f9009b4" (UID: "05ad8521-c18a-40bb-bb25-8a981f9009b4"). InnerVolumeSpecName "kube-api-access-wcqn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:28:27 crc kubenswrapper[5016]: I1011 08:28:27.511356 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-ceph" (OuterVolumeSpecName: "ceph") pod "05ad8521-c18a-40bb-bb25-8a981f9009b4" (UID: "05ad8521-c18a-40bb-bb25-8a981f9009b4"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:28:27 crc kubenswrapper[5016]: I1011 08:28:27.527015 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "05ad8521-c18a-40bb-bb25-8a981f9009b4" (UID: "05ad8521-c18a-40bb-bb25-8a981f9009b4"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:28:27 crc kubenswrapper[5016]: I1011 08:28:27.540948 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "05ad8521-c18a-40bb-bb25-8a981f9009b4" (UID: "05ad8521-c18a-40bb-bb25-8a981f9009b4"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:28:27 crc kubenswrapper[5016]: I1011 08:28:27.545722 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-inventory" (OuterVolumeSpecName: "inventory") pod "05ad8521-c18a-40bb-bb25-8a981f9009b4" (UID: "05ad8521-c18a-40bb-bb25-8a981f9009b4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:28:27 crc kubenswrapper[5016]: I1011 08:28:27.546182 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "05ad8521-c18a-40bb-bb25-8a981f9009b4" (UID: "05ad8521-c18a-40bb-bb25-8a981f9009b4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:28:27 crc kubenswrapper[5016]: I1011 08:28:27.604741 5016 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Oct 11 08:28:27 crc kubenswrapper[5016]: I1011 08:28:27.604803 5016 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-ceph\") on node \"crc\" DevicePath \"\"" Oct 11 08:28:27 crc kubenswrapper[5016]: I1011 08:28:27.604820 5016 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 11 08:28:27 crc kubenswrapper[5016]: I1011 08:28:27.604832 5016 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-inventory\") on node \"crc\" DevicePath \"\"" Oct 11 08:28:27 crc kubenswrapper[5016]: I1011 08:28:27.604846 5016 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ad8521-c18a-40bb-bb25-8a981f9009b4-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 08:28:27 crc kubenswrapper[5016]: I1011 08:28:27.604862 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcqn4\" (UniqueName: \"kubernetes.io/projected/05ad8521-c18a-40bb-bb25-8a981f9009b4-kube-api-access-wcqn4\") on node \"crc\" DevicePath \"\"" Oct 11 08:28:27 crc kubenswrapper[5016]: I1011 08:28:27.939726 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s" event={"ID":"05ad8521-c18a-40bb-bb25-8a981f9009b4","Type":"ContainerDied","Data":"3ed5cb0ff1d35601fbb07838b4340fff33f1d04ebeedfa29363a0af9a7648870"} Oct 11 08:28:27 crc kubenswrapper[5016]: I1011 08:28:27.939800 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ed5cb0ff1d35601fbb07838b4340fff33f1d04ebeedfa29363a0af9a7648870" Oct 11 08:28:27 crc kubenswrapper[5016]: I1011 08:28:27.939751 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.080995 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p"] Oct 11 08:28:28 crc kubenswrapper[5016]: E1011 08:28:28.081572 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05ad8521-c18a-40bb-bb25-8a981f9009b4" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.081598 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="05ad8521-c18a-40bb-bb25-8a981f9009b4" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.081832 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="05ad8521-c18a-40bb-bb25-8a981f9009b4" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.082632 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.085392 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.085479 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l8l9k" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.085742 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.086413 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.086634 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.086838 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.086998 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.087194 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.087381 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.097980 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p"] Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.219625 5016 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.219723 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.219766 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.219808 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/3920c74b-a214-4f41-975a-5ec0db3c3212-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.219855 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxlq4\" (UniqueName: \"kubernetes.io/projected/3920c74b-a214-4f41-975a-5ec0db3c3212-kube-api-access-jxlq4\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.220910 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.220998 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.221270 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-cell1-compute-config-1\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.222279 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.222369 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.222405 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.324406 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/3920c74b-a214-4f41-975a-5ec0db3c3212-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.324756 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxlq4\" (UniqueName: \"kubernetes.io/projected/3920c74b-a214-4f41-975a-5ec0db3c3212-kube-api-access-jxlq4\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.324954 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.325061 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.325209 5016 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.325333 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.325446 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.325544 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.325712 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.325838 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.325259 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/3920c74b-a214-4f41-975a-5ec0db3c3212-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.325951 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 
08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.327097 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.330607 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.331364 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.331396 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.331872 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.332049 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.332094 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.332433 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.332964 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.347451 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxlq4\" (UniqueName: \"kubernetes.io/projected/3920c74b-a214-4f41-975a-5ec0db3c3212-kube-api-access-jxlq4\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:28 crc kubenswrapper[5016]: I1011 08:28:28.410214 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:28:29 crc kubenswrapper[5016]: I1011 08:28:29.002035 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p"] Oct 11 08:28:29 crc kubenswrapper[5016]: I1011 08:28:29.007399 5016 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 08:28:29 crc kubenswrapper[5016]: I1011 08:28:29.967872 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" event={"ID":"3920c74b-a214-4f41-975a-5ec0db3c3212","Type":"ContainerStarted","Data":"276ec5ed175164540cbc509dbbace1deda13bf536d6685d060d1dcfe818d76e9"} Oct 11 08:28:29 crc kubenswrapper[5016]: I1011 08:28:29.968692 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" event={"ID":"3920c74b-a214-4f41-975a-5ec0db3c3212","Type":"ContainerStarted","Data":"2e58f16462a88d9f26dc3f6738788c30e1e222b49f3637c106206bc5bd5b773d"} Oct 11 08:28:30 crc kubenswrapper[5016]: I1011 08:28:30.008232 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" podStartSLOduration=1.593349481 podStartE2EDuration="2.008210758s" podCreationTimestamp="2025-10-11 08:28:28 +0000 UTC" firstStartedPulling="2025-10-11 08:28:29.007124815 +0000 UTC m=+2896.907580751" lastFinishedPulling="2025-10-11 08:28:29.421986072 +0000 UTC m=+2897.322442028" observedRunningTime="2025-10-11 08:28:30.000087181 +0000 UTC m=+2897.900543137" watchObservedRunningTime="2025-10-11 08:28:30.008210758 +0000 UTC m=+2897.908666704" Oct 11 08:28:39 crc kubenswrapper[5016]: I1011 08:28:39.134333 5016 scope.go:117] "RemoveContainer" containerID="58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72" Oct 11 08:28:39 crc kubenswrapper[5016]: E1011 08:28:39.135318 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" 
podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:28:53 crc kubenswrapper[5016]: I1011 08:28:53.145898 5016 scope.go:117] "RemoveContainer" containerID="58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72" Oct 11 08:28:53 crc kubenswrapper[5016]: E1011 08:28:53.147763 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:29:05 crc kubenswrapper[5016]: I1011 08:29:05.134240 5016 scope.go:117] "RemoveContainer" containerID="58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72" Oct 11 08:29:05 crc kubenswrapper[5016]: E1011 08:29:05.135647 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:29:20 crc kubenswrapper[5016]: I1011 08:29:20.134286 5016 scope.go:117] "RemoveContainer" containerID="58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72" Oct 11 08:29:20 crc kubenswrapper[5016]: E1011 08:29:20.136120 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:29:20 crc kubenswrapper[5016]: I1011 08:29:20.355301 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mvntf"] Oct 11 08:29:20 crc kubenswrapper[5016]: I1011 08:29:20.358104 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mvntf" Oct 11 08:29:20 crc kubenswrapper[5016]: I1011 08:29:20.372211 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mvntf"] Oct 11 08:29:20 crc kubenswrapper[5016]: I1011 08:29:20.410535 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da520389-f1bf-480b-a8d1-837404605a25-utilities\") pod \"community-operators-mvntf\" (UID: \"da520389-f1bf-480b-a8d1-837404605a25\") " pod="openshift-marketplace/community-operators-mvntf" Oct 11 08:29:20 crc kubenswrapper[5016]: I1011 08:29:20.410634 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t95rd\" (UniqueName: \"kubernetes.io/projected/da520389-f1bf-480b-a8d1-837404605a25-kube-api-access-t95rd\") pod \"community-operators-mvntf\" (UID: \"da520389-f1bf-480b-a8d1-837404605a25\") " pod="openshift-marketplace/community-operators-mvntf" Oct 11 08:29:20 crc kubenswrapper[5016]: I1011 08:29:20.410679 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da520389-f1bf-480b-a8d1-837404605a25-catalog-content\") pod \"community-operators-mvntf\" (UID: \"da520389-f1bf-480b-a8d1-837404605a25\") " pod="openshift-marketplace/community-operators-mvntf" Oct 11 08:29:20 crc kubenswrapper[5016]: I1011 08:29:20.513439 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da520389-f1bf-480b-a8d1-837404605a25-utilities\") pod \"community-operators-mvntf\" (UID: \"da520389-f1bf-480b-a8d1-837404605a25\") " pod="openshift-marketplace/community-operators-mvntf" Oct 11 08:29:20 crc kubenswrapper[5016]: I1011 08:29:20.513567 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t95rd\" (UniqueName: \"kubernetes.io/projected/da520389-f1bf-480b-a8d1-837404605a25-kube-api-access-t95rd\") pod \"community-operators-mvntf\" (UID: \"da520389-f1bf-480b-a8d1-837404605a25\") " pod="openshift-marketplace/community-operators-mvntf" Oct 11 08:29:20 crc kubenswrapper[5016]: I1011 08:29:20.513606 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da520389-f1bf-480b-a8d1-837404605a25-catalog-content\") pod \"community-operators-mvntf\" (UID: \"da520389-f1bf-480b-a8d1-837404605a25\") " pod="openshift-marketplace/community-operators-mvntf" Oct 11 08:29:20 crc kubenswrapper[5016]: I1011 08:29:20.514505 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da520389-f1bf-480b-a8d1-837404605a25-catalog-content\") pod \"community-operators-mvntf\" (UID: \"da520389-f1bf-480b-a8d1-837404605a25\") " pod="openshift-marketplace/community-operators-mvntf" Oct 11 08:29:20 crc kubenswrapper[5016]: I1011 08:29:20.514879 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da520389-f1bf-480b-a8d1-837404605a25-utilities\") pod \"community-operators-mvntf\" (UID: \"da520389-f1bf-480b-a8d1-837404605a25\") " pod="openshift-marketplace/community-operators-mvntf" Oct 11 08:29:20 crc kubenswrapper[5016]: I1011 08:29:20.559548 5016 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-t95rd\" (UniqueName: \"kubernetes.io/projected/da520389-f1bf-480b-a8d1-837404605a25-kube-api-access-t95rd\") pod \"community-operators-mvntf\" (UID: \"da520389-f1bf-480b-a8d1-837404605a25\") " pod="openshift-marketplace/community-operators-mvntf" Oct 11 08:29:20 crc kubenswrapper[5016]: I1011 08:29:20.690426 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mvntf" Oct 11 08:29:21 crc kubenswrapper[5016]: I1011 08:29:21.303548 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mvntf"] Oct 11 08:29:21 crc kubenswrapper[5016]: I1011 08:29:21.605045 5016 generic.go:334] "Generic (PLEG): container finished" podID="da520389-f1bf-480b-a8d1-837404605a25" containerID="f9ad18edf176cd22ca1697d069ee0a526e93209e32f8b01e7312837f979290ee" exitCode=0 Oct 11 08:29:21 crc kubenswrapper[5016]: I1011 08:29:21.605133 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvntf" event={"ID":"da520389-f1bf-480b-a8d1-837404605a25","Type":"ContainerDied","Data":"f9ad18edf176cd22ca1697d069ee0a526e93209e32f8b01e7312837f979290ee"} Oct 11 08:29:21 crc kubenswrapper[5016]: I1011 08:29:21.605572 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvntf" event={"ID":"da520389-f1bf-480b-a8d1-837404605a25","Type":"ContainerStarted","Data":"bc69cf910dbbd5aef46197d231a59c9fe8c0b378d54bea9e415beb486c565cdf"} Oct 11 08:29:22 crc kubenswrapper[5016]: I1011 08:29:22.617776 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvntf" event={"ID":"da520389-f1bf-480b-a8d1-837404605a25","Type":"ContainerStarted","Data":"8345a2c1630c9d56b2f9474dfd6e7077649b899d2d1b5bfe7906729da4f13c2a"} Oct 11 08:29:23 crc kubenswrapper[5016]: I1011 08:29:23.636477 5016 generic.go:334] "Generic (PLEG): container finished" podID="da520389-f1bf-480b-a8d1-837404605a25" containerID="8345a2c1630c9d56b2f9474dfd6e7077649b899d2d1b5bfe7906729da4f13c2a" exitCode=0 Oct 11 08:29:23 crc kubenswrapper[5016]: I1011 08:29:23.636593 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvntf" event={"ID":"da520389-f1bf-480b-a8d1-837404605a25","Type":"ContainerDied","Data":"8345a2c1630c9d56b2f9474dfd6e7077649b899d2d1b5bfe7906729da4f13c2a"} Oct 11 08:29:24 crc kubenswrapper[5016]: I1011 08:29:24.651714 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvntf" event={"ID":"da520389-f1bf-480b-a8d1-837404605a25","Type":"ContainerStarted","Data":"51ceb2f30e31df34df34a57a08660599bfe700058c61c9dc18ed159fe02c2845"} Oct 11 08:29:24 crc kubenswrapper[5016]: I1011 08:29:24.689148 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mvntf" podStartSLOduration=2.2352293149999998 podStartE2EDuration="4.689104238s" podCreationTimestamp="2025-10-11 08:29:20 +0000 UTC" firstStartedPulling="2025-10-11 08:29:21.606937725 +0000 UTC m=+2949.507393661" lastFinishedPulling="2025-10-11 08:29:24.060812598 +0000 UTC m=+2951.961268584" observedRunningTime="2025-10-11 08:29:24.674092677 +0000 UTC m=+2952.574548663" watchObservedRunningTime="2025-10-11 08:29:24.689104238 +0000 UTC m=+2952.589560224" Oct 11 08:29:30 crc kubenswrapper[5016]: I1011 08:29:30.691819 5016 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mvntf" Oct 11 08:29:30 crc kubenswrapper[5016]: I1011 08:29:30.692809 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mvntf" Oct 11 08:29:30 crc kubenswrapper[5016]: I1011 08:29:30.766923 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mvntf" Oct 11 08:29:30 crc kubenswrapper[5016]: I1011 08:29:30.853743 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mvntf" Oct 11 08:29:31 crc kubenswrapper[5016]: I1011 08:29:31.030193 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mvntf"] Oct 11 08:29:32 crc kubenswrapper[5016]: I1011 08:29:32.753792 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mvntf" podUID="da520389-f1bf-480b-a8d1-837404605a25" containerName="registry-server" containerID="cri-o://51ceb2f30e31df34df34a57a08660599bfe700058c61c9dc18ed159fe02c2845" gracePeriod=2 Oct 11 08:29:33 crc kubenswrapper[5016]: I1011 08:29:33.145734 5016 scope.go:117] "RemoveContainer" containerID="58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72" Oct 11 08:29:33 crc kubenswrapper[5016]: E1011 08:29:33.146625 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:29:33 crc kubenswrapper[5016]: I1011 08:29:33.284561 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mvntf" Oct 11 08:29:33 crc kubenswrapper[5016]: I1011 08:29:33.398105 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da520389-f1bf-480b-a8d1-837404605a25-catalog-content\") pod \"da520389-f1bf-480b-a8d1-837404605a25\" (UID: \"da520389-f1bf-480b-a8d1-837404605a25\") " Oct 11 08:29:33 crc kubenswrapper[5016]: I1011 08:29:33.398263 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da520389-f1bf-480b-a8d1-837404605a25-utilities\") pod \"da520389-f1bf-480b-a8d1-837404605a25\" (UID: \"da520389-f1bf-480b-a8d1-837404605a25\") " Oct 11 08:29:33 crc kubenswrapper[5016]: I1011 08:29:33.398585 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t95rd\" (UniqueName: \"kubernetes.io/projected/da520389-f1bf-480b-a8d1-837404605a25-kube-api-access-t95rd\") pod \"da520389-f1bf-480b-a8d1-837404605a25\" (UID: \"da520389-f1bf-480b-a8d1-837404605a25\") " Oct 11 08:29:33 crc kubenswrapper[5016]: I1011 08:29:33.400151 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da520389-f1bf-480b-a8d1-837404605a25-utilities" (OuterVolumeSpecName: "utilities") pod "da520389-f1bf-480b-a8d1-837404605a25" (UID: "da520389-f1bf-480b-a8d1-837404605a25"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:29:33 crc kubenswrapper[5016]: I1011 08:29:33.416168 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da520389-f1bf-480b-a8d1-837404605a25-kube-api-access-t95rd" (OuterVolumeSpecName: "kube-api-access-t95rd") pod "da520389-f1bf-480b-a8d1-837404605a25" (UID: "da520389-f1bf-480b-a8d1-837404605a25"). InnerVolumeSpecName "kube-api-access-t95rd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:29:33 crc kubenswrapper[5016]: I1011 08:29:33.483389 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da520389-f1bf-480b-a8d1-837404605a25-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da520389-f1bf-480b-a8d1-837404605a25" (UID: "da520389-f1bf-480b-a8d1-837404605a25"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:29:33 crc kubenswrapper[5016]: I1011 08:29:33.501634 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t95rd\" (UniqueName: \"kubernetes.io/projected/da520389-f1bf-480b-a8d1-837404605a25-kube-api-access-t95rd\") on node \"crc\" DevicePath \"\"" Oct 11 08:29:33 crc kubenswrapper[5016]: I1011 08:29:33.502325 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da520389-f1bf-480b-a8d1-837404605a25-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 08:29:33 crc kubenswrapper[5016]: I1011 08:29:33.502386 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da520389-f1bf-480b-a8d1-837404605a25-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 08:29:33 crc kubenswrapper[5016]: I1011 08:29:33.772157 5016 generic.go:334] "Generic (PLEG): container finished" podID="da520389-f1bf-480b-a8d1-837404605a25" containerID="51ceb2f30e31df34df34a57a08660599bfe700058c61c9dc18ed159fe02c2845" exitCode=0 Oct 11 08:29:33 crc kubenswrapper[5016]: I1011 08:29:33.772235 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvntf" event={"ID":"da520389-f1bf-480b-a8d1-837404605a25","Type":"ContainerDied","Data":"51ceb2f30e31df34df34a57a08660599bfe700058c61c9dc18ed159fe02c2845"} Oct 11 08:29:33 crc kubenswrapper[5016]: I1011 08:29:33.772283 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mvntf" event={"ID":"da520389-f1bf-480b-a8d1-837404605a25","Type":"ContainerDied","Data":"bc69cf910dbbd5aef46197d231a59c9fe8c0b378d54bea9e415beb486c565cdf"} Oct 11 08:29:33 crc kubenswrapper[5016]: I1011 08:29:33.772313 5016 scope.go:117] "RemoveContainer" containerID="51ceb2f30e31df34df34a57a08660599bfe700058c61c9dc18ed159fe02c2845" Oct 11 08:29:33 crc kubenswrapper[5016]: I1011 08:29:33.773126 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mvntf" Oct 11 08:29:33 crc kubenswrapper[5016]: I1011 08:29:33.803836 5016 scope.go:117] "RemoveContainer" containerID="8345a2c1630c9d56b2f9474dfd6e7077649b899d2d1b5bfe7906729da4f13c2a" Oct 11 08:29:33 crc kubenswrapper[5016]: I1011 08:29:33.844778 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mvntf"] Oct 11 08:29:33 crc kubenswrapper[5016]: I1011 08:29:33.860920 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mvntf"] Oct 11 08:29:33 crc kubenswrapper[5016]: I1011 08:29:33.864380 5016 scope.go:117] "RemoveContainer" containerID="f9ad18edf176cd22ca1697d069ee0a526e93209e32f8b01e7312837f979290ee" Oct 11 08:29:33 crc kubenswrapper[5016]: I1011 08:29:33.910157 5016 scope.go:117] "RemoveContainer" containerID="51ceb2f30e31df34df34a57a08660599bfe700058c61c9dc18ed159fe02c2845" Oct 11 08:29:33 crc kubenswrapper[5016]: E1011 08:29:33.911070 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51ceb2f30e31df34df34a57a08660599bfe700058c61c9dc18ed159fe02c2845\": container with ID starting with 51ceb2f30e31df34df34a57a08660599bfe700058c61c9dc18ed159fe02c2845 not found: ID does not exist" containerID="51ceb2f30e31df34df34a57a08660599bfe700058c61c9dc18ed159fe02c2845" Oct 11 08:29:33 crc kubenswrapper[5016]: I1011 08:29:33.911156 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51ceb2f30e31df34df34a57a08660599bfe700058c61c9dc18ed159fe02c2845"} err="failed to get container status \"51ceb2f30e31df34df34a57a08660599bfe700058c61c9dc18ed159fe02c2845\": rpc error: code = NotFound desc = could not find container \"51ceb2f30e31df34df34a57a08660599bfe700058c61c9dc18ed159fe02c2845\": container with ID starting with 51ceb2f30e31df34df34a57a08660599bfe700058c61c9dc18ed159fe02c2845 not found: ID does not exist" Oct 11 08:29:33 crc kubenswrapper[5016]: I1011 08:29:33.911202 5016 scope.go:117] "RemoveContainer" containerID="8345a2c1630c9d56b2f9474dfd6e7077649b899d2d1b5bfe7906729da4f13c2a" Oct 11 08:29:33 crc kubenswrapper[5016]: E1011 08:29:33.911706 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8345a2c1630c9d56b2f9474dfd6e7077649b899d2d1b5bfe7906729da4f13c2a\": container with ID starting with 8345a2c1630c9d56b2f9474dfd6e7077649b899d2d1b5bfe7906729da4f13c2a not found: ID does not exist" containerID="8345a2c1630c9d56b2f9474dfd6e7077649b899d2d1b5bfe7906729da4f13c2a" Oct 11 08:29:33 crc kubenswrapper[5016]: I1011 08:29:33.911765 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8345a2c1630c9d56b2f9474dfd6e7077649b899d2d1b5bfe7906729da4f13c2a"} err="failed to get container status \"8345a2c1630c9d56b2f9474dfd6e7077649b899d2d1b5bfe7906729da4f13c2a\": rpc error: code = NotFound desc = could not find container \"8345a2c1630c9d56b2f9474dfd6e7077649b899d2d1b5bfe7906729da4f13c2a\": container with ID starting with 8345a2c1630c9d56b2f9474dfd6e7077649b899d2d1b5bfe7906729da4f13c2a not found: ID does not exist" Oct 11 08:29:33 crc kubenswrapper[5016]: I1011 08:29:33.911795 5016 scope.go:117] "RemoveContainer" containerID="f9ad18edf176cd22ca1697d069ee0a526e93209e32f8b01e7312837f979290ee" Oct 11 08:29:33 crc kubenswrapper[5016]: E1011 08:29:33.912308 5016 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f9ad18edf176cd22ca1697d069ee0a526e93209e32f8b01e7312837f979290ee\": container with ID starting with f9ad18edf176cd22ca1697d069ee0a526e93209e32f8b01e7312837f979290ee not found: ID does not exist" containerID="f9ad18edf176cd22ca1697d069ee0a526e93209e32f8b01e7312837f979290ee" Oct 11 08:29:33 crc kubenswrapper[5016]: I1011 08:29:33.912394 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9ad18edf176cd22ca1697d069ee0a526e93209e32f8b01e7312837f979290ee"} err="failed to get container status \"f9ad18edf176cd22ca1697d069ee0a526e93209e32f8b01e7312837f979290ee\": rpc error: code = NotFound desc = could not find container \"f9ad18edf176cd22ca1697d069ee0a526e93209e32f8b01e7312837f979290ee\": container with ID starting with f9ad18edf176cd22ca1697d069ee0a526e93209e32f8b01e7312837f979290ee not found: ID does not exist" Oct 11 08:29:35 crc kubenswrapper[5016]: I1011 08:29:35.151088 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da520389-f1bf-480b-a8d1-837404605a25" path="/var/lib/kubelet/pods/da520389-f1bf-480b-a8d1-837404605a25/volumes" Oct 11 08:29:45 crc kubenswrapper[5016]: I1011 08:29:45.134972 5016 scope.go:117] "RemoveContainer" containerID="58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72" Oct 11 08:29:45 crc kubenswrapper[5016]: E1011 08:29:45.136404 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:29:57 crc kubenswrapper[5016]: I1011 08:29:57.134015 5016 scope.go:117] "RemoveContainer" containerID="58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72" Oct 11 08:29:57 crc kubenswrapper[5016]: E1011 08:29:57.135481 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:30:00 crc kubenswrapper[5016]: I1011 08:30:00.220089 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336190-dhjp6"] Oct 11 08:30:00 crc kubenswrapper[5016]: E1011 08:30:00.221146 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da520389-f1bf-480b-a8d1-837404605a25" containerName="extract-content" Oct 11 08:30:00 crc kubenswrapper[5016]: I1011 08:30:00.221164 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="da520389-f1bf-480b-a8d1-837404605a25" containerName="extract-content" Oct 11 08:30:00 crc kubenswrapper[5016]: E1011 08:30:00.221184 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da520389-f1bf-480b-a8d1-837404605a25" containerName="extract-utilities" Oct 11 08:30:00 crc kubenswrapper[5016]: I1011 08:30:00.221193 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="da520389-f1bf-480b-a8d1-837404605a25" containerName="extract-utilities" Oct 11 08:30:00 crc 
kubenswrapper[5016]: E1011 08:30:00.221214 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da520389-f1bf-480b-a8d1-837404605a25" containerName="registry-server" Oct 11 08:30:00 crc kubenswrapper[5016]: I1011 08:30:00.221222 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="da520389-f1bf-480b-a8d1-837404605a25" containerName="registry-server" Oct 11 08:30:00 crc kubenswrapper[5016]: I1011 08:30:00.221477 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="da520389-f1bf-480b-a8d1-837404605a25" containerName="registry-server" Oct 11 08:30:00 crc kubenswrapper[5016]: I1011 08:30:00.222300 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336190-dhjp6" Oct 11 08:30:00 crc kubenswrapper[5016]: I1011 08:30:00.225485 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 11 08:30:00 crc kubenswrapper[5016]: I1011 08:30:00.225599 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 11 08:30:00 crc kubenswrapper[5016]: I1011 08:30:00.238394 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336190-dhjp6"] Oct 11 08:30:00 crc kubenswrapper[5016]: I1011 08:30:00.274815 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/393c7fe6-f77e-45c2-bd5c-c3e762983abd-secret-volume\") pod \"collect-profiles-29336190-dhjp6\" (UID: \"393c7fe6-f77e-45c2-bd5c-c3e762983abd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336190-dhjp6" Oct 11 08:30:00 crc kubenswrapper[5016]: I1011 08:30:00.275235 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/393c7fe6-f77e-45c2-bd5c-c3e762983abd-config-volume\") pod \"collect-profiles-29336190-dhjp6\" (UID: \"393c7fe6-f77e-45c2-bd5c-c3e762983abd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336190-dhjp6" Oct 11 08:30:00 crc kubenswrapper[5016]: I1011 08:30:00.275452 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg4j5\" (UniqueName: \"kubernetes.io/projected/393c7fe6-f77e-45c2-bd5c-c3e762983abd-kube-api-access-dg4j5\") pod \"collect-profiles-29336190-dhjp6\" (UID: \"393c7fe6-f77e-45c2-bd5c-c3e762983abd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336190-dhjp6" Oct 11 08:30:00 crc kubenswrapper[5016]: I1011 08:30:00.378345 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/393c7fe6-f77e-45c2-bd5c-c3e762983abd-config-volume\") pod \"collect-profiles-29336190-dhjp6\" (UID: \"393c7fe6-f77e-45c2-bd5c-c3e762983abd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336190-dhjp6" Oct 11 08:30:00 crc kubenswrapper[5016]: I1011 08:30:00.378451 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg4j5\" (UniqueName: \"kubernetes.io/projected/393c7fe6-f77e-45c2-bd5c-c3e762983abd-kube-api-access-dg4j5\") pod \"collect-profiles-29336190-dhjp6\" (UID: \"393c7fe6-f77e-45c2-bd5c-c3e762983abd\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29336190-dhjp6" Oct 11 08:30:00 crc kubenswrapper[5016]: I1011 08:30:00.378558 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/393c7fe6-f77e-45c2-bd5c-c3e762983abd-secret-volume\") pod \"collect-profiles-29336190-dhjp6\" (UID: \"393c7fe6-f77e-45c2-bd5c-c3e762983abd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336190-dhjp6" Oct 11 08:30:00 crc kubenswrapper[5016]: I1011 08:30:00.379676 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/393c7fe6-f77e-45c2-bd5c-c3e762983abd-config-volume\") pod \"collect-profiles-29336190-dhjp6\" (UID: \"393c7fe6-f77e-45c2-bd5c-c3e762983abd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336190-dhjp6" Oct 11 08:30:00 crc kubenswrapper[5016]: I1011 08:30:00.387081 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/393c7fe6-f77e-45c2-bd5c-c3e762983abd-secret-volume\") pod \"collect-profiles-29336190-dhjp6\" (UID: \"393c7fe6-f77e-45c2-bd5c-c3e762983abd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336190-dhjp6" Oct 11 08:30:00 crc kubenswrapper[5016]: I1011 08:30:00.400811 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg4j5\" (UniqueName: \"kubernetes.io/projected/393c7fe6-f77e-45c2-bd5c-c3e762983abd-kube-api-access-dg4j5\") pod \"collect-profiles-29336190-dhjp6\" (UID: \"393c7fe6-f77e-45c2-bd5c-c3e762983abd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336190-dhjp6" Oct 11 08:30:00 crc kubenswrapper[5016]: I1011 08:30:00.562263 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336190-dhjp6" Oct 11 08:30:01 crc kubenswrapper[5016]: I1011 08:30:01.031957 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336190-dhjp6"] Oct 11 08:30:01 crc kubenswrapper[5016]: I1011 08:30:01.100556 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336190-dhjp6" event={"ID":"393c7fe6-f77e-45c2-bd5c-c3e762983abd","Type":"ContainerStarted","Data":"c2ea65dc318070b7669d400d543ac4d3f79a4f92620f01f9ad346fdf4216bb5f"} Oct 11 08:30:02 crc kubenswrapper[5016]: I1011 08:30:02.115130 5016 generic.go:334] "Generic (PLEG): container finished" podID="393c7fe6-f77e-45c2-bd5c-c3e762983abd" containerID="1218b25b36ee6b53a71dca403264d622a0e7930d4f4029997bdfc2bf598ea74e" exitCode=0 Oct 11 08:30:02 crc kubenswrapper[5016]: I1011 08:30:02.115184 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336190-dhjp6" event={"ID":"393c7fe6-f77e-45c2-bd5c-c3e762983abd","Type":"ContainerDied","Data":"1218b25b36ee6b53a71dca403264d622a0e7930d4f4029997bdfc2bf598ea74e"} Oct 11 08:30:03 crc kubenswrapper[5016]: I1011 08:30:03.535474 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336190-dhjp6" Oct 11 08:30:03 crc kubenswrapper[5016]: I1011 08:30:03.659633 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/393c7fe6-f77e-45c2-bd5c-c3e762983abd-secret-volume\") pod \"393c7fe6-f77e-45c2-bd5c-c3e762983abd\" (UID: \"393c7fe6-f77e-45c2-bd5c-c3e762983abd\") " Oct 11 08:30:03 crc kubenswrapper[5016]: I1011 08:30:03.659855 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg4j5\" (UniqueName: \"kubernetes.io/projected/393c7fe6-f77e-45c2-bd5c-c3e762983abd-kube-api-access-dg4j5\") pod \"393c7fe6-f77e-45c2-bd5c-c3e762983abd\" (UID: \"393c7fe6-f77e-45c2-bd5c-c3e762983abd\") " Oct 11 08:30:03 crc kubenswrapper[5016]: I1011 08:30:03.659959 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/393c7fe6-f77e-45c2-bd5c-c3e762983abd-config-volume\") pod \"393c7fe6-f77e-45c2-bd5c-c3e762983abd\" (UID: \"393c7fe6-f77e-45c2-bd5c-c3e762983abd\") " Oct 11 08:30:03 crc kubenswrapper[5016]: I1011 08:30:03.661520 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/393c7fe6-f77e-45c2-bd5c-c3e762983abd-config-volume" (OuterVolumeSpecName: "config-volume") pod "393c7fe6-f77e-45c2-bd5c-c3e762983abd" (UID: "393c7fe6-f77e-45c2-bd5c-c3e762983abd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 08:30:03 crc kubenswrapper[5016]: I1011 08:30:03.668751 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/393c7fe6-f77e-45c2-bd5c-c3e762983abd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "393c7fe6-f77e-45c2-bd5c-c3e762983abd" (UID: "393c7fe6-f77e-45c2-bd5c-c3e762983abd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:30:03 crc kubenswrapper[5016]: I1011 08:30:03.669398 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/393c7fe6-f77e-45c2-bd5c-c3e762983abd-kube-api-access-dg4j5" (OuterVolumeSpecName: "kube-api-access-dg4j5") pod "393c7fe6-f77e-45c2-bd5c-c3e762983abd" (UID: "393c7fe6-f77e-45c2-bd5c-c3e762983abd"). InnerVolumeSpecName "kube-api-access-dg4j5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:30:03 crc kubenswrapper[5016]: I1011 08:30:03.763073 5016 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/393c7fe6-f77e-45c2-bd5c-c3e762983abd-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 11 08:30:03 crc kubenswrapper[5016]: I1011 08:30:03.763117 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg4j5\" (UniqueName: \"kubernetes.io/projected/393c7fe6-f77e-45c2-bd5c-c3e762983abd-kube-api-access-dg4j5\") on node \"crc\" DevicePath \"\"" Oct 11 08:30:03 crc kubenswrapper[5016]: I1011 08:30:03.763147 5016 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/393c7fe6-f77e-45c2-bd5c-c3e762983abd-config-volume\") on node \"crc\" DevicePath \"\"" Oct 11 08:30:04 crc kubenswrapper[5016]: I1011 08:30:04.142259 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336190-dhjp6" event={"ID":"393c7fe6-f77e-45c2-bd5c-c3e762983abd","Type":"ContainerDied","Data":"c2ea65dc318070b7669d400d543ac4d3f79a4f92620f01f9ad346fdf4216bb5f"} Oct 11 08:30:04 crc kubenswrapper[5016]: I1011 08:30:04.142695 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2ea65dc318070b7669d400d543ac4d3f79a4f92620f01f9ad346fdf4216bb5f" Oct 11 08:30:04 crc kubenswrapper[5016]: I1011 08:30:04.142373 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336190-dhjp6" Oct 11 08:30:04 crc kubenswrapper[5016]: I1011 08:30:04.645249 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336145-7wvsc"] Oct 11 08:30:04 crc kubenswrapper[5016]: I1011 08:30:04.657290 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336145-7wvsc"] Oct 11 08:30:05 crc kubenswrapper[5016]: I1011 08:30:05.147060 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e89eee5-1535-47d4-bd90-c25541ec3e21" path="/var/lib/kubelet/pods/9e89eee5-1535-47d4-bd90-c25541ec3e21/volumes" Oct 11 08:30:09 crc kubenswrapper[5016]: I1011 08:30:09.133447 5016 scope.go:117] "RemoveContainer" containerID="58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72" Oct 11 08:30:09 crc kubenswrapper[5016]: E1011 08:30:09.134476 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:30:20 crc kubenswrapper[5016]: I1011 08:30:20.133192 5016 scope.go:117] "RemoveContainer" containerID="58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72" Oct 11 08:30:20 crc kubenswrapper[5016]: E1011 08:30:20.134035 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:30:31 crc kubenswrapper[5016]: I1011 08:30:31.133594 5016 scope.go:117] "RemoveContainer" containerID="58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72" Oct 11 08:30:31 crc kubenswrapper[5016]: E1011 08:30:31.134818 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:30:41 crc kubenswrapper[5016]: I1011 08:30:41.124469 5016 scope.go:117] "RemoveContainer" containerID="41367f6de4591891fbad4112a9fb0a1cc57dfd25e0604cc6703447b16f28d65b" Oct 11 08:30:44 crc kubenswrapper[5016]: I1011 08:30:44.134258 5016 scope.go:117] "RemoveContainer" containerID="58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72" Oct 11 08:30:44 crc kubenswrapper[5016]: E1011 08:30:44.135635 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:30:57 crc kubenswrapper[5016]: I1011 08:30:57.134955 5016 scope.go:117] "RemoveContainer" containerID="58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72" Oct 11 08:30:57 crc kubenswrapper[5016]: E1011 08:30:57.136611 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:31:08 crc kubenswrapper[5016]: I1011 08:31:08.133846 5016 scope.go:117] "RemoveContainer" containerID="58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72" Oct 11 08:31:08 crc kubenswrapper[5016]: E1011 08:31:08.135346 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:31:21 crc kubenswrapper[5016]: I1011 08:31:21.134441 5016 scope.go:117] "RemoveContainer" containerID="58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72" Oct 11 08:31:21 crc kubenswrapper[5016]: E1011 08:31:21.136004 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:31:34 crc kubenswrapper[5016]: I1011 08:31:34.133963 5016 scope.go:117] "RemoveContainer" containerID="58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72" Oct 11 08:31:34 crc kubenswrapper[5016]: E1011 08:31:34.134938 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:31:45 crc kubenswrapper[5016]: I1011 08:31:45.135268 5016 scope.go:117] "RemoveContainer" containerID="58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72" Oct 11 08:31:45 crc kubenswrapper[5016]: E1011 08:31:45.136745 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:31:57 crc kubenswrapper[5016]: I1011 08:31:57.134773 5016 scope.go:117] "RemoveContainer" containerID="58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72" Oct 11 08:31:57 crc kubenswrapper[5016]: E1011 08:31:57.136296 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:31:58 crc kubenswrapper[5016]: I1011 08:31:58.327413 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rrpq8"] Oct 11 08:31:58 crc kubenswrapper[5016]: E1011 08:31:58.327974 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="393c7fe6-f77e-45c2-bd5c-c3e762983abd" containerName="collect-profiles" Oct 11 08:31:58 crc kubenswrapper[5016]: I1011 08:31:58.327991 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="393c7fe6-f77e-45c2-bd5c-c3e762983abd" containerName="collect-profiles" Oct 11 08:31:58 crc kubenswrapper[5016]: I1011 08:31:58.328296 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="393c7fe6-f77e-45c2-bd5c-c3e762983abd" containerName="collect-profiles" Oct 11 08:31:58 crc kubenswrapper[5016]: I1011 08:31:58.330557 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rrpq8" Oct 11 08:31:58 crc kubenswrapper[5016]: I1011 08:31:58.352297 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrpq8"] Oct 11 08:31:58 crc kubenswrapper[5016]: I1011 08:31:58.492192 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79f3d616-7844-4828-ace2-6c17aa054a81-utilities\") pod \"redhat-marketplace-rrpq8\" (UID: \"79f3d616-7844-4828-ace2-6c17aa054a81\") " pod="openshift-marketplace/redhat-marketplace-rrpq8" Oct 11 08:31:58 crc kubenswrapper[5016]: I1011 08:31:58.492265 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h29r6\" (UniqueName: \"kubernetes.io/projected/79f3d616-7844-4828-ace2-6c17aa054a81-kube-api-access-h29r6\") pod \"redhat-marketplace-rrpq8\" (UID: \"79f3d616-7844-4828-ace2-6c17aa054a81\") " pod="openshift-marketplace/redhat-marketplace-rrpq8" Oct 11 08:31:58 crc kubenswrapper[5016]: I1011 08:31:58.492338 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79f3d616-7844-4828-ace2-6c17aa054a81-catalog-content\") pod \"redhat-marketplace-rrpq8\" (UID: \"79f3d616-7844-4828-ace2-6c17aa054a81\") " pod="openshift-marketplace/redhat-marketplace-rrpq8" Oct 11 08:31:58 crc kubenswrapper[5016]: I1011 08:31:58.594588 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79f3d616-7844-4828-ace2-6c17aa054a81-utilities\") pod \"redhat-marketplace-rrpq8\" (UID: \"79f3d616-7844-4828-ace2-6c17aa054a81\") " pod="openshift-marketplace/redhat-marketplace-rrpq8" Oct 11 08:31:58 crc kubenswrapper[5016]: I1011 08:31:58.594694 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h29r6\" (UniqueName: \"kubernetes.io/projected/79f3d616-7844-4828-ace2-6c17aa054a81-kube-api-access-h29r6\") pod \"redhat-marketplace-rrpq8\" (UID: \"79f3d616-7844-4828-ace2-6c17aa054a81\") " pod="openshift-marketplace/redhat-marketplace-rrpq8" Oct 11 08:31:58 crc kubenswrapper[5016]: I1011 08:31:58.594798 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79f3d616-7844-4828-ace2-6c17aa054a81-catalog-content\") pod \"redhat-marketplace-rrpq8\" (UID: \"79f3d616-7844-4828-ace2-6c17aa054a81\") " pod="openshift-marketplace/redhat-marketplace-rrpq8" Oct 11 08:31:58 crc kubenswrapper[5016]: I1011 08:31:58.595434 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79f3d616-7844-4828-ace2-6c17aa054a81-utilities\") pod \"redhat-marketplace-rrpq8\" (UID: \"79f3d616-7844-4828-ace2-6c17aa054a81\") " pod="openshift-marketplace/redhat-marketplace-rrpq8" Oct 11 08:31:58 crc kubenswrapper[5016]: I1011 08:31:58.595523 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79f3d616-7844-4828-ace2-6c17aa054a81-catalog-content\") pod \"redhat-marketplace-rrpq8\" (UID: \"79f3d616-7844-4828-ace2-6c17aa054a81\") " pod="openshift-marketplace/redhat-marketplace-rrpq8" Oct 11 08:31:58 crc kubenswrapper[5016]: I1011 08:31:58.630384 5016 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-h29r6\" (UniqueName: \"kubernetes.io/projected/79f3d616-7844-4828-ace2-6c17aa054a81-kube-api-access-h29r6\") pod \"redhat-marketplace-rrpq8\" (UID: \"79f3d616-7844-4828-ace2-6c17aa054a81\") " pod="openshift-marketplace/redhat-marketplace-rrpq8" Oct 11 08:31:58 crc kubenswrapper[5016]: I1011 08:31:58.701967 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rrpq8" Oct 11 08:31:59 crc kubenswrapper[5016]: I1011 08:31:59.213808 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrpq8"] Oct 11 08:31:59 crc kubenswrapper[5016]: W1011 08:31:59.228734 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79f3d616_7844_4828_ace2_6c17aa054a81.slice/crio-018ddbb1c2f8f65c7140434047b2b66859aa0ed037114807ef52de867cb82991 WatchSource:0}: Error finding container 018ddbb1c2f8f65c7140434047b2b66859aa0ed037114807ef52de867cb82991: Status 404 returned error can't find the container with id 018ddbb1c2f8f65c7140434047b2b66859aa0ed037114807ef52de867cb82991 Oct 11 08:31:59 crc kubenswrapper[5016]: I1011 08:31:59.460554 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrpq8" event={"ID":"79f3d616-7844-4828-ace2-6c17aa054a81","Type":"ContainerStarted","Data":"018ddbb1c2f8f65c7140434047b2b66859aa0ed037114807ef52de867cb82991"} Oct 11 08:32:00 crc kubenswrapper[5016]: I1011 08:32:00.490271 5016 generic.go:334] "Generic (PLEG): container finished" podID="79f3d616-7844-4828-ace2-6c17aa054a81" containerID="fd513d903844c03d87331f39e6627452395e06db374c73715b04488314099350" exitCode=0 Oct 11 08:32:00 crc kubenswrapper[5016]: I1011 08:32:00.492995 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrpq8" event={"ID":"79f3d616-7844-4828-ace2-6c17aa054a81","Type":"ContainerDied","Data":"fd513d903844c03d87331f39e6627452395e06db374c73715b04488314099350"} Oct 11 08:32:01 crc kubenswrapper[5016]: I1011 08:32:01.508808 5016 generic.go:334] "Generic (PLEG): container finished" podID="79f3d616-7844-4828-ace2-6c17aa054a81" containerID="adbc0f503c1016bed3f38fc6883c30969dc36f72e752c7284daf2850ded84514" exitCode=0 Oct 11 08:32:01 crc kubenswrapper[5016]: I1011 08:32:01.508891 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrpq8" event={"ID":"79f3d616-7844-4828-ace2-6c17aa054a81","Type":"ContainerDied","Data":"adbc0f503c1016bed3f38fc6883c30969dc36f72e752c7284daf2850ded84514"} Oct 11 08:32:02 crc kubenswrapper[5016]: I1011 08:32:02.527510 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrpq8" event={"ID":"79f3d616-7844-4828-ace2-6c17aa054a81","Type":"ContainerStarted","Data":"f2f955de73746c1bfbcd6478d3b6c294d44379e2066578dd778db8b2041b35be"} Oct 11 08:32:02 crc kubenswrapper[5016]: I1011 08:32:02.563240 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rrpq8" podStartSLOduration=3.1383287109999998 podStartE2EDuration="4.563203863s" podCreationTimestamp="2025-10-11 08:31:58 +0000 UTC" firstStartedPulling="2025-10-11 08:32:00.497424688 +0000 UTC m=+3108.397880674" lastFinishedPulling="2025-10-11 08:32:01.92229985 +0000 UTC m=+3109.822755826" observedRunningTime="2025-10-11 08:32:02.551715248 +0000 UTC m=+3110.452171264" 
watchObservedRunningTime="2025-10-11 08:32:02.563203863 +0000 UTC m=+3110.463659849" Oct 11 08:32:08 crc kubenswrapper[5016]: I1011 08:32:08.703609 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rrpq8" Oct 11 08:32:08 crc kubenswrapper[5016]: I1011 08:32:08.704844 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rrpq8" Oct 11 08:32:08 crc kubenswrapper[5016]: I1011 08:32:08.796236 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rrpq8" Oct 11 08:32:09 crc kubenswrapper[5016]: I1011 08:32:09.692192 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rrpq8" Oct 11 08:32:09 crc kubenswrapper[5016]: I1011 08:32:09.775081 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrpq8"] Oct 11 08:32:11 crc kubenswrapper[5016]: I1011 08:32:11.633232 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rrpq8" podUID="79f3d616-7844-4828-ace2-6c17aa054a81" containerName="registry-server" containerID="cri-o://f2f955de73746c1bfbcd6478d3b6c294d44379e2066578dd778db8b2041b35be" gracePeriod=2 Oct 11 08:32:12 crc kubenswrapper[5016]: I1011 08:32:12.137194 5016 scope.go:117] "RemoveContainer" containerID="58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72" Oct 11 08:32:12 crc kubenswrapper[5016]: E1011 08:32:12.137573 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:32:12 crc kubenswrapper[5016]: I1011 08:32:12.207147 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rrpq8" Oct 11 08:32:12 crc kubenswrapper[5016]: I1011 08:32:12.365632 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79f3d616-7844-4828-ace2-6c17aa054a81-utilities\") pod \"79f3d616-7844-4828-ace2-6c17aa054a81\" (UID: \"79f3d616-7844-4828-ace2-6c17aa054a81\") " Oct 11 08:32:12 crc kubenswrapper[5016]: I1011 08:32:12.366009 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h29r6\" (UniqueName: \"kubernetes.io/projected/79f3d616-7844-4828-ace2-6c17aa054a81-kube-api-access-h29r6\") pod \"79f3d616-7844-4828-ace2-6c17aa054a81\" (UID: \"79f3d616-7844-4828-ace2-6c17aa054a81\") " Oct 11 08:32:12 crc kubenswrapper[5016]: I1011 08:32:12.366056 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79f3d616-7844-4828-ace2-6c17aa054a81-catalog-content\") pod \"79f3d616-7844-4828-ace2-6c17aa054a81\" (UID: \"79f3d616-7844-4828-ace2-6c17aa054a81\") " Oct 11 08:32:12 crc kubenswrapper[5016]: I1011 08:32:12.366753 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79f3d616-7844-4828-ace2-6c17aa054a81-utilities" (OuterVolumeSpecName: "utilities") pod "79f3d616-7844-4828-ace2-6c17aa054a81" (UID: "79f3d616-7844-4828-ace2-6c17aa054a81"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:32:12 crc kubenswrapper[5016]: I1011 08:32:12.367000 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79f3d616-7844-4828-ace2-6c17aa054a81-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 08:32:12 crc kubenswrapper[5016]: I1011 08:32:12.380058 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79f3d616-7844-4828-ace2-6c17aa054a81-kube-api-access-h29r6" (OuterVolumeSpecName: "kube-api-access-h29r6") pod "79f3d616-7844-4828-ace2-6c17aa054a81" (UID: "79f3d616-7844-4828-ace2-6c17aa054a81"). InnerVolumeSpecName "kube-api-access-h29r6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:32:12 crc kubenswrapper[5016]: I1011 08:32:12.398184 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79f3d616-7844-4828-ace2-6c17aa054a81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79f3d616-7844-4828-ace2-6c17aa054a81" (UID: "79f3d616-7844-4828-ace2-6c17aa054a81"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:32:12 crc kubenswrapper[5016]: I1011 08:32:12.469970 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h29r6\" (UniqueName: \"kubernetes.io/projected/79f3d616-7844-4828-ace2-6c17aa054a81-kube-api-access-h29r6\") on node \"crc\" DevicePath \"\"" Oct 11 08:32:12 crc kubenswrapper[5016]: I1011 08:32:12.470075 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79f3d616-7844-4828-ace2-6c17aa054a81-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 08:32:12 crc kubenswrapper[5016]: I1011 08:32:12.649305 5016 generic.go:334] "Generic (PLEG): container finished" podID="79f3d616-7844-4828-ace2-6c17aa054a81" containerID="f2f955de73746c1bfbcd6478d3b6c294d44379e2066578dd778db8b2041b35be" exitCode=0 Oct 11 08:32:12 crc kubenswrapper[5016]: I1011 08:32:12.649416 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrpq8" event={"ID":"79f3d616-7844-4828-ace2-6c17aa054a81","Type":"ContainerDied","Data":"f2f955de73746c1bfbcd6478d3b6c294d44379e2066578dd778db8b2041b35be"} Oct 11 08:32:12 crc kubenswrapper[5016]: I1011 08:32:12.649547 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrpq8" event={"ID":"79f3d616-7844-4828-ace2-6c17aa054a81","Type":"ContainerDied","Data":"018ddbb1c2f8f65c7140434047b2b66859aa0ed037114807ef52de867cb82991"} Oct 11 08:32:12 crc kubenswrapper[5016]: I1011 08:32:12.649583 5016 scope.go:117] "RemoveContainer" containerID="f2f955de73746c1bfbcd6478d3b6c294d44379e2066578dd778db8b2041b35be" Oct 11 08:32:12 crc kubenswrapper[5016]: I1011 08:32:12.649601 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rrpq8" Oct 11 08:32:12 crc kubenswrapper[5016]: I1011 08:32:12.690255 5016 scope.go:117] "RemoveContainer" containerID="adbc0f503c1016bed3f38fc6883c30969dc36f72e752c7284daf2850ded84514" Oct 11 08:32:12 crc kubenswrapper[5016]: I1011 08:32:12.736894 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrpq8"] Oct 11 08:32:12 crc kubenswrapper[5016]: I1011 08:32:12.743727 5016 scope.go:117] "RemoveContainer" containerID="fd513d903844c03d87331f39e6627452395e06db374c73715b04488314099350" Oct 11 08:32:12 crc kubenswrapper[5016]: I1011 08:32:12.759401 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrpq8"] Oct 11 08:32:12 crc kubenswrapper[5016]: I1011 08:32:12.780425 5016 scope.go:117] "RemoveContainer" containerID="f2f955de73746c1bfbcd6478d3b6c294d44379e2066578dd778db8b2041b35be" Oct 11 08:32:12 crc kubenswrapper[5016]: E1011 08:32:12.781079 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2f955de73746c1bfbcd6478d3b6c294d44379e2066578dd778db8b2041b35be\": container with ID starting with f2f955de73746c1bfbcd6478d3b6c294d44379e2066578dd778db8b2041b35be not found: ID does not exist" containerID="f2f955de73746c1bfbcd6478d3b6c294d44379e2066578dd778db8b2041b35be" Oct 11 08:32:12 crc kubenswrapper[5016]: I1011 08:32:12.781299 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2f955de73746c1bfbcd6478d3b6c294d44379e2066578dd778db8b2041b35be"} err="failed to get container status \"f2f955de73746c1bfbcd6478d3b6c294d44379e2066578dd778db8b2041b35be\": rpc error: code = NotFound desc = could not find container \"f2f955de73746c1bfbcd6478d3b6c294d44379e2066578dd778db8b2041b35be\": container with ID starting with f2f955de73746c1bfbcd6478d3b6c294d44379e2066578dd778db8b2041b35be not found: ID does not exist" Oct 11 08:32:12 crc kubenswrapper[5016]: I1011 08:32:12.781441 5016 scope.go:117] "RemoveContainer" containerID="adbc0f503c1016bed3f38fc6883c30969dc36f72e752c7284daf2850ded84514" Oct 11 08:32:12 crc kubenswrapper[5016]: E1011 08:32:12.782095 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adbc0f503c1016bed3f38fc6883c30969dc36f72e752c7284daf2850ded84514\": container with ID starting with adbc0f503c1016bed3f38fc6883c30969dc36f72e752c7284daf2850ded84514 not found: ID does not exist" containerID="adbc0f503c1016bed3f38fc6883c30969dc36f72e752c7284daf2850ded84514" Oct 11 08:32:12 crc kubenswrapper[5016]: I1011 08:32:12.782194 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adbc0f503c1016bed3f38fc6883c30969dc36f72e752c7284daf2850ded84514"} err="failed to get container status \"adbc0f503c1016bed3f38fc6883c30969dc36f72e752c7284daf2850ded84514\": rpc error: code = NotFound desc = could not find container \"adbc0f503c1016bed3f38fc6883c30969dc36f72e752c7284daf2850ded84514\": container with ID starting with adbc0f503c1016bed3f38fc6883c30969dc36f72e752c7284daf2850ded84514 not found: ID does not exist" Oct 11 08:32:12 crc kubenswrapper[5016]: I1011 08:32:12.782231 5016 scope.go:117] "RemoveContainer" containerID="fd513d903844c03d87331f39e6627452395e06db374c73715b04488314099350" Oct 11 08:32:12 crc kubenswrapper[5016]: E1011 08:32:12.782919 5016 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fd513d903844c03d87331f39e6627452395e06db374c73715b04488314099350\": container with ID starting with fd513d903844c03d87331f39e6627452395e06db374c73715b04488314099350 not found: ID does not exist" containerID="fd513d903844c03d87331f39e6627452395e06db374c73715b04488314099350" Oct 11 08:32:12 crc kubenswrapper[5016]: I1011 08:32:12.783046 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd513d903844c03d87331f39e6627452395e06db374c73715b04488314099350"} err="failed to get container status \"fd513d903844c03d87331f39e6627452395e06db374c73715b04488314099350\": rpc error: code = NotFound desc = could not find container \"fd513d903844c03d87331f39e6627452395e06db374c73715b04488314099350\": container with ID starting with fd513d903844c03d87331f39e6627452395e06db374c73715b04488314099350 not found: ID does not exist" Oct 11 08:32:13 crc kubenswrapper[5016]: I1011 08:32:13.157607 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79f3d616-7844-4828-ace2-6c17aa054a81" path="/var/lib/kubelet/pods/79f3d616-7844-4828-ace2-6c17aa054a81/volumes" Oct 11 08:32:24 crc kubenswrapper[5016]: I1011 08:32:24.133146 5016 scope.go:117] "RemoveContainer" containerID="58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72" Oct 11 08:32:24 crc kubenswrapper[5016]: E1011 08:32:24.134835 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:32:36 crc kubenswrapper[5016]: I1011 08:32:36.401732 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4lcdv"] Oct 11 08:32:36 crc kubenswrapper[5016]: E1011 08:32:36.403224 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79f3d616-7844-4828-ace2-6c17aa054a81" containerName="extract-content" Oct 11 08:32:36 crc kubenswrapper[5016]: I1011 08:32:36.403252 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="79f3d616-7844-4828-ace2-6c17aa054a81" containerName="extract-content" Oct 11 08:32:36 crc kubenswrapper[5016]: E1011 08:32:36.403307 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79f3d616-7844-4828-ace2-6c17aa054a81" containerName="extract-utilities" Oct 11 08:32:36 crc kubenswrapper[5016]: I1011 08:32:36.403325 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="79f3d616-7844-4828-ace2-6c17aa054a81" containerName="extract-utilities" Oct 11 08:32:36 crc kubenswrapper[5016]: E1011 08:32:36.403355 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79f3d616-7844-4828-ace2-6c17aa054a81" containerName="registry-server" Oct 11 08:32:36 crc kubenswrapper[5016]: I1011 08:32:36.403368 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="79f3d616-7844-4828-ace2-6c17aa054a81" containerName="registry-server" Oct 11 08:32:36 crc kubenswrapper[5016]: I1011 08:32:36.403752 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="79f3d616-7844-4828-ace2-6c17aa054a81" containerName="registry-server" Oct 11 08:32:36 crc kubenswrapper[5016]: I1011 08:32:36.406135 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4lcdv" Oct 11 08:32:36 crc kubenswrapper[5016]: I1011 08:32:36.462897 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4lcdv"] Oct 11 08:32:36 crc kubenswrapper[5016]: I1011 08:32:36.499234 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69qxb\" (UniqueName: \"kubernetes.io/projected/0c8fcb18-2c15-4a89-802b-a8770fae003e-kube-api-access-69qxb\") pod \"certified-operators-4lcdv\" (UID: \"0c8fcb18-2c15-4a89-802b-a8770fae003e\") " pod="openshift-marketplace/certified-operators-4lcdv" Oct 11 08:32:36 crc kubenswrapper[5016]: I1011 08:32:36.499293 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c8fcb18-2c15-4a89-802b-a8770fae003e-utilities\") pod \"certified-operators-4lcdv\" (UID: \"0c8fcb18-2c15-4a89-802b-a8770fae003e\") " pod="openshift-marketplace/certified-operators-4lcdv" Oct 11 08:32:36 crc kubenswrapper[5016]: I1011 08:32:36.499392 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c8fcb18-2c15-4a89-802b-a8770fae003e-catalog-content\") pod \"certified-operators-4lcdv\" (UID: \"0c8fcb18-2c15-4a89-802b-a8770fae003e\") " pod="openshift-marketplace/certified-operators-4lcdv" Oct 11 08:32:36 crc kubenswrapper[5016]: I1011 08:32:36.601175 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c8fcb18-2c15-4a89-802b-a8770fae003e-catalog-content\") pod \"certified-operators-4lcdv\" (UID: \"0c8fcb18-2c15-4a89-802b-a8770fae003e\") " pod="openshift-marketplace/certified-operators-4lcdv" Oct 11 08:32:36 crc kubenswrapper[5016]: I1011 08:32:36.601264 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69qxb\" (UniqueName: \"kubernetes.io/projected/0c8fcb18-2c15-4a89-802b-a8770fae003e-kube-api-access-69qxb\") pod \"certified-operators-4lcdv\" (UID: \"0c8fcb18-2c15-4a89-802b-a8770fae003e\") " pod="openshift-marketplace/certified-operators-4lcdv" Oct 11 08:32:36 crc kubenswrapper[5016]: I1011 08:32:36.601297 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c8fcb18-2c15-4a89-802b-a8770fae003e-utilities\") pod \"certified-operators-4lcdv\" (UID: \"0c8fcb18-2c15-4a89-802b-a8770fae003e\") " pod="openshift-marketplace/certified-operators-4lcdv" Oct 11 08:32:36 crc kubenswrapper[5016]: I1011 08:32:36.601981 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c8fcb18-2c15-4a89-802b-a8770fae003e-utilities\") pod \"certified-operators-4lcdv\" (UID: \"0c8fcb18-2c15-4a89-802b-a8770fae003e\") " pod="openshift-marketplace/certified-operators-4lcdv" Oct 11 08:32:36 crc kubenswrapper[5016]: I1011 08:32:36.602252 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c8fcb18-2c15-4a89-802b-a8770fae003e-catalog-content\") pod \"certified-operators-4lcdv\" (UID: \"0c8fcb18-2c15-4a89-802b-a8770fae003e\") " pod="openshift-marketplace/certified-operators-4lcdv" Oct 11 08:32:36 crc kubenswrapper[5016]: I1011 08:32:36.629437 5016 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-69qxb\" (UniqueName: \"kubernetes.io/projected/0c8fcb18-2c15-4a89-802b-a8770fae003e-kube-api-access-69qxb\") pod \"certified-operators-4lcdv\" (UID: \"0c8fcb18-2c15-4a89-802b-a8770fae003e\") " pod="openshift-marketplace/certified-operators-4lcdv" Oct 11 08:32:36 crc kubenswrapper[5016]: I1011 08:32:36.751985 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4lcdv" Oct 11 08:32:37 crc kubenswrapper[5016]: I1011 08:32:37.279739 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4lcdv"] Oct 11 08:32:38 crc kubenswrapper[5016]: I1011 08:32:38.015736 5016 generic.go:334] "Generic (PLEG): container finished" podID="0c8fcb18-2c15-4a89-802b-a8770fae003e" containerID="f87439d3b3eefd6298e8e6dcdfe2f98eae224bab9b1fde33c1300bd93d36cacd" exitCode=0 Oct 11 08:32:38 crc kubenswrapper[5016]: I1011 08:32:38.015983 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4lcdv" event={"ID":"0c8fcb18-2c15-4a89-802b-a8770fae003e","Type":"ContainerDied","Data":"f87439d3b3eefd6298e8e6dcdfe2f98eae224bab9b1fde33c1300bd93d36cacd"} Oct 11 08:32:38 crc kubenswrapper[5016]: I1011 08:32:38.016405 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4lcdv" event={"ID":"0c8fcb18-2c15-4a89-802b-a8770fae003e","Type":"ContainerStarted","Data":"7e9c41895125f257fb4f74bf4698fc45a444bc3f2caa2ef7e28939d6556a795d"} Oct 11 08:32:39 crc kubenswrapper[5016]: I1011 08:32:39.037973 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4lcdv" event={"ID":"0c8fcb18-2c15-4a89-802b-a8770fae003e","Type":"ContainerStarted","Data":"886dfddf76ac1bbd7bac4eade583344607ed726c20fcec591f4ae97d9d33116f"} Oct 11 08:32:39 crc kubenswrapper[5016]: I1011 08:32:39.134575 5016 scope.go:117] "RemoveContainer" containerID="58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72" Oct 11 08:32:40 crc kubenswrapper[5016]: I1011 08:32:40.061207 5016 generic.go:334] "Generic (PLEG): container finished" podID="0c8fcb18-2c15-4a89-802b-a8770fae003e" containerID="886dfddf76ac1bbd7bac4eade583344607ed726c20fcec591f4ae97d9d33116f" exitCode=0 Oct 11 08:32:40 crc kubenswrapper[5016]: I1011 08:32:40.061274 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4lcdv" event={"ID":"0c8fcb18-2c15-4a89-802b-a8770fae003e","Type":"ContainerDied","Data":"886dfddf76ac1bbd7bac4eade583344607ed726c20fcec591f4ae97d9d33116f"} Oct 11 08:32:40 crc kubenswrapper[5016]: I1011 08:32:40.067435 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"62b966693cde380525833f9965a580c59298058b4e14e614272a3bd58f638ea3"} Oct 11 08:32:41 crc kubenswrapper[5016]: I1011 08:32:41.091889 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4lcdv" event={"ID":"0c8fcb18-2c15-4a89-802b-a8770fae003e","Type":"ContainerStarted","Data":"27fb8d4d61bc2618af42f101ce75ef40f2f60c7d3de816d56a561cd0b26db571"} Oct 11 08:32:41 crc kubenswrapper[5016]: I1011 08:32:41.135461 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4lcdv" podStartSLOduration=2.5786350000000002 
podStartE2EDuration="5.135415737s" podCreationTimestamp="2025-10-11 08:32:36 +0000 UTC" firstStartedPulling="2025-10-11 08:32:38.017773608 +0000 UTC m=+3145.918229554" lastFinishedPulling="2025-10-11 08:32:40.574554315 +0000 UTC m=+3148.475010291" observedRunningTime="2025-10-11 08:32:41.11473046 +0000 UTC m=+3149.015186426" watchObservedRunningTime="2025-10-11 08:32:41.135415737 +0000 UTC m=+3149.035871723" Oct 11 08:32:44 crc kubenswrapper[5016]: I1011 08:32:44.130214 5016 generic.go:334] "Generic (PLEG): container finished" podID="3920c74b-a214-4f41-975a-5ec0db3c3212" containerID="276ec5ed175164540cbc509dbbace1deda13bf536d6685d060d1dcfe818d76e9" exitCode=0 Oct 11 08:32:44 crc kubenswrapper[5016]: I1011 08:32:44.131169 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" event={"ID":"3920c74b-a214-4f41-975a-5ec0db3c3212","Type":"ContainerDied","Data":"276ec5ed175164540cbc509dbbace1deda13bf536d6685d060d1dcfe818d76e9"} Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.673775 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.833102 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-custom-ceph-combined-ca-bundle\") pod \"3920c74b-a214-4f41-975a-5ec0db3c3212\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.833547 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-inventory\") pod \"3920c74b-a214-4f41-975a-5ec0db3c3212\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.833717 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-migration-ssh-key-1\") pod \"3920c74b-a214-4f41-975a-5ec0db3c3212\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.833945 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxlq4\" (UniqueName: \"kubernetes.io/projected/3920c74b-a214-4f41-975a-5ec0db3c3212-kube-api-access-jxlq4\") pod \"3920c74b-a214-4f41-975a-5ec0db3c3212\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.834227 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-ssh-key\") pod \"3920c74b-a214-4f41-975a-5ec0db3c3212\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.834432 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/3920c74b-a214-4f41-975a-5ec0db3c3212-ceph-nova-0\") pod \"3920c74b-a214-4f41-975a-5ec0db3c3212\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.834579 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-cell1-compute-config-1\") pod \"3920c74b-a214-4f41-975a-5ec0db3c3212\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.834751 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-ceph\") pod \"3920c74b-a214-4f41-975a-5ec0db3c3212\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.834925 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-migration-ssh-key-0\") pod \"3920c74b-a214-4f41-975a-5ec0db3c3212\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.835062 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-extra-config-0\") pod \"3920c74b-a214-4f41-975a-5ec0db3c3212\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.835697 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-cell1-compute-config-0\") pod \"3920c74b-a214-4f41-975a-5ec0db3c3212\" (UID: \"3920c74b-a214-4f41-975a-5ec0db3c3212\") " Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.840821 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "3920c74b-a214-4f41-975a-5ec0db3c3212" (UID: "3920c74b-a214-4f41-975a-5ec0db3c3212"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.843795 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-ceph" (OuterVolumeSpecName: "ceph") pod "3920c74b-a214-4f41-975a-5ec0db3c3212" (UID: "3920c74b-a214-4f41-975a-5ec0db3c3212"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.845149 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3920c74b-a214-4f41-975a-5ec0db3c3212-kube-api-access-jxlq4" (OuterVolumeSpecName: "kube-api-access-jxlq4") pod "3920c74b-a214-4f41-975a-5ec0db3c3212" (UID: "3920c74b-a214-4f41-975a-5ec0db3c3212"). InnerVolumeSpecName "kube-api-access-jxlq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.875167 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "3920c74b-a214-4f41-975a-5ec0db3c3212" (UID: "3920c74b-a214-4f41-975a-5ec0db3c3212"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.876808 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3920c74b-a214-4f41-975a-5ec0db3c3212-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "3920c74b-a214-4f41-975a-5ec0db3c3212" (UID: "3920c74b-a214-4f41-975a-5ec0db3c3212"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.890854 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "3920c74b-a214-4f41-975a-5ec0db3c3212" (UID: "3920c74b-a214-4f41-975a-5ec0db3c3212"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.891114 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "3920c74b-a214-4f41-975a-5ec0db3c3212" (UID: "3920c74b-a214-4f41-975a-5ec0db3c3212"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.892825 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3920c74b-a214-4f41-975a-5ec0db3c3212" (UID: "3920c74b-a214-4f41-975a-5ec0db3c3212"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.899764 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-inventory" (OuterVolumeSpecName: "inventory") pod "3920c74b-a214-4f41-975a-5ec0db3c3212" (UID: "3920c74b-a214-4f41-975a-5ec0db3c3212"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.900376 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "3920c74b-a214-4f41-975a-5ec0db3c3212" (UID: "3920c74b-a214-4f41-975a-5ec0db3c3212"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.902405 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "3920c74b-a214-4f41-975a-5ec0db3c3212" (UID: "3920c74b-a214-4f41-975a-5ec0db3c3212"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.939131 5016 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.939178 5016 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/3920c74b-a214-4f41-975a-5ec0db3c3212-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.939199 5016 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.939215 5016 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-ceph\") on node \"crc\" DevicePath \"\"" Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.939228 5016 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.939240 5016 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.939253 5016 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.939266 5016 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.939280 5016 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-inventory\") on node \"crc\" DevicePath \"\"" Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.939292 5016 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3920c74b-a214-4f41-975a-5ec0db3c3212-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Oct 11 08:32:45 crc kubenswrapper[5016]: I1011 08:32:45.939304 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxlq4\" (UniqueName: \"kubernetes.io/projected/3920c74b-a214-4f41-975a-5ec0db3c3212-kube-api-access-jxlq4\") on node \"crc\" DevicePath \"\"" Oct 11 08:32:46 crc kubenswrapper[5016]: I1011 08:32:46.158100 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" event={"ID":"3920c74b-a214-4f41-975a-5ec0db3c3212","Type":"ContainerDied","Data":"2e58f16462a88d9f26dc3f6738788c30e1e222b49f3637c106206bc5bd5b773d"} Oct 11 08:32:46 crc kubenswrapper[5016]: I1011 08:32:46.158158 5016 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="2e58f16462a88d9f26dc3f6738788c30e1e222b49f3637c106206bc5bd5b773d" Oct 11 08:32:46 crc kubenswrapper[5016]: I1011 08:32:46.158177 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p" Oct 11 08:32:46 crc kubenswrapper[5016]: I1011 08:32:46.752430 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4lcdv" Oct 11 08:32:46 crc kubenswrapper[5016]: I1011 08:32:46.752910 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4lcdv" Oct 11 08:32:46 crc kubenswrapper[5016]: I1011 08:32:46.815145 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4lcdv" Oct 11 08:32:47 crc kubenswrapper[5016]: I1011 08:32:47.229611 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4lcdv" Oct 11 08:32:47 crc kubenswrapper[5016]: I1011 08:32:47.292978 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4lcdv"] Oct 11 08:32:49 crc kubenswrapper[5016]: I1011 08:32:49.191016 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4lcdv" podUID="0c8fcb18-2c15-4a89-802b-a8770fae003e" containerName="registry-server" containerID="cri-o://27fb8d4d61bc2618af42f101ce75ef40f2f60c7d3de816d56a561cd0b26db571" gracePeriod=2 Oct 11 08:32:49 crc kubenswrapper[5016]: I1011 08:32:49.762041 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4lcdv" Oct 11 08:32:49 crc kubenswrapper[5016]: I1011 08:32:49.936534 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c8fcb18-2c15-4a89-802b-a8770fae003e-catalog-content\") pod \"0c8fcb18-2c15-4a89-802b-a8770fae003e\" (UID: \"0c8fcb18-2c15-4a89-802b-a8770fae003e\") " Oct 11 08:32:49 crc kubenswrapper[5016]: I1011 08:32:49.937354 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69qxb\" (UniqueName: \"kubernetes.io/projected/0c8fcb18-2c15-4a89-802b-a8770fae003e-kube-api-access-69qxb\") pod \"0c8fcb18-2c15-4a89-802b-a8770fae003e\" (UID: \"0c8fcb18-2c15-4a89-802b-a8770fae003e\") " Oct 11 08:32:49 crc kubenswrapper[5016]: I1011 08:32:49.937439 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c8fcb18-2c15-4a89-802b-a8770fae003e-utilities\") pod \"0c8fcb18-2c15-4a89-802b-a8770fae003e\" (UID: \"0c8fcb18-2c15-4a89-802b-a8770fae003e\") " Oct 11 08:32:49 crc kubenswrapper[5016]: I1011 08:32:49.939300 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c8fcb18-2c15-4a89-802b-a8770fae003e-utilities" (OuterVolumeSpecName: "utilities") pod "0c8fcb18-2c15-4a89-802b-a8770fae003e" (UID: "0c8fcb18-2c15-4a89-802b-a8770fae003e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:32:49 crc kubenswrapper[5016]: I1011 08:32:49.948443 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c8fcb18-2c15-4a89-802b-a8770fae003e-kube-api-access-69qxb" (OuterVolumeSpecName: "kube-api-access-69qxb") pod "0c8fcb18-2c15-4a89-802b-a8770fae003e" (UID: "0c8fcb18-2c15-4a89-802b-a8770fae003e"). InnerVolumeSpecName "kube-api-access-69qxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:32:50 crc kubenswrapper[5016]: I1011 08:32:50.009787 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c8fcb18-2c15-4a89-802b-a8770fae003e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c8fcb18-2c15-4a89-802b-a8770fae003e" (UID: "0c8fcb18-2c15-4a89-802b-a8770fae003e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:32:50 crc kubenswrapper[5016]: I1011 08:32:50.041365 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69qxb\" (UniqueName: \"kubernetes.io/projected/0c8fcb18-2c15-4a89-802b-a8770fae003e-kube-api-access-69qxb\") on node \"crc\" DevicePath \"\"" Oct 11 08:32:50 crc kubenswrapper[5016]: I1011 08:32:50.041427 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c8fcb18-2c15-4a89-802b-a8770fae003e-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 08:32:50 crc kubenswrapper[5016]: I1011 08:32:50.041441 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c8fcb18-2c15-4a89-802b-a8770fae003e-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 08:32:50 crc kubenswrapper[5016]: I1011 08:32:50.208612 5016 generic.go:334] "Generic (PLEG): container finished" podID="0c8fcb18-2c15-4a89-802b-a8770fae003e" containerID="27fb8d4d61bc2618af42f101ce75ef40f2f60c7d3de816d56a561cd0b26db571" exitCode=0 Oct 11 08:32:50 crc kubenswrapper[5016]: I1011 08:32:50.208718 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4lcdv" event={"ID":"0c8fcb18-2c15-4a89-802b-a8770fae003e","Type":"ContainerDied","Data":"27fb8d4d61bc2618af42f101ce75ef40f2f60c7d3de816d56a561cd0b26db571"} Oct 11 08:32:50 crc kubenswrapper[5016]: I1011 08:32:50.208767 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4lcdv" Oct 11 08:32:50 crc kubenswrapper[5016]: I1011 08:32:50.208796 5016 scope.go:117] "RemoveContainer" containerID="27fb8d4d61bc2618af42f101ce75ef40f2f60c7d3de816d56a561cd0b26db571" Oct 11 08:32:50 crc kubenswrapper[5016]: I1011 08:32:50.208774 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4lcdv" event={"ID":"0c8fcb18-2c15-4a89-802b-a8770fae003e","Type":"ContainerDied","Data":"7e9c41895125f257fb4f74bf4698fc45a444bc3f2caa2ef7e28939d6556a795d"} Oct 11 08:32:50 crc kubenswrapper[5016]: I1011 08:32:50.243935 5016 scope.go:117] "RemoveContainer" containerID="886dfddf76ac1bbd7bac4eade583344607ed726c20fcec591f4ae97d9d33116f" Oct 11 08:32:50 crc kubenswrapper[5016]: I1011 08:32:50.270949 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4lcdv"] Oct 11 08:32:50 crc kubenswrapper[5016]: I1011 08:32:50.279951 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4lcdv"] Oct 11 08:32:50 crc kubenswrapper[5016]: I1011 08:32:50.288884 5016 scope.go:117] "RemoveContainer" containerID="f87439d3b3eefd6298e8e6dcdfe2f98eae224bab9b1fde33c1300bd93d36cacd" Oct 11 08:32:50 crc kubenswrapper[5016]: I1011 08:32:50.340890 5016 scope.go:117] "RemoveContainer" containerID="27fb8d4d61bc2618af42f101ce75ef40f2f60c7d3de816d56a561cd0b26db571" Oct 11 08:32:50 crc kubenswrapper[5016]: E1011 08:32:50.341837 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27fb8d4d61bc2618af42f101ce75ef40f2f60c7d3de816d56a561cd0b26db571\": container with ID starting with 27fb8d4d61bc2618af42f101ce75ef40f2f60c7d3de816d56a561cd0b26db571 not found: ID does not exist" containerID="27fb8d4d61bc2618af42f101ce75ef40f2f60c7d3de816d56a561cd0b26db571" Oct 11 08:32:50 crc kubenswrapper[5016]: I1011 08:32:50.341898 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27fb8d4d61bc2618af42f101ce75ef40f2f60c7d3de816d56a561cd0b26db571"} err="failed to get container status \"27fb8d4d61bc2618af42f101ce75ef40f2f60c7d3de816d56a561cd0b26db571\": rpc error: code = NotFound desc = could not find container \"27fb8d4d61bc2618af42f101ce75ef40f2f60c7d3de816d56a561cd0b26db571\": container with ID starting with 27fb8d4d61bc2618af42f101ce75ef40f2f60c7d3de816d56a561cd0b26db571 not found: ID does not exist" Oct 11 08:32:50 crc kubenswrapper[5016]: I1011 08:32:50.341932 5016 scope.go:117] "RemoveContainer" containerID="886dfddf76ac1bbd7bac4eade583344607ed726c20fcec591f4ae97d9d33116f" Oct 11 08:32:50 crc kubenswrapper[5016]: E1011 08:32:50.342692 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"886dfddf76ac1bbd7bac4eade583344607ed726c20fcec591f4ae97d9d33116f\": container with ID starting with 886dfddf76ac1bbd7bac4eade583344607ed726c20fcec591f4ae97d9d33116f not found: ID does not exist" containerID="886dfddf76ac1bbd7bac4eade583344607ed726c20fcec591f4ae97d9d33116f" Oct 11 08:32:50 crc kubenswrapper[5016]: I1011 08:32:50.342752 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"886dfddf76ac1bbd7bac4eade583344607ed726c20fcec591f4ae97d9d33116f"} err="failed to get container status \"886dfddf76ac1bbd7bac4eade583344607ed726c20fcec591f4ae97d9d33116f\": rpc error: code = NotFound desc = could not find 
container \"886dfddf76ac1bbd7bac4eade583344607ed726c20fcec591f4ae97d9d33116f\": container with ID starting with 886dfddf76ac1bbd7bac4eade583344607ed726c20fcec591f4ae97d9d33116f not found: ID does not exist" Oct 11 08:32:50 crc kubenswrapper[5016]: I1011 08:32:50.342787 5016 scope.go:117] "RemoveContainer" containerID="f87439d3b3eefd6298e8e6dcdfe2f98eae224bab9b1fde33c1300bd93d36cacd" Oct 11 08:32:50 crc kubenswrapper[5016]: E1011 08:32:50.343214 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f87439d3b3eefd6298e8e6dcdfe2f98eae224bab9b1fde33c1300bd93d36cacd\": container with ID starting with f87439d3b3eefd6298e8e6dcdfe2f98eae224bab9b1fde33c1300bd93d36cacd not found: ID does not exist" containerID="f87439d3b3eefd6298e8e6dcdfe2f98eae224bab9b1fde33c1300bd93d36cacd" Oct 11 08:32:50 crc kubenswrapper[5016]: I1011 08:32:50.343250 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f87439d3b3eefd6298e8e6dcdfe2f98eae224bab9b1fde33c1300bd93d36cacd"} err="failed to get container status \"f87439d3b3eefd6298e8e6dcdfe2f98eae224bab9b1fde33c1300bd93d36cacd\": rpc error: code = NotFound desc = could not find container \"f87439d3b3eefd6298e8e6dcdfe2f98eae224bab9b1fde33c1300bd93d36cacd\": container with ID starting with f87439d3b3eefd6298e8e6dcdfe2f98eae224bab9b1fde33c1300bd93d36cacd not found: ID does not exist" Oct 11 08:32:51 crc kubenswrapper[5016]: I1011 08:32:51.156788 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c8fcb18-2c15-4a89-802b-a8770fae003e" path="/var/lib/kubelet/pods/0c8fcb18-2c15-4a89-802b-a8770fae003e/volumes" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.228519 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Oct 11 08:33:02 crc kubenswrapper[5016]: E1011 08:33:02.229779 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8fcb18-2c15-4a89-802b-a8770fae003e" containerName="extract-utilities" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.229797 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8fcb18-2c15-4a89-802b-a8770fae003e" containerName="extract-utilities" Oct 11 08:33:02 crc kubenswrapper[5016]: E1011 08:33:02.229820 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3920c74b-a214-4f41-975a-5ec0db3c3212" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.229828 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="3920c74b-a214-4f41-975a-5ec0db3c3212" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Oct 11 08:33:02 crc kubenswrapper[5016]: E1011 08:33:02.229850 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8fcb18-2c15-4a89-802b-a8770fae003e" containerName="extract-content" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.229856 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8fcb18-2c15-4a89-802b-a8770fae003e" containerName="extract-content" Oct 11 08:33:02 crc kubenswrapper[5016]: E1011 08:33:02.229876 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c8fcb18-2c15-4a89-802b-a8770fae003e" containerName="registry-server" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.229882 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c8fcb18-2c15-4a89-802b-a8770fae003e" containerName="registry-server" Oct 11 08:33:02 crc 
kubenswrapper[5016]: I1011 08:33:02.230093 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="3920c74b-a214-4f41-975a-5ec0db3c3212" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.230120 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c8fcb18-2c15-4a89-802b-a8770fae003e" containerName="registry-server" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.231329 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.233905 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.236961 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.238559 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.240471 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.247789 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.248464 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.248514 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjwvj\" (UniqueName: \"kubernetes.io/projected/f928618b-f291-4249-a756-0636b1680e66-kube-api-access-jjwvj\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.248551 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.248603 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f928618b-f291-4249-a756-0636b1680e66-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.248620 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-run\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.249119 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/f928618b-f291-4249-a756-0636b1680e66-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.249188 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.249247 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.249272 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.249295 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.249336 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-dev\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.249361 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f928618b-f291-4249-a756-0636b1680e66-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.249382 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.249408 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f928618b-f291-4249-a756-0636b1680e66-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.249446 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-sys\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.249465 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f928618b-f291-4249-a756-0636b1680e66-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.256781 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.263808 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.352544 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2b53eb06-1432-4059-9705-ffc917af76f7-ceph\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.352597 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t4wl\" (UniqueName: \"kubernetes.io/projected/2b53eb06-1432-4059-9705-ffc917af76f7-kube-api-access-8t4wl\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.352628 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b53eb06-1432-4059-9705-ffc917af76f7-scripts\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.352670 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f928618b-f291-4249-a756-0636b1680e66-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.352709 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.352806 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-run\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.352832 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: 
I1011 08:33:02.352884 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.352906 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.352923 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-sys\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.352939 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-lib-modules\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.352963 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.352987 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-dev\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.352971 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.353062 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.353401 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.353445 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.353572 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-dev\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.353686 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f928618b-f291-4249-a756-0636b1680e66-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.353730 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.353683 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-dev\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.353805 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f928618b-f291-4249-a756-0636b1680e66-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.353973 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.353978 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.354063 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.354129 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-sys\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 
08:33:02.354161 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f928618b-f291-4249-a756-0636b1680e66-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.354248 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-sys\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.354257 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-etc-nvme\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.354324 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b53eb06-1432-4059-9705-ffc917af76f7-config-data\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.354382 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.354708 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.354812 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.354847 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b53eb06-1432-4059-9705-ffc917af76f7-config-data-custom\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.354927 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b53eb06-1432-4059-9705-ffc917af76f7-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.354959 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjwvj\" (UniqueName: 
\"kubernetes.io/projected/f928618b-f291-4249-a756-0636b1680e66-kube-api-access-jjwvj\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.354985 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.355043 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.355088 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f928618b-f291-4249-a756-0636b1680e66-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.355108 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-run\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.355226 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-run\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.355275 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f928618b-f291-4249-a756-0636b1680e66-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.361585 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f928618b-f291-4249-a756-0636b1680e66-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.362060 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f928618b-f291-4249-a756-0636b1680e66-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.363510 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f928618b-f291-4249-a756-0636b1680e66-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 
08:33:02.364018 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f928618b-f291-4249-a756-0636b1680e66-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.366262 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f928618b-f291-4249-a756-0636b1680e66-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.372503 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjwvj\" (UniqueName: \"kubernetes.io/projected/f928618b-f291-4249-a756-0636b1680e66-kube-api-access-jjwvj\") pod \"cinder-volume-volume1-0\" (UID: \"f928618b-f291-4249-a756-0636b1680e66\") " pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.455912 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.456067 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.456287 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.456447 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.456525 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-etc-nvme\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.456564 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b53eb06-1432-4059-9705-ffc917af76f7-config-data\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.456674 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b53eb06-1432-4059-9705-ffc917af76f7-config-data-custom\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " 
pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.456703 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.456761 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b53eb06-1432-4059-9705-ffc917af76f7-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.456791 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.456880 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-etc-nvme\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.456940 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2b53eb06-1432-4059-9705-ffc917af76f7-ceph\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.457056 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t4wl\" (UniqueName: \"kubernetes.io/projected/2b53eb06-1432-4059-9705-ffc917af76f7-kube-api-access-8t4wl\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.457140 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b53eb06-1432-4059-9705-ffc917af76f7-scripts\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.457228 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-run\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.457300 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.457402 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-sys\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " 
pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.457479 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-lib-modules\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.457614 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.457696 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-run\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.457677 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-sys\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.457622 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.458116 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-lib-modules\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.458129 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.458325 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-dev\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.458417 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2b53eb06-1432-4059-9705-ffc917af76f7-dev\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.461157 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b53eb06-1432-4059-9705-ffc917af76f7-scripts\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.462243 5016 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b53eb06-1432-4059-9705-ffc917af76f7-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.462855 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b53eb06-1432-4059-9705-ffc917af76f7-config-data\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.463904 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b53eb06-1432-4059-9705-ffc917af76f7-config-data-custom\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.465279 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2b53eb06-1432-4059-9705-ffc917af76f7-ceph\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.482145 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t4wl\" (UniqueName: \"kubernetes.io/projected/2b53eb06-1432-4059-9705-ffc917af76f7-kube-api-access-8t4wl\") pod \"cinder-backup-0\" (UID: \"2b53eb06-1432-4059-9705-ffc917af76f7\") " pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.556228 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.565113 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.856712 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-r2942"] Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.859062 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-r2942" Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.872234 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-r2942"] Oct 11 08:33:02 crc kubenswrapper[5016]: I1011 08:33:02.967838 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k76f\" (UniqueName: \"kubernetes.io/projected/aed6fb59-8a64-4859-9abe-acd0743490c6-kube-api-access-7k76f\") pod \"manila-db-create-r2942\" (UID: \"aed6fb59-8a64-4859-9abe-acd0743490c6\") " pod="openstack/manila-db-create-r2942" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.061947 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.064358 5016 util.go:30] "No sandbox for pod can be found. 
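Need to start a new one" pod="openstack/glance-default-external-api-0"

The mount sequence above follows the kubelet's two-phase pattern: reconciler_common.go first logs operationExecutor.VerifyControllerAttachedVolume for every volume in the pod spec, then operationExecutor.MountVolume is started, and operation_generator.go reports "MountVolume.SetUp succeeded" once the volume is ready for the container (for secret and projected volumes, after their payloads are written out under the pod directory). In the cinder-backup-0 entries the host-path mounts (run, sys, dev, lib-modules) succeed almost immediately, while the secret and projected mounts (scripts, combined-ca-bundle, ceph, kube-api-access-8t4wl) land a few milliseconds later, consistent with that extra write step. A minimal sketch of the desired-state/actual-state loop, with illustrative types rather than the kubelet's real ones:

package main

import "fmt"

// volume is a stand-in for an entry in the reconciler's desired state of world;
// the names and plugin strings below are copied from the cinder-backup-0 entries.
type volume struct {
	name    string
	plugin  string
	mounted bool // actual state: has SetUp completed?
}

// reconcile walks desired state and mounts anything not yet in actual state,
// mirroring the started/succeeded pairs in the log.
func reconcile(desired []volume) {
	for i := range desired {
		v := &desired[i]
		if v.mounted {
			continue // actual state already matches desired state
		}
		fmt.Printf("operationExecutor.MountVolume started for volume %q (%s)\n", v.name, v.plugin)
		// ... a real SetUp would bind-mount a host path or project secret data here ...
		v.mounted = true
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
	}
}

func main() {
	reconcile([]volume{
		{name: "etc-nvme", plugin: "kubernetes.io/host-path"},
		{name: "scripts", plugin: "kubernetes.io/secret"},
		{name: "ceph", plugin: "kubernetes.io/projected"},
	})
}

Running it prints the same started/succeeded pairing seen above, once per volume.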
Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.067368 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.067391 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.067559 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.067637 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-l2mcw" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.070183 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k76f\" (UniqueName: \"kubernetes.io/projected/aed6fb59-8a64-4859-9abe-acd0743490c6-kube-api-access-7k76f\") pod \"manila-db-create-r2942\" (UID: \"aed6fb59-8a64-4859-9abe-acd0743490c6\") " pod="openstack/manila-db-create-r2942" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.083606 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.089897 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k76f\" (UniqueName: \"kubernetes.io/projected/aed6fb59-8a64-4859-9abe-acd0743490c6-kube-api-access-7k76f\") pod \"manila-db-create-r2942\" (UID: \"aed6fb59-8a64-4859-9abe-acd0743490c6\") " pod="openstack/manila-db-create-r2942" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.172541 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff59b536-c3bd-477d-acdf-a3fdfccff379-config-data\") pod \"glance-default-external-api-0\" (UID: \"ff59b536-c3bd-477d-acdf-a3fdfccff379\") " pod="openstack/glance-default-external-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.172611 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxcrf\" (UniqueName: \"kubernetes.io/projected/ff59b536-c3bd-477d-acdf-a3fdfccff379-kube-api-access-kxcrf\") pod \"glance-default-external-api-0\" (UID: \"ff59b536-c3bd-477d-acdf-a3fdfccff379\") " pod="openstack/glance-default-external-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.172688 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff59b536-c3bd-477d-acdf-a3fdfccff379-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ff59b536-c3bd-477d-acdf-a3fdfccff379\") " pod="openstack/glance-default-external-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.172727 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ff59b536-c3bd-477d-acdf-a3fdfccff379-ceph\") pod \"glance-default-external-api-0\" (UID: \"ff59b536-c3bd-477d-acdf-a3fdfccff379\") " pod="openstack/glance-default-external-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.172769 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName:
\"kubernetes.io/secret/ff59b536-c3bd-477d-acdf-a3fdfccff379-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ff59b536-c3bd-477d-acdf-a3fdfccff379\") " pod="openstack/glance-default-external-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.172811 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff59b536-c3bd-477d-acdf-a3fdfccff379-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ff59b536-c3bd-477d-acdf-a3fdfccff379\") " pod="openstack/glance-default-external-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.172860 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff59b536-c3bd-477d-acdf-a3fdfccff379-logs\") pod \"glance-default-external-api-0\" (UID: \"ff59b536-c3bd-477d-acdf-a3fdfccff379\") " pod="openstack/glance-default-external-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.173020 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"ff59b536-c3bd-477d-acdf-a3fdfccff379\") " pod="openstack/glance-default-external-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.173089 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff59b536-c3bd-477d-acdf-a3fdfccff379-scripts\") pod \"glance-default-external-api-0\" (UID: \"ff59b536-c3bd-477d-acdf-a3fdfccff379\") " pod="openstack/glance-default-external-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.190755 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-r2942" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.233681 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.236817 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.241121 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.242674 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.248226 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.276734 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff59b536-c3bd-477d-acdf-a3fdfccff379-config-data\") pod \"glance-default-external-api-0\" (UID: \"ff59b536-c3bd-477d-acdf-a3fdfccff379\") " pod="openstack/glance-default-external-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.276804 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxcrf\" (UniqueName: \"kubernetes.io/projected/ff59b536-c3bd-477d-acdf-a3fdfccff379-kube-api-access-kxcrf\") pod \"glance-default-external-api-0\" (UID: \"ff59b536-c3bd-477d-acdf-a3fdfccff379\") " pod="openstack/glance-default-external-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.276834 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff59b536-c3bd-477d-acdf-a3fdfccff379-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ff59b536-c3bd-477d-acdf-a3fdfccff379\") " pod="openstack/glance-default-external-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.276855 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ff59b536-c3bd-477d-acdf-a3fdfccff379-ceph\") pod \"glance-default-external-api-0\" (UID: \"ff59b536-c3bd-477d-acdf-a3fdfccff379\") " pod="openstack/glance-default-external-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.276884 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff59b536-c3bd-477d-acdf-a3fdfccff379-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ff59b536-c3bd-477d-acdf-a3fdfccff379\") " pod="openstack/glance-default-external-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.276996 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff59b536-c3bd-477d-acdf-a3fdfccff379-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ff59b536-c3bd-477d-acdf-a3fdfccff379\") " pod="openstack/glance-default-external-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.278356 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff59b536-c3bd-477d-acdf-a3fdfccff379-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ff59b536-c3bd-477d-acdf-a3fdfccff379\") " pod="openstack/glance-default-external-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.279786 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff59b536-c3bd-477d-acdf-a3fdfccff379-logs\") pod 
\"glance-default-external-api-0\" (UID: \"ff59b536-c3bd-477d-acdf-a3fdfccff379\") " pod="openstack/glance-default-external-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.280018 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"ff59b536-c3bd-477d-acdf-a3fdfccff379\") " pod="openstack/glance-default-external-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.280139 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff59b536-c3bd-477d-acdf-a3fdfccff379-scripts\") pod \"glance-default-external-api-0\" (UID: \"ff59b536-c3bd-477d-acdf-a3fdfccff379\") " pod="openstack/glance-default-external-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.280346 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff59b536-c3bd-477d-acdf-a3fdfccff379-logs\") pod \"glance-default-external-api-0\" (UID: \"ff59b536-c3bd-477d-acdf-a3fdfccff379\") " pod="openstack/glance-default-external-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.282824 5016 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"ff59b536-c3bd-477d-acdf-a3fdfccff379\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.282858 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff59b536-c3bd-477d-acdf-a3fdfccff379-config-data\") pod \"glance-default-external-api-0\" (UID: \"ff59b536-c3bd-477d-acdf-a3fdfccff379\") " pod="openstack/glance-default-external-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.284347 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ff59b536-c3bd-477d-acdf-a3fdfccff379-ceph\") pod \"glance-default-external-api-0\" (UID: \"ff59b536-c3bd-477d-acdf-a3fdfccff379\") " pod="openstack/glance-default-external-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.287596 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff59b536-c3bd-477d-acdf-a3fdfccff379-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ff59b536-c3bd-477d-acdf-a3fdfccff379\") " pod="openstack/glance-default-external-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.287996 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff59b536-c3bd-477d-acdf-a3fdfccff379-scripts\") pod \"glance-default-external-api-0\" (UID: \"ff59b536-c3bd-477d-acdf-a3fdfccff379\") " pod="openstack/glance-default-external-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.301691 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff59b536-c3bd-477d-acdf-a3fdfccff379-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ff59b536-c3bd-477d-acdf-a3fdfccff379\") " pod="openstack/glance-default-external-api-0" Oct 11 08:33:03 crc 
kubenswrapper[5016]: I1011 08:33:03.307287 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxcrf\" (UniqueName: \"kubernetes.io/projected/ff59b536-c3bd-477d-acdf-a3fdfccff379-kube-api-access-kxcrf\") pod \"glance-default-external-api-0\" (UID: \"ff59b536-c3bd-477d-acdf-a3fdfccff379\") " pod="openstack/glance-default-external-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.314756 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.336870 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"ff59b536-c3bd-477d-acdf-a3fdfccff379\") " pod="openstack/glance-default-external-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.377269 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.382619 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72403325-dc1b-43ab-9d1e-8c255ca43e5f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"72403325-dc1b-43ab-9d1e-8c255ca43e5f\") " pod="openstack/glance-default-internal-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.382678 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/72403325-dc1b-43ab-9d1e-8c255ca43e5f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"72403325-dc1b-43ab-9d1e-8c255ca43e5f\") " pod="openstack/glance-default-internal-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.382740 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72403325-dc1b-43ab-9d1e-8c255ca43e5f-logs\") pod \"glance-default-internal-api-0\" (UID: \"72403325-dc1b-43ab-9d1e-8c255ca43e5f\") " pod="openstack/glance-default-internal-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.382820 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72403325-dc1b-43ab-9d1e-8c255ca43e5f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"72403325-dc1b-43ab-9d1e-8c255ca43e5f\") " pod="openstack/glance-default-internal-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.382846 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/72403325-dc1b-43ab-9d1e-8c255ca43e5f-ceph\") pod \"glance-default-internal-api-0\" (UID: \"72403325-dc1b-43ab-9d1e-8c255ca43e5f\") " pod="openstack/glance-default-internal-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.382884 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72403325-dc1b-43ab-9d1e-8c255ca43e5f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"72403325-dc1b-43ab-9d1e-8c255ca43e5f\") " pod="openstack/glance-default-internal-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.382905 5016 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"72403325-dc1b-43ab-9d1e-8c255ca43e5f\") " pod="openstack/glance-default-internal-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.382937 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72403325-dc1b-43ab-9d1e-8c255ca43e5f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"72403325-dc1b-43ab-9d1e-8c255ca43e5f\") " pod="openstack/glance-default-internal-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.382958 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h6l2\" (UniqueName: \"kubernetes.io/projected/72403325-dc1b-43ab-9d1e-8c255ca43e5f-kube-api-access-7h6l2\") pod \"glance-default-internal-api-0\" (UID: \"72403325-dc1b-43ab-9d1e-8c255ca43e5f\") " pod="openstack/glance-default-internal-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.390147 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.396718 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"f928618b-f291-4249-a756-0636b1680e66","Type":"ContainerStarted","Data":"79b8a492edb00143b969b731b28facd85cbcd619679a58e2cdeb5de16256b2f5"} Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.485364 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72403325-dc1b-43ab-9d1e-8c255ca43e5f-logs\") pod \"glance-default-internal-api-0\" (UID: \"72403325-dc1b-43ab-9d1e-8c255ca43e5f\") " pod="openstack/glance-default-internal-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.485888 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72403325-dc1b-43ab-9d1e-8c255ca43e5f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"72403325-dc1b-43ab-9d1e-8c255ca43e5f\") " pod="openstack/glance-default-internal-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.485919 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/72403325-dc1b-43ab-9d1e-8c255ca43e5f-ceph\") pod \"glance-default-internal-api-0\" (UID: \"72403325-dc1b-43ab-9d1e-8c255ca43e5f\") " pod="openstack/glance-default-internal-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.485953 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72403325-dc1b-43ab-9d1e-8c255ca43e5f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"72403325-dc1b-43ab-9d1e-8c255ca43e5f\") " pod="openstack/glance-default-internal-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.485982 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"72403325-dc1b-43ab-9d1e-8c255ca43e5f\") " pod="openstack/glance-default-internal-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: 
I1011 08:33:03.486013 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72403325-dc1b-43ab-9d1e-8c255ca43e5f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"72403325-dc1b-43ab-9d1e-8c255ca43e5f\") " pod="openstack/glance-default-internal-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.486041 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h6l2\" (UniqueName: \"kubernetes.io/projected/72403325-dc1b-43ab-9d1e-8c255ca43e5f-kube-api-access-7h6l2\") pod \"glance-default-internal-api-0\" (UID: \"72403325-dc1b-43ab-9d1e-8c255ca43e5f\") " pod="openstack/glance-default-internal-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.486068 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72403325-dc1b-43ab-9d1e-8c255ca43e5f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"72403325-dc1b-43ab-9d1e-8c255ca43e5f\") " pod="openstack/glance-default-internal-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.486086 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/72403325-dc1b-43ab-9d1e-8c255ca43e5f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"72403325-dc1b-43ab-9d1e-8c255ca43e5f\") " pod="openstack/glance-default-internal-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.486733 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/72403325-dc1b-43ab-9d1e-8c255ca43e5f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"72403325-dc1b-43ab-9d1e-8c255ca43e5f\") " pod="openstack/glance-default-internal-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.487911 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72403325-dc1b-43ab-9d1e-8c255ca43e5f-logs\") pod \"glance-default-internal-api-0\" (UID: \"72403325-dc1b-43ab-9d1e-8c255ca43e5f\") " pod="openstack/glance-default-internal-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.494185 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72403325-dc1b-43ab-9d1e-8c255ca43e5f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"72403325-dc1b-43ab-9d1e-8c255ca43e5f\") " pod="openstack/glance-default-internal-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.502867 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/72403325-dc1b-43ab-9d1e-8c255ca43e5f-ceph\") pod \"glance-default-internal-api-0\" (UID: \"72403325-dc1b-43ab-9d1e-8c255ca43e5f\") " pod="openstack/glance-default-internal-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.503297 5016 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"72403325-dc1b-43ab-9d1e-8c255ca43e5f\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.509341 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/72403325-dc1b-43ab-9d1e-8c255ca43e5f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"72403325-dc1b-43ab-9d1e-8c255ca43e5f\") " pod="openstack/glance-default-internal-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.510360 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72403325-dc1b-43ab-9d1e-8c255ca43e5f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"72403325-dc1b-43ab-9d1e-8c255ca43e5f\") " pod="openstack/glance-default-internal-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.521837 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h6l2\" (UniqueName: \"kubernetes.io/projected/72403325-dc1b-43ab-9d1e-8c255ca43e5f-kube-api-access-7h6l2\") pod \"glance-default-internal-api-0\" (UID: \"72403325-dc1b-43ab-9d1e-8c255ca43e5f\") " pod="openstack/glance-default-internal-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.523431 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72403325-dc1b-43ab-9d1e-8c255ca43e5f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"72403325-dc1b-43ab-9d1e-8c255ca43e5f\") " pod="openstack/glance-default-internal-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.546470 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"72403325-dc1b-43ab-9d1e-8c255ca43e5f\") " pod="openstack/glance-default-internal-api-0" Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.772406 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.782062 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-r2942"] Oct 11 08:33:03 crc kubenswrapper[5016]: W1011 08:33:03.788350 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff59b536_c3bd_477d_acdf_a3fdfccff379.slice/crio-709bf4c0428bf3bbf09449a879e38d320eadd894ef6c105a58c3cec2551963d6 WatchSource:0}: Error finding container 709bf4c0428bf3bbf09449a879e38d320eadd894ef6c105a58c3cec2551963d6: Status 404 returned error can't find the container with id 709bf4c0428bf3bbf09449a879e38d320eadd894ef6c105a58c3cec2551963d6 Oct 11 08:33:03 crc kubenswrapper[5016]: W1011 08:33:03.790625 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaed6fb59_8a64_4859_9abe_acd0743490c6.slice/crio-9d3a693e4108ca1e9b01e223c3378dfa90d8de2ef822ab239716998c012e2dca WatchSource:0}: Error finding container 9d3a693e4108ca1e9b01e223c3378dfa90d8de2ef822ab239716998c012e2dca: Status 404 returned error can't find the container with id 9d3a693e4108ca1e9b01e223c3378dfa90d8de2ef822ab239716998c012e2dca Oct 11 08:33:03 crc kubenswrapper[5016]: I1011 08:33:03.861236 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 11 08:33:04 crc kubenswrapper[5016]: I1011 08:33:04.419283 5016 generic.go:334] "Generic (PLEG): container finished" podID="aed6fb59-8a64-4859-9abe-acd0743490c6" containerID="6bc1737711c9a83d75d3474926110f4170639840db91eb7179245e4d6945e50d" exitCode=0 Oct 11 08:33:04 crc kubenswrapper[5016]: I1011 08:33:04.419873 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-r2942" event={"ID":"aed6fb59-8a64-4859-9abe-acd0743490c6","Type":"ContainerDied","Data":"6bc1737711c9a83d75d3474926110f4170639840db91eb7179245e4d6945e50d"} Oct 11 08:33:04 crc kubenswrapper[5016]: I1011 08:33:04.421600 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-r2942" event={"ID":"aed6fb59-8a64-4859-9abe-acd0743490c6","Type":"ContainerStarted","Data":"9d3a693e4108ca1e9b01e223c3378dfa90d8de2ef822ab239716998c012e2dca"} Oct 11 08:33:04 crc kubenswrapper[5016]: I1011 08:33:04.423892 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"2b53eb06-1432-4059-9705-ffc917af76f7","Type":"ContainerStarted","Data":"df12c10f1311cfaea3a6c8a87ba9c5b3ff935845bccc72dde8e00dac05af4b43"} Oct 11 08:33:04 crc kubenswrapper[5016]: I1011 08:33:04.434463 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff59b536-c3bd-477d-acdf-a3fdfccff379","Type":"ContainerStarted","Data":"709bf4c0428bf3bbf09449a879e38d320eadd894ef6c105a58c3cec2551963d6"} Oct 11 08:33:04 crc kubenswrapper[5016]: I1011 08:33:04.475752 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 11 08:33:05 crc kubenswrapper[5016]: I1011 08:33:05.464230 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"f928618b-f291-4249-a756-0636b1680e66","Type":"ContainerStarted","Data":"188af45c7ffdcb0569cc257f2758f65c447b8f211f4c6d6cf418c1ed8ed5de75"} Oct 11 08:33:05 crc kubenswrapper[5016]: I1011 08:33:05.466916 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"f928618b-f291-4249-a756-0636b1680e66","Type":"ContainerStarted","Data":"3258572cd236398aefaffaa860219bc466c2f0c8d40852815ec22774bc883d53"} Oct 11 08:33:05 crc kubenswrapper[5016]: I1011 08:33:05.472339 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"2b53eb06-1432-4059-9705-ffc917af76f7","Type":"ContainerStarted","Data":"6a256269dc49b181707fb1c435774b0ee1ddef6e022fb1e880e864acbe95a1f9"} Oct 11 08:33:05 crc kubenswrapper[5016]: I1011 08:33:05.472362 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"2b53eb06-1432-4059-9705-ffc917af76f7","Type":"ContainerStarted","Data":"c4659800ec4031246103ddac7b65f784b0f8588c3ff837fef79d267236f56ef7"} Oct 11 08:33:05 crc kubenswrapper[5016]: I1011 08:33:05.477133 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff59b536-c3bd-477d-acdf-a3fdfccff379","Type":"ContainerStarted","Data":"9b08044ebedae6b606d6ab539b481c282c3ba8722807dc37c46f22bed1251c6d"} Oct 11 08:33:05 crc kubenswrapper[5016]: I1011 08:33:05.483910 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"72403325-dc1b-43ab-9d1e-8c255ca43e5f","Type":"ContainerStarted","Data":"dcb306ecb8dad1af07c5a3df56e60e6ce882c4883fe8b3bb555bf4b7cd80f89d"} Oct 11 08:33:05 crc kubenswrapper[5016]: I1011 08:33:05.483942 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"72403325-dc1b-43ab-9d1e-8c255ca43e5f","Type":"ContainerStarted","Data":"1f71c9f453045963f6fc95c8dc7f506400215712fb0398037e86806e4115fd95"} Oct 11 08:33:05 crc kubenswrapper[5016]: I1011 08:33:05.500524 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.443597279 podStartE2EDuration="3.500501537s" podCreationTimestamp="2025-10-11 08:33:02 +0000 UTC" firstStartedPulling="2025-10-11 08:33:03.326190069 +0000 UTC m=+3171.226646015" lastFinishedPulling="2025-10-11 08:33:04.383094327 +0000 UTC m=+3172.283550273" observedRunningTime="2025-10-11 08:33:05.495383762 +0000 UTC m=+3173.395839718" watchObservedRunningTime="2025-10-11 08:33:05.500501537 +0000 UTC m=+3173.400957483" Oct 11 08:33:05 crc kubenswrapper[5016]: I1011 08:33:05.528683 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.536681004 podStartE2EDuration="3.528661303s" podCreationTimestamp="2025-10-11 08:33:02 +0000 UTC" firstStartedPulling="2025-10-11 08:33:03.3942158 +0000 UTC m=+3171.294671746" lastFinishedPulling="2025-10-11 08:33:04.386196099 +0000 UTC m=+3172.286652045" observedRunningTime="2025-10-11 08:33:05.521098113 +0000 UTC m=+3173.421554069" watchObservedRunningTime="2025-10-11 08:33:05.528661303 +0000 UTC m=+3173.429117249" Oct 11 08:33:05 crc kubenswrapper[5016]: I1011 08:33:05.547502 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.547481672 podStartE2EDuration="3.547481672s" podCreationTimestamp="2025-10-11 08:33:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 08:33:05.542996283 +0000 UTC m=+3173.443452229" watchObservedRunningTime="2025-10-11 08:33:05.547481672 +0000 UTC m=+3173.447937618" Oct 11 08:33:05 crc kubenswrapper[5016]: I1011 08:33:05.924583 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-r2942" Oct 11 08:33:06 crc kubenswrapper[5016]: I1011 08:33:06.058741 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k76f\" (UniqueName: \"kubernetes.io/projected/aed6fb59-8a64-4859-9abe-acd0743490c6-kube-api-access-7k76f\") pod \"aed6fb59-8a64-4859-9abe-acd0743490c6\" (UID: \"aed6fb59-8a64-4859-9abe-acd0743490c6\") " Oct 11 08:33:06 crc kubenswrapper[5016]: I1011 08:33:06.069926 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aed6fb59-8a64-4859-9abe-acd0743490c6-kube-api-access-7k76f" (OuterVolumeSpecName: "kube-api-access-7k76f") pod "aed6fb59-8a64-4859-9abe-acd0743490c6" (UID: "aed6fb59-8a64-4859-9abe-acd0743490c6"). InnerVolumeSpecName "kube-api-access-7k76f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:33:06 crc kubenswrapper[5016]: I1011 08:33:06.162077 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k76f\" (UniqueName: \"kubernetes.io/projected/aed6fb59-8a64-4859-9abe-acd0743490c6-kube-api-access-7k76f\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:06 crc kubenswrapper[5016]: I1011 08:33:06.498899 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-r2942" event={"ID":"aed6fb59-8a64-4859-9abe-acd0743490c6","Type":"ContainerDied","Data":"9d3a693e4108ca1e9b01e223c3378dfa90d8de2ef822ab239716998c012e2dca"} Oct 11 08:33:06 crc kubenswrapper[5016]: I1011 08:33:06.498956 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d3a693e4108ca1e9b01e223c3378dfa90d8de2ef822ab239716998c012e2dca" Oct 11 08:33:06 crc kubenswrapper[5016]: I1011 08:33:06.498971 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-r2942" Oct 11 08:33:06 crc kubenswrapper[5016]: I1011 08:33:06.505042 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ff59b536-c3bd-477d-acdf-a3fdfccff379","Type":"ContainerStarted","Data":"0866112b5b4c6f72bcf2f27dc20f0ab876d0065574782923d5ff0388f3219676"} Oct 11 08:33:06 crc kubenswrapper[5016]: I1011 08:33:06.509539 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"72403325-dc1b-43ab-9d1e-8c255ca43e5f","Type":"ContainerStarted","Data":"8c3af86f0ba84e4996bc5daacafebf80e963b80bba444e7eae71e066288d6516"} Oct 11 08:33:06 crc kubenswrapper[5016]: I1011 08:33:06.549776 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.549745483 podStartE2EDuration="4.549745483s" podCreationTimestamp="2025-10-11 08:33:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 08:33:06.533450081 +0000 UTC m=+3174.433906037" watchObservedRunningTime="2025-10-11 08:33:06.549745483 +0000 UTC m=+3174.450201439" Oct 11 08:33:07 crc kubenswrapper[5016]: I1011 08:33:07.557052 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:07 crc kubenswrapper[5016]: I1011 08:33:07.566188 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Oct 11 08:33:12 crc kubenswrapper[5016]: I1011 08:33:12.813767 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Oct 11 08:33:12 crc kubenswrapper[5016]: I1011 08:33:12.859119 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Oct 11 08:33:12 crc kubenswrapper[5016]: I1011 08:33:12.947732 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-8c7a-account-create-2r7zt"] Oct 11 08:33:12 crc kubenswrapper[5016]: E1011 08:33:12.948414 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aed6fb59-8a64-4859-9abe-acd0743490c6" containerName="mariadb-database-create" Oct 11 08:33:12 crc kubenswrapper[5016]: I1011 08:33:12.948484 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="aed6fb59-8a64-4859-9abe-acd0743490c6" containerName="mariadb-database-create" Oct 11 08:33:12 crc 
kubenswrapper[5016]: I1011 08:33:12.948769 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="aed6fb59-8a64-4859-9abe-acd0743490c6" containerName="mariadb-database-create" Oct 11 08:33:12 crc kubenswrapper[5016]: I1011 08:33:12.954260 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-8c7a-account-create-2r7zt" Oct 11 08:33:12 crc kubenswrapper[5016]: I1011 08:33:12.961102 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Oct 11 08:33:12 crc kubenswrapper[5016]: I1011 08:33:12.970838 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-8c7a-account-create-2r7zt"] Oct 11 08:33:13 crc kubenswrapper[5016]: I1011 08:33:13.073674 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5r77\" (UniqueName: \"kubernetes.io/projected/84f8322f-0668-4ed4-bacd-3b7b236fa51d-kube-api-access-h5r77\") pod \"manila-8c7a-account-create-2r7zt\" (UID: \"84f8322f-0668-4ed4-bacd-3b7b236fa51d\") " pod="openstack/manila-8c7a-account-create-2r7zt" Oct 11 08:33:13 crc kubenswrapper[5016]: I1011 08:33:13.176143 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5r77\" (UniqueName: \"kubernetes.io/projected/84f8322f-0668-4ed4-bacd-3b7b236fa51d-kube-api-access-h5r77\") pod \"manila-8c7a-account-create-2r7zt\" (UID: \"84f8322f-0668-4ed4-bacd-3b7b236fa51d\") " pod="openstack/manila-8c7a-account-create-2r7zt" Oct 11 08:33:13 crc kubenswrapper[5016]: I1011 08:33:13.207471 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5r77\" (UniqueName: \"kubernetes.io/projected/84f8322f-0668-4ed4-bacd-3b7b236fa51d-kube-api-access-h5r77\") pod \"manila-8c7a-account-create-2r7zt\" (UID: \"84f8322f-0668-4ed4-bacd-3b7b236fa51d\") " pod="openstack/manila-8c7a-account-create-2r7zt" Oct 11 08:33:13 crc kubenswrapper[5016]: I1011 08:33:13.282559 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-8c7a-account-create-2r7zt" Oct 11 08:33:13 crc kubenswrapper[5016]: I1011 08:33:13.391312 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 11 08:33:13 crc kubenswrapper[5016]: I1011 08:33:13.391489 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 11 08:33:13 crc kubenswrapper[5016]: I1011 08:33:13.460046 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 11 08:33:13 crc kubenswrapper[5016]: I1011 08:33:13.473786 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 11 08:33:13 crc kubenswrapper[5016]: I1011 08:33:13.597301 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 11 08:33:13 crc kubenswrapper[5016]: I1011 08:33:13.597367 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 11 08:33:13 crc kubenswrapper[5016]: I1011 08:33:13.632075 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-8c7a-account-create-2r7zt"] Oct 11 08:33:13 crc kubenswrapper[5016]: W1011 08:33:13.644025 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84f8322f_0668_4ed4_bacd_3b7b236fa51d.slice/crio-9af3f5bed2f3034f30f5c99ca6653cf0a2ae8d63e861a90bedc484c3b22c725c WatchSource:0}: Error finding container 9af3f5bed2f3034f30f5c99ca6653cf0a2ae8d63e861a90bedc484c3b22c725c: Status 404 returned error can't find the container with id 9af3f5bed2f3034f30f5c99ca6653cf0a2ae8d63e861a90bedc484c3b22c725c Oct 11 08:33:13 crc kubenswrapper[5016]: I1011 08:33:13.650575 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Oct 11 08:33:13 crc kubenswrapper[5016]: I1011 08:33:13.862927 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 11 08:33:13 crc kubenswrapper[5016]: I1011 08:33:13.863420 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 11 08:33:13 crc kubenswrapper[5016]: I1011 08:33:13.926198 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 11 08:33:13 crc kubenswrapper[5016]: I1011 08:33:13.952758 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 11 08:33:14 crc kubenswrapper[5016]: I1011 08:33:14.605512 5016 generic.go:334] "Generic (PLEG): container finished" podID="84f8322f-0668-4ed4-bacd-3b7b236fa51d" containerID="655a35244433d6caae1d0cdfcf892ddae188023f6a35b1ad4f0620bb971c89c8" exitCode=0 Oct 11 08:33:14 crc kubenswrapper[5016]: I1011 08:33:14.608453 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-8c7a-account-create-2r7zt" event={"ID":"84f8322f-0668-4ed4-bacd-3b7b236fa51d","Type":"ContainerDied","Data":"655a35244433d6caae1d0cdfcf892ddae188023f6a35b1ad4f0620bb971c89c8"} Oct 11 08:33:14 crc kubenswrapper[5016]: I1011 08:33:14.608487 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-8c7a-account-create-2r7zt" 
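event={"ID":"84f8322f-0668-4ed4-bacd-3b7b236fa51d","Type":"ContainerStarted","Data":"9af3f5bed2f3034f30f5c99ca6653cf0a2ae8d63e861a90bedc484c3b22c725c"}

The probe lines for the two glance pods above trace the expected gating order: the startup probe reports "unhealthy" until its first success flips it to "started", and only after that do readiness probes run, moving from an empty status to "ready" a few seconds later (each transition appears twice, plausibly because each pod runs two probed containers). A toy model of that gating, assuming one log line per observed probe result (field values mimic the log; this is not the kubelet prober API):

package main

import "fmt"

// podProbes tracks the two booleans the gating depends on: has the startup
// probe ever succeeded, and is the pod currently ready.
type podProbes struct{ started, ready bool }

// observe returns a log-style line for one probe result. Readiness results
// only start being reported once the startup probe has succeeded.
func (p *podProbes) observe(success bool) string {
	switch {
	case !p.started && !success:
		return `probe="startup" status="unhealthy"`
	case !p.started:
		p.started = true
		return `probe="startup" status="started"`
	case success:
		p.ready = true
		return `probe="readiness" status="ready"`
	default:
		p.ready = false
		return `probe="readiness" status=""`
	}
}

func main() {
	p := &podProbes{}
	// Mirrors the glance sequence above: unhealthy, started, "", ready.
	for _, ok := range []bool{false, true, false, true} {
		fmt.Println(p.observe(ok))
	}
}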
event={"ID":"84f8322f-0668-4ed4-bacd-3b7b236fa51d","Type":"ContainerStarted","Data":"9af3f5bed2f3034f30f5c99ca6653cf0a2ae8d63e861a90bedc484c3b22c725c"} Oct 11 08:33:14 crc kubenswrapper[5016]: I1011 08:33:14.608504 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 11 08:33:14 crc kubenswrapper[5016]: I1011 08:33:14.609272 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 11 08:33:16 crc kubenswrapper[5016]: I1011 08:33:16.028837 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-8c7a-account-create-2r7zt" Oct 11 08:33:16 crc kubenswrapper[5016]: I1011 08:33:16.043026 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5r77\" (UniqueName: \"kubernetes.io/projected/84f8322f-0668-4ed4-bacd-3b7b236fa51d-kube-api-access-h5r77\") pod \"84f8322f-0668-4ed4-bacd-3b7b236fa51d\" (UID: \"84f8322f-0668-4ed4-bacd-3b7b236fa51d\") " Oct 11 08:33:16 crc kubenswrapper[5016]: I1011 08:33:16.054020 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84f8322f-0668-4ed4-bacd-3b7b236fa51d-kube-api-access-h5r77" (OuterVolumeSpecName: "kube-api-access-h5r77") pod "84f8322f-0668-4ed4-bacd-3b7b236fa51d" (UID: "84f8322f-0668-4ed4-bacd-3b7b236fa51d"). InnerVolumeSpecName "kube-api-access-h5r77". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:33:16 crc kubenswrapper[5016]: I1011 08:33:16.145387 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5r77\" (UniqueName: \"kubernetes.io/projected/84f8322f-0668-4ed4-bacd-3b7b236fa51d-kube-api-access-h5r77\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:16 crc kubenswrapper[5016]: I1011 08:33:16.145467 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 11 08:33:16 crc kubenswrapper[5016]: I1011 08:33:16.145540 5016 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 11 08:33:16 crc kubenswrapper[5016]: I1011 08:33:16.151710 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 11 08:33:16 crc kubenswrapper[5016]: I1011 08:33:16.626029 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-8c7a-account-create-2r7zt" Oct 11 08:33:16 crc kubenswrapper[5016]: I1011 08:33:16.626076 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-8c7a-account-create-2r7zt" event={"ID":"84f8322f-0668-4ed4-bacd-3b7b236fa51d","Type":"ContainerDied","Data":"9af3f5bed2f3034f30f5c99ca6653cf0a2ae8d63e861a90bedc484c3b22c725c"} Oct 11 08:33:16 crc kubenswrapper[5016]: I1011 08:33:16.626103 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9af3f5bed2f3034f30f5c99ca6653cf0a2ae8d63e861a90bedc484c3b22c725c" Oct 11 08:33:16 crc kubenswrapper[5016]: I1011 08:33:16.626150 5016 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 11 08:33:16 crc kubenswrapper[5016]: I1011 08:33:16.626157 5016 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 11 08:33:16 crc kubenswrapper[5016]: I1011 08:33:16.776077 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 11 08:33:16 crc kubenswrapper[5016]: I1011 08:33:16.780352 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 11 08:33:18 crc kubenswrapper[5016]: I1011 08:33:18.346160 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-gbkrg"] Oct 11 08:33:18 crc kubenswrapper[5016]: E1011 08:33:18.347791 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84f8322f-0668-4ed4-bacd-3b7b236fa51d" containerName="mariadb-account-create" Oct 11 08:33:18 crc kubenswrapper[5016]: I1011 08:33:18.347821 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="84f8322f-0668-4ed4-bacd-3b7b236fa51d" containerName="mariadb-account-create" Oct 11 08:33:18 crc kubenswrapper[5016]: I1011 08:33:18.348224 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="84f8322f-0668-4ed4-bacd-3b7b236fa51d" containerName="mariadb-account-create" Oct 11 08:33:18 crc kubenswrapper[5016]: I1011 08:33:18.349365 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-gbkrg" Oct 11 08:33:18 crc kubenswrapper[5016]: I1011 08:33:18.355358 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Oct 11 08:33:18 crc kubenswrapper[5016]: I1011 08:33:18.356178 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-kgpxq" Oct 11 08:33:18 crc kubenswrapper[5016]: I1011 08:33:18.363824 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-gbkrg"] Oct 11 08:33:18 crc kubenswrapper[5016]: I1011 08:33:18.517505 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a-job-config-data\") pod \"manila-db-sync-gbkrg\" (UID: \"7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a\") " pod="openstack/manila-db-sync-gbkrg" Oct 11 08:33:18 crc kubenswrapper[5016]: I1011 08:33:18.517823 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a-config-data\") pod \"manila-db-sync-gbkrg\" (UID: \"7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a\") " pod="openstack/manila-db-sync-gbkrg" Oct 11 08:33:18 crc kubenswrapper[5016]: I1011 08:33:18.517976 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a-combined-ca-bundle\") pod \"manila-db-sync-gbkrg\" (UID: \"7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a\") " pod="openstack/manila-db-sync-gbkrg" Oct 11 08:33:18 crc kubenswrapper[5016]: I1011 08:33:18.518062 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7xd7\" (UniqueName: \"kubernetes.io/projected/7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a-kube-api-access-p7xd7\") pod \"manila-db-sync-gbkrg\" (UID: \"7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a\") " pod="openstack/manila-db-sync-gbkrg" Oct 11 08:33:18 crc kubenswrapper[5016]: I1011 08:33:18.623592 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a-combined-ca-bundle\") pod \"manila-db-sync-gbkrg\" (UID: \"7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a\") " pod="openstack/manila-db-sync-gbkrg" Oct 11 08:33:18 crc kubenswrapper[5016]: I1011 08:33:18.623674 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7xd7\" (UniqueName: \"kubernetes.io/projected/7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a-kube-api-access-p7xd7\") pod \"manila-db-sync-gbkrg\" (UID: \"7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a\") " pod="openstack/manila-db-sync-gbkrg" Oct 11 08:33:18 crc kubenswrapper[5016]: I1011 08:33:18.623758 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a-job-config-data\") pod \"manila-db-sync-gbkrg\" (UID: \"7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a\") " pod="openstack/manila-db-sync-gbkrg" Oct 11 08:33:18 crc kubenswrapper[5016]: I1011 08:33:18.623781 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a-config-data\") pod \"manila-db-sync-gbkrg\" (UID: 
\"7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a\") " pod="openstack/manila-db-sync-gbkrg" Oct 11 08:33:18 crc kubenswrapper[5016]: I1011 08:33:18.634711 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a-combined-ca-bundle\") pod \"manila-db-sync-gbkrg\" (UID: \"7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a\") " pod="openstack/manila-db-sync-gbkrg" Oct 11 08:33:18 crc kubenswrapper[5016]: I1011 08:33:18.647752 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7xd7\" (UniqueName: \"kubernetes.io/projected/7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a-kube-api-access-p7xd7\") pod \"manila-db-sync-gbkrg\" (UID: \"7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a\") " pod="openstack/manila-db-sync-gbkrg" Oct 11 08:33:18 crc kubenswrapper[5016]: I1011 08:33:18.653813 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a-config-data\") pod \"manila-db-sync-gbkrg\" (UID: \"7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a\") " pod="openstack/manila-db-sync-gbkrg" Oct 11 08:33:18 crc kubenswrapper[5016]: I1011 08:33:18.654593 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a-job-config-data\") pod \"manila-db-sync-gbkrg\" (UID: \"7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a\") " pod="openstack/manila-db-sync-gbkrg" Oct 11 08:33:18 crc kubenswrapper[5016]: I1011 08:33:18.697372 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-gbkrg" Oct 11 08:33:19 crc kubenswrapper[5016]: I1011 08:33:19.291154 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-gbkrg"] Oct 11 08:33:19 crc kubenswrapper[5016]: I1011 08:33:19.668614 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-gbkrg" event={"ID":"7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a","Type":"ContainerStarted","Data":"32c358e921cf7b7c036c5d6dfe6955e9a2b87f423bed3f3f815422e137085fca"} Oct 11 08:33:24 crc kubenswrapper[5016]: I1011 08:33:24.734056 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-gbkrg" event={"ID":"7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a","Type":"ContainerStarted","Data":"961393cdd1fa5ef442f33181139b0125a4b39705a5f9f109adf2341260a0336e"} Oct 11 08:33:24 crc kubenswrapper[5016]: I1011 08:33:24.787516 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-gbkrg" podStartSLOduration=2.169779798 podStartE2EDuration="6.78746618s" podCreationTimestamp="2025-10-11 08:33:18 +0000 UTC" firstStartedPulling="2025-10-11 08:33:19.303112088 +0000 UTC m=+3187.203568034" lastFinishedPulling="2025-10-11 08:33:23.92079847 +0000 UTC m=+3191.821254416" observedRunningTime="2025-10-11 08:33:24.775123383 +0000 UTC m=+3192.675579369" watchObservedRunningTime="2025-10-11 08:33:24.78746618 +0000 UTC m=+3192.687922186" Oct 11 08:33:34 crc kubenswrapper[5016]: I1011 08:33:34.861461 5016 generic.go:334] "Generic (PLEG): container finished" podID="7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a" containerID="961393cdd1fa5ef442f33181139b0125a4b39705a5f9f109adf2341260a0336e" exitCode=0 Oct 11 08:33:34 crc kubenswrapper[5016]: I1011 08:33:34.861753 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-gbkrg" 
event={"ID":"7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a","Type":"ContainerDied","Data":"961393cdd1fa5ef442f33181139b0125a4b39705a5f9f109adf2341260a0336e"} Oct 11 08:33:36 crc kubenswrapper[5016]: I1011 08:33:36.434069 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-gbkrg" Oct 11 08:33:36 crc kubenswrapper[5016]: I1011 08:33:36.629861 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a-job-config-data\") pod \"7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a\" (UID: \"7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a\") " Oct 11 08:33:36 crc kubenswrapper[5016]: I1011 08:33:36.630265 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a-config-data\") pod \"7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a\" (UID: \"7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a\") " Oct 11 08:33:36 crc kubenswrapper[5016]: I1011 08:33:36.630311 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a-combined-ca-bundle\") pod \"7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a\" (UID: \"7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a\") " Oct 11 08:33:36 crc kubenswrapper[5016]: I1011 08:33:36.630431 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7xd7\" (UniqueName: \"kubernetes.io/projected/7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a-kube-api-access-p7xd7\") pod \"7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a\" (UID: \"7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a\") " Oct 11 08:33:36 crc kubenswrapper[5016]: I1011 08:33:36.638386 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a" (UID: "7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:33:36 crc kubenswrapper[5016]: I1011 08:33:36.643816 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a-kube-api-access-p7xd7" (OuterVolumeSpecName: "kube-api-access-p7xd7") pod "7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a" (UID: "7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a"). InnerVolumeSpecName "kube-api-access-p7xd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:33:36 crc kubenswrapper[5016]: I1011 08:33:36.648124 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a-config-data" (OuterVolumeSpecName: "config-data") pod "7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a" (UID: "7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:33:36 crc kubenswrapper[5016]: I1011 08:33:36.693331 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a" (UID: "7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:33:36 crc kubenswrapper[5016]: I1011 08:33:36.733766 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:36 crc kubenswrapper[5016]: I1011 08:33:36.733827 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:36 crc kubenswrapper[5016]: I1011 08:33:36.733853 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7xd7\" (UniqueName: \"kubernetes.io/projected/7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a-kube-api-access-p7xd7\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:36 crc kubenswrapper[5016]: I1011 08:33:36.733872 5016 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a-job-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:36 crc kubenswrapper[5016]: I1011 08:33:36.893859 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-gbkrg" event={"ID":"7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a","Type":"ContainerDied","Data":"32c358e921cf7b7c036c5d6dfe6955e9a2b87f423bed3f3f815422e137085fca"} Oct 11 08:33:36 crc kubenswrapper[5016]: I1011 08:33:36.893954 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32c358e921cf7b7c036c5d6dfe6955e9a2b87f423bed3f3f815422e137085fca" Oct 11 08:33:36 crc kubenswrapper[5016]: I1011 08:33:36.893979 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-gbkrg" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.413568 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Oct 11 08:33:37 crc kubenswrapper[5016]: E1011 08:33:37.414085 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a" containerName="manila-db-sync" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.414107 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a" containerName="manila-db-sync" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.414308 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cb3ff3c-fa0a-4fc0-95ee-d3a1117a8e1a" containerName="manila-db-sync" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.415396 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.420604 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.421024 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-kgpxq" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.421185 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.421600 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.441560 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.519725 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.521791 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.527919 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.536641 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55d8975557-4nltc"] Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.541587 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55d8975557-4nltc" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.564180 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/bde3fad2-f81f-4252-90a9-9084a164a3bd-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") " pod="openstack/manila-share-share1-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.564287 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/111abe99-1817-4a1d-9a2e-a5973664c8d2-config\") pod \"dnsmasq-dns-55d8975557-4nltc\" (UID: \"111abe99-1817-4a1d-9a2e-a5973664c8d2\") " pod="openstack/dnsmasq-dns-55d8975557-4nltc" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.564335 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szbjq\" (UniqueName: \"kubernetes.io/projected/111abe99-1817-4a1d-9a2e-a5973664c8d2-kube-api-access-szbjq\") pod \"dnsmasq-dns-55d8975557-4nltc\" (UID: \"111abe99-1817-4a1d-9a2e-a5973664c8d2\") " pod="openstack/dnsmasq-dns-55d8975557-4nltc" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.564372 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f34230-64eb-429d-82fe-e1e15a3f6dfd-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"68f34230-64eb-429d-82fe-e1e15a3f6dfd\") " pod="openstack/manila-scheduler-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.564394 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/bde3fad2-f81f-4252-90a9-9084a164a3bd-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") " pod="openstack/manila-share-share1-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.564491 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/111abe99-1817-4a1d-9a2e-a5973664c8d2-openstack-edpm-ipam\") pod \"dnsmasq-dns-55d8975557-4nltc\" (UID: \"111abe99-1817-4a1d-9a2e-a5973664c8d2\") " pod="openstack/dnsmasq-dns-55d8975557-4nltc" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.564585 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/111abe99-1817-4a1d-9a2e-a5973664c8d2-dns-svc\") pod \"dnsmasq-dns-55d8975557-4nltc\" (UID: \"111abe99-1817-4a1d-9a2e-a5973664c8d2\") " pod="openstack/dnsmasq-dns-55d8975557-4nltc" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.564630 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/68f34230-64eb-429d-82fe-e1e15a3f6dfd-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"68f34230-64eb-429d-82fe-e1e15a3f6dfd\") " pod="openstack/manila-scheduler-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.564733 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/111abe99-1817-4a1d-9a2e-a5973664c8d2-ovsdbserver-sb\") pod \"dnsmasq-dns-55d8975557-4nltc\" (UID: \"111abe99-1817-4a1d-9a2e-a5973664c8d2\") " pod="openstack/dnsmasq-dns-55d8975557-4nltc" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.564809 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f34230-64eb-429d-82fe-e1e15a3f6dfd-config-data\") pod \"manila-scheduler-0\" (UID: \"68f34230-64eb-429d-82fe-e1e15a3f6dfd\") " pod="openstack/manila-scheduler-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.564846 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/111abe99-1817-4a1d-9a2e-a5973664c8d2-ovsdbserver-nb\") pod \"dnsmasq-dns-55d8975557-4nltc\" (UID: \"111abe99-1817-4a1d-9a2e-a5973664c8d2\") " pod="openstack/dnsmasq-dns-55d8975557-4nltc" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.564866 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bde3fad2-f81f-4252-90a9-9084a164a3bd-ceph\") pod \"manila-share-share1-0\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") " pod="openstack/manila-share-share1-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.564899 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68f34230-64eb-429d-82fe-e1e15a3f6dfd-scripts\") pod \"manila-scheduler-0\" (UID: \"68f34230-64eb-429d-82fe-e1e15a3f6dfd\") " pod="openstack/manila-scheduler-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.564940 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bde3fad2-f81f-4252-90a9-9084a164a3bd-config-data\") pod \"manila-share-share1-0\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") " pod="openstack/manila-share-share1-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.564976 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bde3fad2-f81f-4252-90a9-9084a164a3bd-scripts\") pod \"manila-share-share1-0\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") " pod="openstack/manila-share-share1-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.565045 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68f34230-64eb-429d-82fe-e1e15a3f6dfd-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"68f34230-64eb-429d-82fe-e1e15a3f6dfd\") " pod="openstack/manila-scheduler-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.565067 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvzbr\" (UniqueName: \"kubernetes.io/projected/68f34230-64eb-429d-82fe-e1e15a3f6dfd-kube-api-access-jvzbr\") pod \"manila-scheduler-0\" (UID: \"68f34230-64eb-429d-82fe-e1e15a3f6dfd\") " pod="openstack/manila-scheduler-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.565118 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bde3fad2-f81f-4252-90a9-9084a164a3bd-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") " pod="openstack/manila-share-share1-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.565147 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcdwn\" (UniqueName: \"kubernetes.io/projected/bde3fad2-f81f-4252-90a9-9084a164a3bd-kube-api-access-vcdwn\") pod \"manila-share-share1-0\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") " pod="openstack/manila-share-share1-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.565173 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bde3fad2-f81f-4252-90a9-9084a164a3bd-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") " pod="openstack/manila-share-share1-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.565234 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.580544 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55d8975557-4nltc"] Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.666023 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bde3fad2-f81f-4252-90a9-9084a164a3bd-scripts\") pod \"manila-share-share1-0\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") " pod="openstack/manila-share-share1-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.666080 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvzbr\" (UniqueName: \"kubernetes.io/projected/68f34230-64eb-429d-82fe-e1e15a3f6dfd-kube-api-access-jvzbr\") pod \"manila-scheduler-0\" (UID: 
\"68f34230-64eb-429d-82fe-e1e15a3f6dfd\") " pod="openstack/manila-scheduler-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.666103 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68f34230-64eb-429d-82fe-e1e15a3f6dfd-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"68f34230-64eb-429d-82fe-e1e15a3f6dfd\") " pod="openstack/manila-scheduler-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.666124 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bde3fad2-f81f-4252-90a9-9084a164a3bd-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") " pod="openstack/manila-share-share1-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.666144 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcdwn\" (UniqueName: \"kubernetes.io/projected/bde3fad2-f81f-4252-90a9-9084a164a3bd-kube-api-access-vcdwn\") pod \"manila-share-share1-0\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") " pod="openstack/manila-share-share1-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.666168 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bde3fad2-f81f-4252-90a9-9084a164a3bd-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") " pod="openstack/manila-share-share1-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.666193 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/bde3fad2-f81f-4252-90a9-9084a164a3bd-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") " pod="openstack/manila-share-share1-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.666226 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/111abe99-1817-4a1d-9a2e-a5973664c8d2-config\") pod \"dnsmasq-dns-55d8975557-4nltc\" (UID: \"111abe99-1817-4a1d-9a2e-a5973664c8d2\") " pod="openstack/dnsmasq-dns-55d8975557-4nltc" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.666244 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szbjq\" (UniqueName: \"kubernetes.io/projected/111abe99-1817-4a1d-9a2e-a5973664c8d2-kube-api-access-szbjq\") pod \"dnsmasq-dns-55d8975557-4nltc\" (UID: \"111abe99-1817-4a1d-9a2e-a5973664c8d2\") " pod="openstack/dnsmasq-dns-55d8975557-4nltc" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.666263 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f34230-64eb-429d-82fe-e1e15a3f6dfd-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"68f34230-64eb-429d-82fe-e1e15a3f6dfd\") " pod="openstack/manila-scheduler-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.666284 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bde3fad2-f81f-4252-90a9-9084a164a3bd-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") " pod="openstack/manila-share-share1-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 
08:33:37.666320 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/111abe99-1817-4a1d-9a2e-a5973664c8d2-openstack-edpm-ipam\") pod \"dnsmasq-dns-55d8975557-4nltc\" (UID: \"111abe99-1817-4a1d-9a2e-a5973664c8d2\") " pod="openstack/dnsmasq-dns-55d8975557-4nltc" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.666378 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/111abe99-1817-4a1d-9a2e-a5973664c8d2-dns-svc\") pod \"dnsmasq-dns-55d8975557-4nltc\" (UID: \"111abe99-1817-4a1d-9a2e-a5973664c8d2\") " pod="openstack/dnsmasq-dns-55d8975557-4nltc" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.666400 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/68f34230-64eb-429d-82fe-e1e15a3f6dfd-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"68f34230-64eb-429d-82fe-e1e15a3f6dfd\") " pod="openstack/manila-scheduler-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.666437 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/111abe99-1817-4a1d-9a2e-a5973664c8d2-ovsdbserver-sb\") pod \"dnsmasq-dns-55d8975557-4nltc\" (UID: \"111abe99-1817-4a1d-9a2e-a5973664c8d2\") " pod="openstack/dnsmasq-dns-55d8975557-4nltc" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.666471 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f34230-64eb-429d-82fe-e1e15a3f6dfd-config-data\") pod \"manila-scheduler-0\" (UID: \"68f34230-64eb-429d-82fe-e1e15a3f6dfd\") " pod="openstack/manila-scheduler-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.666613 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/111abe99-1817-4a1d-9a2e-a5973664c8d2-ovsdbserver-nb\") pod \"dnsmasq-dns-55d8975557-4nltc\" (UID: \"111abe99-1817-4a1d-9a2e-a5973664c8d2\") " pod="openstack/dnsmasq-dns-55d8975557-4nltc" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.666629 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bde3fad2-f81f-4252-90a9-9084a164a3bd-ceph\") pod \"manila-share-share1-0\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") " pod="openstack/manila-share-share1-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.666662 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68f34230-64eb-429d-82fe-e1e15a3f6dfd-scripts\") pod \"manila-scheduler-0\" (UID: \"68f34230-64eb-429d-82fe-e1e15a3f6dfd\") " pod="openstack/manila-scheduler-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.666686 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde3fad2-f81f-4252-90a9-9084a164a3bd-config-data\") pod \"manila-share-share1-0\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") " pod="openstack/manila-share-share1-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.667083 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/bde3fad2-f81f-4252-90a9-9084a164a3bd-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") " pod="openstack/manila-share-share1-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.668042 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/111abe99-1817-4a1d-9a2e-a5973664c8d2-ovsdbserver-sb\") pod \"dnsmasq-dns-55d8975557-4nltc\" (UID: \"111abe99-1817-4a1d-9a2e-a5973664c8d2\") " pod="openstack/dnsmasq-dns-55d8975557-4nltc" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.668626 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/111abe99-1817-4a1d-9a2e-a5973664c8d2-openstack-edpm-ipam\") pod \"dnsmasq-dns-55d8975557-4nltc\" (UID: \"111abe99-1817-4a1d-9a2e-a5973664c8d2\") " pod="openstack/dnsmasq-dns-55d8975557-4nltc" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.668837 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/bde3fad2-f81f-4252-90a9-9084a164a3bd-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") " pod="openstack/manila-share-share1-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.669181 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/68f34230-64eb-429d-82fe-e1e15a3f6dfd-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"68f34230-64eb-429d-82fe-e1e15a3f6dfd\") " pod="openstack/manila-scheduler-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.669566 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/111abe99-1817-4a1d-9a2e-a5973664c8d2-config\") pod \"dnsmasq-dns-55d8975557-4nltc\" (UID: \"111abe99-1817-4a1d-9a2e-a5973664c8d2\") " pod="openstack/dnsmasq-dns-55d8975557-4nltc" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.670141 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/111abe99-1817-4a1d-9a2e-a5973664c8d2-ovsdbserver-nb\") pod \"dnsmasq-dns-55d8975557-4nltc\" (UID: \"111abe99-1817-4a1d-9a2e-a5973664c8d2\") " pod="openstack/dnsmasq-dns-55d8975557-4nltc" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.671949 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/111abe99-1817-4a1d-9a2e-a5973664c8d2-dns-svc\") pod \"dnsmasq-dns-55d8975557-4nltc\" (UID: \"111abe99-1817-4a1d-9a2e-a5973664c8d2\") " pod="openstack/dnsmasq-dns-55d8975557-4nltc" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.674216 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bde3fad2-f81f-4252-90a9-9084a164a3bd-ceph\") pod \"manila-share-share1-0\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") " pod="openstack/manila-share-share1-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.676491 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68f34230-64eb-429d-82fe-e1e15a3f6dfd-scripts\") pod \"manila-scheduler-0\" (UID: \"68f34230-64eb-429d-82fe-e1e15a3f6dfd\") " pod="openstack/manila-scheduler-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 
08:33:37.677271 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde3fad2-f81f-4252-90a9-9084a164a3bd-config-data\") pod \"manila-share-share1-0\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") " pod="openstack/manila-share-share1-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.677873 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bde3fad2-f81f-4252-90a9-9084a164a3bd-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") " pod="openstack/manila-share-share1-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.678244 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bde3fad2-f81f-4252-90a9-9084a164a3bd-scripts\") pod \"manila-share-share1-0\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") " pod="openstack/manila-share-share1-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.678450 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bde3fad2-f81f-4252-90a9-9084a164a3bd-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") " pod="openstack/manila-share-share1-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.678891 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f34230-64eb-429d-82fe-e1e15a3f6dfd-config-data\") pod \"manila-scheduler-0\" (UID: \"68f34230-64eb-429d-82fe-e1e15a3f6dfd\") " pod="openstack/manila-scheduler-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.681198 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68f34230-64eb-429d-82fe-e1e15a3f6dfd-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"68f34230-64eb-429d-82fe-e1e15a3f6dfd\") " pod="openstack/manila-scheduler-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.690392 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f34230-64eb-429d-82fe-e1e15a3f6dfd-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"68f34230-64eb-429d-82fe-e1e15a3f6dfd\") " pod="openstack/manila-scheduler-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.695163 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvzbr\" (UniqueName: \"kubernetes.io/projected/68f34230-64eb-429d-82fe-e1e15a3f6dfd-kube-api-access-jvzbr\") pod \"manila-scheduler-0\" (UID: \"68f34230-64eb-429d-82fe-e1e15a3f6dfd\") " pod="openstack/manila-scheduler-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.699811 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcdwn\" (UniqueName: \"kubernetes.io/projected/bde3fad2-f81f-4252-90a9-9084a164a3bd-kube-api-access-vcdwn\") pod \"manila-share-share1-0\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") " pod="openstack/manila-share-share1-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.700217 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szbjq\" (UniqueName: \"kubernetes.io/projected/111abe99-1817-4a1d-9a2e-a5973664c8d2-kube-api-access-szbjq\") pod \"dnsmasq-dns-55d8975557-4nltc\" (UID: 
\"111abe99-1817-4a1d-9a2e-a5973664c8d2\") " pod="openstack/dnsmasq-dns-55d8975557-4nltc" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.751641 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.820158 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.822413 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.826354 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.836027 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.866266 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.885221 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55d8975557-4nltc" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.978837 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75a2f5be-7ae8-4936-add8-c606e1fbab2d-etc-machine-id\") pod \"manila-api-0\" (UID: \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\") " pod="openstack/manila-api-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.979349 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a2f5be-7ae8-4936-add8-c606e1fbab2d-config-data\") pod \"manila-api-0\" (UID: \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\") " pod="openstack/manila-api-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.979446 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a2f5be-7ae8-4936-add8-c606e1fbab2d-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\") " pod="openstack/manila-api-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.979485 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75a2f5be-7ae8-4936-add8-c606e1fbab2d-scripts\") pod \"manila-api-0\" (UID: \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\") " pod="openstack/manila-api-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.979516 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wt56\" (UniqueName: \"kubernetes.io/projected/75a2f5be-7ae8-4936-add8-c606e1fbab2d-kube-api-access-8wt56\") pod \"manila-api-0\" (UID: \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\") " pod="openstack/manila-api-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 08:33:37.979686 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/75a2f5be-7ae8-4936-add8-c606e1fbab2d-config-data-custom\") pod \"manila-api-0\" (UID: \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\") " pod="openstack/manila-api-0" Oct 11 08:33:37 crc kubenswrapper[5016]: I1011 
08:33:37.980026 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75a2f5be-7ae8-4936-add8-c606e1fbab2d-logs\") pod \"manila-api-0\" (UID: \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\") " pod="openstack/manila-api-0" Oct 11 08:33:38 crc kubenswrapper[5016]: I1011 08:33:38.082761 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75a2f5be-7ae8-4936-add8-c606e1fbab2d-logs\") pod \"manila-api-0\" (UID: \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\") " pod="openstack/manila-api-0" Oct 11 08:33:38 crc kubenswrapper[5016]: I1011 08:33:38.082852 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75a2f5be-7ae8-4936-add8-c606e1fbab2d-etc-machine-id\") pod \"manila-api-0\" (UID: \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\") " pod="openstack/manila-api-0" Oct 11 08:33:38 crc kubenswrapper[5016]: I1011 08:33:38.082893 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a2f5be-7ae8-4936-add8-c606e1fbab2d-config-data\") pod \"manila-api-0\" (UID: \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\") " pod="openstack/manila-api-0" Oct 11 08:33:38 crc kubenswrapper[5016]: I1011 08:33:38.082957 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a2f5be-7ae8-4936-add8-c606e1fbab2d-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\") " pod="openstack/manila-api-0" Oct 11 08:33:38 crc kubenswrapper[5016]: I1011 08:33:38.083010 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75a2f5be-7ae8-4936-add8-c606e1fbab2d-scripts\") pod \"manila-api-0\" (UID: \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\") " pod="openstack/manila-api-0" Oct 11 08:33:38 crc kubenswrapper[5016]: I1011 08:33:38.083035 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wt56\" (UniqueName: \"kubernetes.io/projected/75a2f5be-7ae8-4936-add8-c606e1fbab2d-kube-api-access-8wt56\") pod \"manila-api-0\" (UID: \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\") " pod="openstack/manila-api-0" Oct 11 08:33:38 crc kubenswrapper[5016]: I1011 08:33:38.083060 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/75a2f5be-7ae8-4936-add8-c606e1fbab2d-config-data-custom\") pod \"manila-api-0\" (UID: \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\") " pod="openstack/manila-api-0" Oct 11 08:33:38 crc kubenswrapper[5016]: I1011 08:33:38.083444 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75a2f5be-7ae8-4936-add8-c606e1fbab2d-logs\") pod \"manila-api-0\" (UID: \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\") " pod="openstack/manila-api-0" Oct 11 08:33:38 crc kubenswrapper[5016]: I1011 08:33:38.084467 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75a2f5be-7ae8-4936-add8-c606e1fbab2d-etc-machine-id\") pod \"manila-api-0\" (UID: \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\") " pod="openstack/manila-api-0" Oct 11 08:33:38 crc kubenswrapper[5016]: I1011 08:33:38.088811 5016 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75a2f5be-7ae8-4936-add8-c606e1fbab2d-scripts\") pod \"manila-api-0\" (UID: \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\") " pod="openstack/manila-api-0" Oct 11 08:33:38 crc kubenswrapper[5016]: I1011 08:33:38.092263 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/75a2f5be-7ae8-4936-add8-c606e1fbab2d-config-data-custom\") pod \"manila-api-0\" (UID: \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\") " pod="openstack/manila-api-0" Oct 11 08:33:38 crc kubenswrapper[5016]: I1011 08:33:38.093509 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a2f5be-7ae8-4936-add8-c606e1fbab2d-config-data\") pod \"manila-api-0\" (UID: \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\") " pod="openstack/manila-api-0" Oct 11 08:33:38 crc kubenswrapper[5016]: I1011 08:33:38.095504 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a2f5be-7ae8-4936-add8-c606e1fbab2d-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\") " pod="openstack/manila-api-0" Oct 11 08:33:38 crc kubenswrapper[5016]: I1011 08:33:38.105329 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wt56\" (UniqueName: \"kubernetes.io/projected/75a2f5be-7ae8-4936-add8-c606e1fbab2d-kube-api-access-8wt56\") pod \"manila-api-0\" (UID: \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\") " pod="openstack/manila-api-0" Oct 11 08:33:38 crc kubenswrapper[5016]: I1011 08:33:38.312706 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Oct 11 08:33:38 crc kubenswrapper[5016]: I1011 08:33:38.411852 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Oct 11 08:33:38 crc kubenswrapper[5016]: W1011 08:33:38.414857 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68f34230_64eb_429d_82fe_e1e15a3f6dfd.slice/crio-29bf16b3c790cc189da3570be79424feff36c689197853c4b083ab9e1bcb2a23 WatchSource:0}: Error finding container 29bf16b3c790cc189da3570be79424feff36c689197853c4b083ab9e1bcb2a23: Status 404 returned error can't find the container with id 29bf16b3c790cc189da3570be79424feff36c689197853c4b083ab9e1bcb2a23 Oct 11 08:33:38 crc kubenswrapper[5016]: I1011 08:33:38.418692 5016 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 08:33:38 crc kubenswrapper[5016]: I1011 08:33:38.530073 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55d8975557-4nltc"] Oct 11 08:33:38 crc kubenswrapper[5016]: I1011 08:33:38.636785 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Oct 11 08:33:38 crc kubenswrapper[5016]: W1011 08:33:38.639503 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbde3fad2_f81f_4252_90a9_9084a164a3bd.slice/crio-cc351c3f715d5e9fd16fb1a3ac9f62a0c8fcc5564b49be7dc5889669a692142f WatchSource:0}: Error finding container cc351c3f715d5e9fd16fb1a3ac9f62a0c8fcc5564b49be7dc5889669a692142f: Status 404 returned error can't find the container with id cc351c3f715d5e9fd16fb1a3ac9f62a0c8fcc5564b49be7dc5889669a692142f Oct 11 08:33:38 crc kubenswrapper[5016]: I1011 08:33:38.918797 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"bde3fad2-f81f-4252-90a9-9084a164a3bd","Type":"ContainerStarted","Data":"cc351c3f715d5e9fd16fb1a3ac9f62a0c8fcc5564b49be7dc5889669a692142f"} Oct 11 08:33:38 crc kubenswrapper[5016]: I1011 08:33:38.924417 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"68f34230-64eb-429d-82fe-e1e15a3f6dfd","Type":"ContainerStarted","Data":"29bf16b3c790cc189da3570be79424feff36c689197853c4b083ab9e1bcb2a23"} Oct 11 08:33:38 crc kubenswrapper[5016]: I1011 08:33:38.927448 5016 generic.go:334] "Generic (PLEG): container finished" podID="111abe99-1817-4a1d-9a2e-a5973664c8d2" containerID="bc90b097e0d99cd82bbf00f28e5a1b2fa1f61da9cfbc70ccf2843630ce7027c2" exitCode=0 Oct 11 08:33:38 crc kubenswrapper[5016]: I1011 08:33:38.927514 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55d8975557-4nltc" event={"ID":"111abe99-1817-4a1d-9a2e-a5973664c8d2","Type":"ContainerDied","Data":"bc90b097e0d99cd82bbf00f28e5a1b2fa1f61da9cfbc70ccf2843630ce7027c2"} Oct 11 08:33:38 crc kubenswrapper[5016]: I1011 08:33:38.927554 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55d8975557-4nltc" event={"ID":"111abe99-1817-4a1d-9a2e-a5973664c8d2","Type":"ContainerStarted","Data":"491342ff9484ce4008c3f41ec954d749d9e22c315c640bd9de07005f93188b2e"} Oct 11 08:33:39 crc kubenswrapper[5016]: I1011 08:33:39.032062 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Oct 11 08:33:39 crc kubenswrapper[5016]: W1011 08:33:39.153380 5016 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75a2f5be_7ae8_4936_add8_c606e1fbab2d.slice/crio-4c96909449035fd2ac193adfb2a77e798797bd90ed2af94668718c9f7bd01a96 WatchSource:0}: Error finding container 4c96909449035fd2ac193adfb2a77e798797bd90ed2af94668718c9f7bd01a96: Status 404 returned error can't find the container with id 4c96909449035fd2ac193adfb2a77e798797bd90ed2af94668718c9f7bd01a96 Oct 11 08:33:39 crc kubenswrapper[5016]: I1011 08:33:39.947727 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"68f34230-64eb-429d-82fe-e1e15a3f6dfd","Type":"ContainerStarted","Data":"4cfb6fdd065b8f88a657cf8387faac2942b2aff16400c75ebbe895f7365ac48a"} Oct 11 08:33:39 crc kubenswrapper[5016]: I1011 08:33:39.951011 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"75a2f5be-7ae8-4936-add8-c606e1fbab2d","Type":"ContainerStarted","Data":"04084b2247fa2f101ab5211867b4273783572d1fcf3e9c0a335418cde50178ba"} Oct 11 08:33:39 crc kubenswrapper[5016]: I1011 08:33:39.951050 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"75a2f5be-7ae8-4936-add8-c606e1fbab2d","Type":"ContainerStarted","Data":"4c96909449035fd2ac193adfb2a77e798797bd90ed2af94668718c9f7bd01a96"} Oct 11 08:33:39 crc kubenswrapper[5016]: I1011 08:33:39.969650 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55d8975557-4nltc" event={"ID":"111abe99-1817-4a1d-9a2e-a5973664c8d2","Type":"ContainerStarted","Data":"a5a1f091da82f6c9ee98edd67eca9fec1eb77267d6a7782f1b9608eb05b3feb6"} Oct 11 08:33:39 crc kubenswrapper[5016]: I1011 08:33:39.971419 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55d8975557-4nltc" Oct 11 08:33:39 crc kubenswrapper[5016]: I1011 08:33:39.998686 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55d8975557-4nltc" podStartSLOduration=2.998646113 podStartE2EDuration="2.998646113s" podCreationTimestamp="2025-10-11 08:33:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 08:33:39.997209965 +0000 UTC m=+3207.897665911" watchObservedRunningTime="2025-10-11 08:33:39.998646113 +0000 UTC m=+3207.899102059" Oct 11 08:33:40 crc kubenswrapper[5016]: I1011 08:33:40.498024 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Oct 11 08:33:41 crc kubenswrapper[5016]: I1011 08:33:41.056502 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"68f34230-64eb-429d-82fe-e1e15a3f6dfd","Type":"ContainerStarted","Data":"38421d790aa1c7307a120598b87dcd146deb3b0f8ab3fc3334d364d245b6e89b"} Oct 11 08:33:41 crc kubenswrapper[5016]: I1011 08:33:41.060403 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"75a2f5be-7ae8-4936-add8-c606e1fbab2d","Type":"ContainerStarted","Data":"6ef21b6a016fefb6937a9144a41a032ea32d077a760e06d5084b6bf93d41ba57"} Oct 11 08:33:41 crc kubenswrapper[5016]: I1011 08:33:41.061030 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="75a2f5be-7ae8-4936-add8-c606e1fbab2d" containerName="manila-api" containerID="cri-o://6ef21b6a016fefb6937a9144a41a032ea32d077a760e06d5084b6bf93d41ba57" gracePeriod=30 Oct 11 08:33:41 crc kubenswrapper[5016]: I1011 08:33:41.060998 5016 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="75a2f5be-7ae8-4936-add8-c606e1fbab2d" containerName="manila-api-log" containerID="cri-o://04084b2247fa2f101ab5211867b4273783572d1fcf3e9c0a335418cde50178ba" gracePeriod=30 Oct 11 08:33:41 crc kubenswrapper[5016]: I1011 08:33:41.090712 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.255426222 podStartE2EDuration="4.090687101s" podCreationTimestamp="2025-10-11 08:33:37 +0000 UTC" firstStartedPulling="2025-10-11 08:33:38.418440477 +0000 UTC m=+3206.318896423" lastFinishedPulling="2025-10-11 08:33:39.253701356 +0000 UTC m=+3207.154157302" observedRunningTime="2025-10-11 08:33:41.077250696 +0000 UTC m=+3208.977706662" watchObservedRunningTime="2025-10-11 08:33:41.090687101 +0000 UTC m=+3208.991143057" Oct 11 08:33:41 crc kubenswrapper[5016]: I1011 08:33:41.134239 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.133819584 podStartE2EDuration="4.133819584s" podCreationTimestamp="2025-10-11 08:33:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 08:33:41.102999088 +0000 UTC m=+3209.003455034" watchObservedRunningTime="2025-10-11 08:33:41.133819584 +0000 UTC m=+3209.034275530" Oct 11 08:33:41 crc kubenswrapper[5016]: I1011 08:33:41.779686 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Oct 11 08:33:41 crc kubenswrapper[5016]: I1011 08:33:41.894353 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/75a2f5be-7ae8-4936-add8-c606e1fbab2d-config-data-custom\") pod \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\" (UID: \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\") " Oct 11 08:33:41 crc kubenswrapper[5016]: I1011 08:33:41.894429 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a2f5be-7ae8-4936-add8-c606e1fbab2d-combined-ca-bundle\") pod \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\" (UID: \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\") " Oct 11 08:33:41 crc kubenswrapper[5016]: I1011 08:33:41.895142 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75a2f5be-7ae8-4936-add8-c606e1fbab2d-logs" (OuterVolumeSpecName: "logs") pod "75a2f5be-7ae8-4936-add8-c606e1fbab2d" (UID: "75a2f5be-7ae8-4936-add8-c606e1fbab2d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:33:41 crc kubenswrapper[5016]: I1011 08:33:41.894638 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75a2f5be-7ae8-4936-add8-c606e1fbab2d-logs\") pod \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\" (UID: \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\") " Oct 11 08:33:41 crc kubenswrapper[5016]: I1011 08:33:41.895218 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a2f5be-7ae8-4936-add8-c606e1fbab2d-config-data\") pod \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\" (UID: \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\") " Oct 11 08:33:41 crc kubenswrapper[5016]: I1011 08:33:41.895629 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wt56\" (UniqueName: \"kubernetes.io/projected/75a2f5be-7ae8-4936-add8-c606e1fbab2d-kube-api-access-8wt56\") pod \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\" (UID: \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\") " Oct 11 08:33:41 crc kubenswrapper[5016]: I1011 08:33:41.895713 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75a2f5be-7ae8-4936-add8-c606e1fbab2d-etc-machine-id\") pod \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\" (UID: \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\") " Oct 11 08:33:41 crc kubenswrapper[5016]: I1011 08:33:41.895759 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75a2f5be-7ae8-4936-add8-c606e1fbab2d-scripts\") pod \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\" (UID: \"75a2f5be-7ae8-4936-add8-c606e1fbab2d\") " Oct 11 08:33:41 crc kubenswrapper[5016]: I1011 08:33:41.896415 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75a2f5be-7ae8-4936-add8-c606e1fbab2d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "75a2f5be-7ae8-4936-add8-c606e1fbab2d" (UID: "75a2f5be-7ae8-4936-add8-c606e1fbab2d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 08:33:41 crc kubenswrapper[5016]: I1011 08:33:41.896955 5016 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/75a2f5be-7ae8-4936-add8-c606e1fbab2d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:41 crc kubenswrapper[5016]: I1011 08:33:41.896982 5016 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75a2f5be-7ae8-4936-add8-c606e1fbab2d-logs\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:41 crc kubenswrapper[5016]: I1011 08:33:41.905710 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a2f5be-7ae8-4936-add8-c606e1fbab2d-scripts" (OuterVolumeSpecName: "scripts") pod "75a2f5be-7ae8-4936-add8-c606e1fbab2d" (UID: "75a2f5be-7ae8-4936-add8-c606e1fbab2d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:33:41 crc kubenswrapper[5016]: I1011 08:33:41.906406 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a2f5be-7ae8-4936-add8-c606e1fbab2d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "75a2f5be-7ae8-4936-add8-c606e1fbab2d" (UID: "75a2f5be-7ae8-4936-add8-c606e1fbab2d"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:33:41 crc kubenswrapper[5016]: I1011 08:33:41.906843 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75a2f5be-7ae8-4936-add8-c606e1fbab2d-kube-api-access-8wt56" (OuterVolumeSpecName: "kube-api-access-8wt56") pod "75a2f5be-7ae8-4936-add8-c606e1fbab2d" (UID: "75a2f5be-7ae8-4936-add8-c606e1fbab2d"). InnerVolumeSpecName "kube-api-access-8wt56". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:33:41 crc kubenswrapper[5016]: I1011 08:33:41.936899 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a2f5be-7ae8-4936-add8-c606e1fbab2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75a2f5be-7ae8-4936-add8-c606e1fbab2d" (UID: "75a2f5be-7ae8-4936-add8-c606e1fbab2d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:33:41 crc kubenswrapper[5016]: I1011 08:33:41.964859 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a2f5be-7ae8-4936-add8-c606e1fbab2d-config-data" (OuterVolumeSpecName: "config-data") pod "75a2f5be-7ae8-4936-add8-c606e1fbab2d" (UID: "75a2f5be-7ae8-4936-add8-c606e1fbab2d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:41.999717 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75a2f5be-7ae8-4936-add8-c606e1fbab2d-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.000189 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wt56\" (UniqueName: \"kubernetes.io/projected/75a2f5be-7ae8-4936-add8-c606e1fbab2d-kube-api-access-8wt56\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.000204 5016 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75a2f5be-7ae8-4936-add8-c606e1fbab2d-scripts\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.000216 5016 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/75a2f5be-7ae8-4936-add8-c606e1fbab2d-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.000225 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a2f5be-7ae8-4936-add8-c606e1fbab2d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.076598 5016 generic.go:334] "Generic (PLEG): container finished" podID="75a2f5be-7ae8-4936-add8-c606e1fbab2d" containerID="6ef21b6a016fefb6937a9144a41a032ea32d077a760e06d5084b6bf93d41ba57" exitCode=0 Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.076643 5016 generic.go:334] "Generic (PLEG): container finished" podID="75a2f5be-7ae8-4936-add8-c606e1fbab2d" containerID="04084b2247fa2f101ab5211867b4273783572d1fcf3e9c0a335418cde50178ba" exitCode=143 Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.076710 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.076715 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"75a2f5be-7ae8-4936-add8-c606e1fbab2d","Type":"ContainerDied","Data":"6ef21b6a016fefb6937a9144a41a032ea32d077a760e06d5084b6bf93d41ba57"} Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.076895 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"75a2f5be-7ae8-4936-add8-c606e1fbab2d","Type":"ContainerDied","Data":"04084b2247fa2f101ab5211867b4273783572d1fcf3e9c0a335418cde50178ba"} Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.076919 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"75a2f5be-7ae8-4936-add8-c606e1fbab2d","Type":"ContainerDied","Data":"4c96909449035fd2ac193adfb2a77e798797bd90ed2af94668718c9f7bd01a96"} Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.076940 5016 scope.go:117] "RemoveContainer" containerID="6ef21b6a016fefb6937a9144a41a032ea32d077a760e06d5084b6bf93d41ba57" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.137199 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.156286 5016 scope.go:117] "RemoveContainer" containerID="04084b2247fa2f101ab5211867b4273783572d1fcf3e9c0a335418cde50178ba" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.161119 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.172753 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Oct 11 08:33:42 crc kubenswrapper[5016]: E1011 08:33:42.173489 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a2f5be-7ae8-4936-add8-c606e1fbab2d" containerName="manila-api-log" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.173516 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a2f5be-7ae8-4936-add8-c606e1fbab2d" containerName="manila-api-log" Oct 11 08:33:42 crc kubenswrapper[5016]: E1011 08:33:42.173543 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a2f5be-7ae8-4936-add8-c606e1fbab2d" containerName="manila-api" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.173552 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a2f5be-7ae8-4936-add8-c606e1fbab2d" containerName="manila-api" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.173829 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="75a2f5be-7ae8-4936-add8-c606e1fbab2d" containerName="manila-api-log" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.173856 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="75a2f5be-7ae8-4936-add8-c606e1fbab2d" containerName="manila-api" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.175489 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.178887 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.184279 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.184902 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.185033 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.208736 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d897f62c-8566-4445-8061-d77ce1ac2cd5-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"d897f62c-8566-4445-8061-d77ce1ac2cd5\") " pod="openstack/manila-api-0" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.208813 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d897f62c-8566-4445-8061-d77ce1ac2cd5-internal-tls-certs\") pod \"manila-api-0\" (UID: \"d897f62c-8566-4445-8061-d77ce1ac2cd5\") " pod="openstack/manila-api-0" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.208866 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d897f62c-8566-4445-8061-d77ce1ac2cd5-public-tls-certs\") pod \"manila-api-0\" (UID: \"d897f62c-8566-4445-8061-d77ce1ac2cd5\") " pod="openstack/manila-api-0" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.208929 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d897f62c-8566-4445-8061-d77ce1ac2cd5-config-data-custom\") pod \"manila-api-0\" (UID: \"d897f62c-8566-4445-8061-d77ce1ac2cd5\") " pod="openstack/manila-api-0" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.208952 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d897f62c-8566-4445-8061-d77ce1ac2cd5-config-data\") pod \"manila-api-0\" (UID: \"d897f62c-8566-4445-8061-d77ce1ac2cd5\") " pod="openstack/manila-api-0" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.209000 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfj9f\" (UniqueName: \"kubernetes.io/projected/d897f62c-8566-4445-8061-d77ce1ac2cd5-kube-api-access-kfj9f\") pod \"manila-api-0\" (UID: \"d897f62c-8566-4445-8061-d77ce1ac2cd5\") " pod="openstack/manila-api-0" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.209028 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d897f62c-8566-4445-8061-d77ce1ac2cd5-logs\") pod \"manila-api-0\" (UID: \"d897f62c-8566-4445-8061-d77ce1ac2cd5\") " pod="openstack/manila-api-0" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.209070 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d897f62c-8566-4445-8061-d77ce1ac2cd5-scripts\") pod \"manila-api-0\" (UID: \"d897f62c-8566-4445-8061-d77ce1ac2cd5\") " pod="openstack/manila-api-0" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.209096 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d897f62c-8566-4445-8061-d77ce1ac2cd5-etc-machine-id\") pod \"manila-api-0\" (UID: \"d897f62c-8566-4445-8061-d77ce1ac2cd5\") " pod="openstack/manila-api-0" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.251494 5016 scope.go:117] "RemoveContainer" containerID="6ef21b6a016fefb6937a9144a41a032ea32d077a760e06d5084b6bf93d41ba57" Oct 11 08:33:42 crc kubenswrapper[5016]: E1011 08:33:42.253118 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ef21b6a016fefb6937a9144a41a032ea32d077a760e06d5084b6bf93d41ba57\": container with ID starting with 6ef21b6a016fefb6937a9144a41a032ea32d077a760e06d5084b6bf93d41ba57 not found: ID does not exist" containerID="6ef21b6a016fefb6937a9144a41a032ea32d077a760e06d5084b6bf93d41ba57" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.253146 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ef21b6a016fefb6937a9144a41a032ea32d077a760e06d5084b6bf93d41ba57"} err="failed to get container status \"6ef21b6a016fefb6937a9144a41a032ea32d077a760e06d5084b6bf93d41ba57\": rpc error: code = NotFound desc = could not find container \"6ef21b6a016fefb6937a9144a41a032ea32d077a760e06d5084b6bf93d41ba57\": container with ID starting with 6ef21b6a016fefb6937a9144a41a032ea32d077a760e06d5084b6bf93d41ba57 not found: ID does not exist" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.253169 5016 scope.go:117] "RemoveContainer" containerID="04084b2247fa2f101ab5211867b4273783572d1fcf3e9c0a335418cde50178ba" Oct 11 08:33:42 crc kubenswrapper[5016]: E1011 08:33:42.253720 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04084b2247fa2f101ab5211867b4273783572d1fcf3e9c0a335418cde50178ba\": container with ID starting with 04084b2247fa2f101ab5211867b4273783572d1fcf3e9c0a335418cde50178ba not found: ID does not exist" containerID="04084b2247fa2f101ab5211867b4273783572d1fcf3e9c0a335418cde50178ba" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.253742 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04084b2247fa2f101ab5211867b4273783572d1fcf3e9c0a335418cde50178ba"} err="failed to get container status \"04084b2247fa2f101ab5211867b4273783572d1fcf3e9c0a335418cde50178ba\": rpc error: code = NotFound desc = could not find container \"04084b2247fa2f101ab5211867b4273783572d1fcf3e9c0a335418cde50178ba\": container with ID starting with 04084b2247fa2f101ab5211867b4273783572d1fcf3e9c0a335418cde50178ba not found: ID does not exist" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.253759 5016 scope.go:117] "RemoveContainer" containerID="6ef21b6a016fefb6937a9144a41a032ea32d077a760e06d5084b6bf93d41ba57" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.254018 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ef21b6a016fefb6937a9144a41a032ea32d077a760e06d5084b6bf93d41ba57"} err="failed to get container status \"6ef21b6a016fefb6937a9144a41a032ea32d077a760e06d5084b6bf93d41ba57\": rpc error: code = 
NotFound desc = could not find container \"6ef21b6a016fefb6937a9144a41a032ea32d077a760e06d5084b6bf93d41ba57\": container with ID starting with 6ef21b6a016fefb6937a9144a41a032ea32d077a760e06d5084b6bf93d41ba57 not found: ID does not exist" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.254033 5016 scope.go:117] "RemoveContainer" containerID="04084b2247fa2f101ab5211867b4273783572d1fcf3e9c0a335418cde50178ba" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.254500 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04084b2247fa2f101ab5211867b4273783572d1fcf3e9c0a335418cde50178ba"} err="failed to get container status \"04084b2247fa2f101ab5211867b4273783572d1fcf3e9c0a335418cde50178ba\": rpc error: code = NotFound desc = could not find container \"04084b2247fa2f101ab5211867b4273783572d1fcf3e9c0a335418cde50178ba\": container with ID starting with 04084b2247fa2f101ab5211867b4273783572d1fcf3e9c0a335418cde50178ba not found: ID does not exist" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.312028 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d897f62c-8566-4445-8061-d77ce1ac2cd5-public-tls-certs\") pod \"manila-api-0\" (UID: \"d897f62c-8566-4445-8061-d77ce1ac2cd5\") " pod="openstack/manila-api-0" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.312158 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d897f62c-8566-4445-8061-d77ce1ac2cd5-config-data-custom\") pod \"manila-api-0\" (UID: \"d897f62c-8566-4445-8061-d77ce1ac2cd5\") " pod="openstack/manila-api-0" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.312188 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d897f62c-8566-4445-8061-d77ce1ac2cd5-config-data\") pod \"manila-api-0\" (UID: \"d897f62c-8566-4445-8061-d77ce1ac2cd5\") " pod="openstack/manila-api-0" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.312227 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfj9f\" (UniqueName: \"kubernetes.io/projected/d897f62c-8566-4445-8061-d77ce1ac2cd5-kube-api-access-kfj9f\") pod \"manila-api-0\" (UID: \"d897f62c-8566-4445-8061-d77ce1ac2cd5\") " pod="openstack/manila-api-0" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.312266 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d897f62c-8566-4445-8061-d77ce1ac2cd5-logs\") pod \"manila-api-0\" (UID: \"d897f62c-8566-4445-8061-d77ce1ac2cd5\") " pod="openstack/manila-api-0" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.312300 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d897f62c-8566-4445-8061-d77ce1ac2cd5-scripts\") pod \"manila-api-0\" (UID: \"d897f62c-8566-4445-8061-d77ce1ac2cd5\") " pod="openstack/manila-api-0" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.312329 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d897f62c-8566-4445-8061-d77ce1ac2cd5-etc-machine-id\") pod \"manila-api-0\" (UID: \"d897f62c-8566-4445-8061-d77ce1ac2cd5\") " pod="openstack/manila-api-0" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.312359 5016 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d897f62c-8566-4445-8061-d77ce1ac2cd5-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"d897f62c-8566-4445-8061-d77ce1ac2cd5\") " pod="openstack/manila-api-0" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.312401 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d897f62c-8566-4445-8061-d77ce1ac2cd5-internal-tls-certs\") pod \"manila-api-0\" (UID: \"d897f62c-8566-4445-8061-d77ce1ac2cd5\") " pod="openstack/manila-api-0" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.313272 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d897f62c-8566-4445-8061-d77ce1ac2cd5-logs\") pod \"manila-api-0\" (UID: \"d897f62c-8566-4445-8061-d77ce1ac2cd5\") " pod="openstack/manila-api-0" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.313512 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d897f62c-8566-4445-8061-d77ce1ac2cd5-etc-machine-id\") pod \"manila-api-0\" (UID: \"d897f62c-8566-4445-8061-d77ce1ac2cd5\") " pod="openstack/manila-api-0" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.317409 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d897f62c-8566-4445-8061-d77ce1ac2cd5-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"d897f62c-8566-4445-8061-d77ce1ac2cd5\") " pod="openstack/manila-api-0" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.318388 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d897f62c-8566-4445-8061-d77ce1ac2cd5-scripts\") pod \"manila-api-0\" (UID: \"d897f62c-8566-4445-8061-d77ce1ac2cd5\") " pod="openstack/manila-api-0" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.319005 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d897f62c-8566-4445-8061-d77ce1ac2cd5-config-data\") pod \"manila-api-0\" (UID: \"d897f62c-8566-4445-8061-d77ce1ac2cd5\") " pod="openstack/manila-api-0" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.319806 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d897f62c-8566-4445-8061-d77ce1ac2cd5-config-data-custom\") pod \"manila-api-0\" (UID: \"d897f62c-8566-4445-8061-d77ce1ac2cd5\") " pod="openstack/manila-api-0" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.320371 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d897f62c-8566-4445-8061-d77ce1ac2cd5-public-tls-certs\") pod \"manila-api-0\" (UID: \"d897f62c-8566-4445-8061-d77ce1ac2cd5\") " pod="openstack/manila-api-0" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.321193 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d897f62c-8566-4445-8061-d77ce1ac2cd5-internal-tls-certs\") pod \"manila-api-0\" (UID: \"d897f62c-8566-4445-8061-d77ce1ac2cd5\") " pod="openstack/manila-api-0" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.338364 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kfj9f\" (UniqueName: \"kubernetes.io/projected/d897f62c-8566-4445-8061-d77ce1ac2cd5-kube-api-access-kfj9f\") pod \"manila-api-0\" (UID: \"d897f62c-8566-4445-8061-d77ce1ac2cd5\") " pod="openstack/manila-api-0" Oct 11 08:33:42 crc kubenswrapper[5016]: I1011 08:33:42.549402 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Oct 11 08:33:43 crc kubenswrapper[5016]: I1011 08:33:43.148435 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75a2f5be-7ae8-4936-add8-c606e1fbab2d" path="/var/lib/kubelet/pods/75a2f5be-7ae8-4936-add8-c606e1fbab2d/volumes" Oct 11 08:33:43 crc kubenswrapper[5016]: I1011 08:33:43.211096 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Oct 11 08:33:43 crc kubenswrapper[5016]: W1011 08:33:43.212619 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd897f62c_8566_4445_8061_d77ce1ac2cd5.slice/crio-e083736184b31a9a1a16e3451d44c2b5dc26dac1f81b55d1d332248dfcaed207 WatchSource:0}: Error finding container e083736184b31a9a1a16e3451d44c2b5dc26dac1f81b55d1d332248dfcaed207: Status 404 returned error can't find the container with id e083736184b31a9a1a16e3451d44c2b5dc26dac1f81b55d1d332248dfcaed207 Oct 11 08:33:43 crc kubenswrapper[5016]: I1011 08:33:43.560876 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 11 08:33:43 crc kubenswrapper[5016]: I1011 08:33:43.565137 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78274b80-0332-4e1a-8860-1e11cac32d0b" containerName="ceilometer-central-agent" containerID="cri-o://92024d431f4574f45061d05b12c61dbf348bf5828cce0c4008faabad01e42c65" gracePeriod=30 Oct 11 08:33:43 crc kubenswrapper[5016]: I1011 08:33:43.565209 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78274b80-0332-4e1a-8860-1e11cac32d0b" containerName="ceilometer-notification-agent" containerID="cri-o://704a25afd183f6a3a963cac10809ee59a9ddedbca1e75e2cf35f2279af182b8c" gracePeriod=30 Oct 11 08:33:43 crc kubenswrapper[5016]: I1011 08:33:43.565143 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78274b80-0332-4e1a-8860-1e11cac32d0b" containerName="proxy-httpd" containerID="cri-o://24fc6c4453b265fd03d88a0a5b1646dc1744c36275020280067fe86ea048dded" gracePeriod=30 Oct 11 08:33:43 crc kubenswrapper[5016]: I1011 08:33:43.565420 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78274b80-0332-4e1a-8860-1e11cac32d0b" containerName="sg-core" containerID="cri-o://3a35cb7c8bba2d3bf33f49611885969e9937c21cb3de207e7b3d18ca25f72ffc" gracePeriod=30 Oct 11 08:33:44 crc kubenswrapper[5016]: E1011 08:33:44.037249 5016 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78274b80_0332_4e1a_8860_1e11cac32d0b.slice/crio-24fc6c4453b265fd03d88a0a5b1646dc1744c36275020280067fe86ea048dded.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78274b80_0332_4e1a_8860_1e11cac32d0b.slice/crio-conmon-24fc6c4453b265fd03d88a0a5b1646dc1744c36275020280067fe86ea048dded.scope\": RecentStats: unable to find data in memory cache]" Oct 11 
08:33:44 crc kubenswrapper[5016]: I1011 08:33:44.106378 5016 generic.go:334] "Generic (PLEG): container finished" podID="78274b80-0332-4e1a-8860-1e11cac32d0b" containerID="24fc6c4453b265fd03d88a0a5b1646dc1744c36275020280067fe86ea048dded" exitCode=0 Oct 11 08:33:44 crc kubenswrapper[5016]: I1011 08:33:44.106419 5016 generic.go:334] "Generic (PLEG): container finished" podID="78274b80-0332-4e1a-8860-1e11cac32d0b" containerID="3a35cb7c8bba2d3bf33f49611885969e9937c21cb3de207e7b3d18ca25f72ffc" exitCode=2 Oct 11 08:33:44 crc kubenswrapper[5016]: I1011 08:33:44.106429 5016 generic.go:334] "Generic (PLEG): container finished" podID="78274b80-0332-4e1a-8860-1e11cac32d0b" containerID="92024d431f4574f45061d05b12c61dbf348bf5828cce0c4008faabad01e42c65" exitCode=0 Oct 11 08:33:44 crc kubenswrapper[5016]: I1011 08:33:44.106483 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78274b80-0332-4e1a-8860-1e11cac32d0b","Type":"ContainerDied","Data":"24fc6c4453b265fd03d88a0a5b1646dc1744c36275020280067fe86ea048dded"} Oct 11 08:33:44 crc kubenswrapper[5016]: I1011 08:33:44.106515 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78274b80-0332-4e1a-8860-1e11cac32d0b","Type":"ContainerDied","Data":"3a35cb7c8bba2d3bf33f49611885969e9937c21cb3de207e7b3d18ca25f72ffc"} Oct 11 08:33:44 crc kubenswrapper[5016]: I1011 08:33:44.106529 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78274b80-0332-4e1a-8860-1e11cac32d0b","Type":"ContainerDied","Data":"92024d431f4574f45061d05b12c61dbf348bf5828cce0c4008faabad01e42c65"} Oct 11 08:33:44 crc kubenswrapper[5016]: I1011 08:33:44.115040 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"d897f62c-8566-4445-8061-d77ce1ac2cd5","Type":"ContainerStarted","Data":"36cbea526a5450238e4a638c6a606cc87e7fb19c47a4f1beaf9906ad56cd75c7"} Oct 11 08:33:44 crc kubenswrapper[5016]: I1011 08:33:44.115097 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"d897f62c-8566-4445-8061-d77ce1ac2cd5","Type":"ContainerStarted","Data":"e083736184b31a9a1a16e3451d44c2b5dc26dac1f81b55d1d332248dfcaed207"} Oct 11 08:33:45 crc kubenswrapper[5016]: I1011 08:33:45.129344 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"d897f62c-8566-4445-8061-d77ce1ac2cd5","Type":"ContainerStarted","Data":"b95e5cb4dff9daf0a294fa6832daa22631bdad73525e557a5750c60c3669a128"} Oct 11 08:33:45 crc kubenswrapper[5016]: I1011 08:33:45.129818 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Oct 11 08:33:45 crc kubenswrapper[5016]: I1011 08:33:45.164098 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.1640664689999998 podStartE2EDuration="3.164066469s" podCreationTimestamp="2025-10-11 08:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 08:33:45.150553362 +0000 UTC m=+3213.051009318" watchObservedRunningTime="2025-10-11 08:33:45.164066469 +0000 UTC m=+3213.064522415" Oct 11 08:33:47 crc kubenswrapper[5016]: I1011 08:33:47.753328 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Oct 11 08:33:47 crc kubenswrapper[5016]: I1011 08:33:47.888152 5016 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55d8975557-4nltc" Oct 11 08:33:47 crc kubenswrapper[5016]: I1011 08:33:47.949790 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f5d87575-clqzw"] Oct 11 08:33:47 crc kubenswrapper[5016]: I1011 08:33:47.950056 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f5d87575-clqzw" podUID="45b51e7a-9892-4e64-ba8e-13e58364666b" containerName="dnsmasq-dns" containerID="cri-o://6b236323e7f7bbcd9e8ffec90d8e40b6cf0e43fd9152aef1cf6e626c4aa0c639" gracePeriod=10 Oct 11 08:33:48 crc kubenswrapper[5016]: I1011 08:33:48.180091 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f5d87575-clqzw" event={"ID":"45b51e7a-9892-4e64-ba8e-13e58364666b","Type":"ContainerDied","Data":"6b236323e7f7bbcd9e8ffec90d8e40b6cf0e43fd9152aef1cf6e626c4aa0c639"} Oct 11 08:33:48 crc kubenswrapper[5016]: I1011 08:33:48.180191 5016 generic.go:334] "Generic (PLEG): container finished" podID="45b51e7a-9892-4e64-ba8e-13e58364666b" containerID="6b236323e7f7bbcd9e8ffec90d8e40b6cf0e43fd9152aef1cf6e626c4aa0c639" exitCode=0 Oct 11 08:33:48 crc kubenswrapper[5016]: I1011 08:33:48.186007 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"bde3fad2-f81f-4252-90a9-9084a164a3bd","Type":"ContainerStarted","Data":"a0f4c44ba6ad173d0c13f6454b4829a6c80b056643334f0c6ae653dc07045661"} Oct 11 08:33:48 crc kubenswrapper[5016]: I1011 08:33:48.509379 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f5d87575-clqzw" Oct 11 08:33:48 crc kubenswrapper[5016]: I1011 08:33:48.599302 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jszk4\" (UniqueName: \"kubernetes.io/projected/45b51e7a-9892-4e64-ba8e-13e58364666b-kube-api-access-jszk4\") pod \"45b51e7a-9892-4e64-ba8e-13e58364666b\" (UID: \"45b51e7a-9892-4e64-ba8e-13e58364666b\") " Oct 11 08:33:48 crc kubenswrapper[5016]: I1011 08:33:48.599694 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-openstack-edpm-ipam\") pod \"45b51e7a-9892-4e64-ba8e-13e58364666b\" (UID: \"45b51e7a-9892-4e64-ba8e-13e58364666b\") " Oct 11 08:33:48 crc kubenswrapper[5016]: I1011 08:33:48.599916 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-dns-svc\") pod \"45b51e7a-9892-4e64-ba8e-13e58364666b\" (UID: \"45b51e7a-9892-4e64-ba8e-13e58364666b\") " Oct 11 08:33:48 crc kubenswrapper[5016]: I1011 08:33:48.599981 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-ovsdbserver-nb\") pod \"45b51e7a-9892-4e64-ba8e-13e58364666b\" (UID: \"45b51e7a-9892-4e64-ba8e-13e58364666b\") " Oct 11 08:33:48 crc kubenswrapper[5016]: I1011 08:33:48.600008 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-config\") pod \"45b51e7a-9892-4e64-ba8e-13e58364666b\" (UID: \"45b51e7a-9892-4e64-ba8e-13e58364666b\") " Oct 11 08:33:48 crc kubenswrapper[5016]: I1011 08:33:48.600029 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-ovsdbserver-sb\") pod \"45b51e7a-9892-4e64-ba8e-13e58364666b\" (UID: \"45b51e7a-9892-4e64-ba8e-13e58364666b\") " Oct 11 08:33:48 crc kubenswrapper[5016]: I1011 08:33:48.610238 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45b51e7a-9892-4e64-ba8e-13e58364666b-kube-api-access-jszk4" (OuterVolumeSpecName: "kube-api-access-jszk4") pod "45b51e7a-9892-4e64-ba8e-13e58364666b" (UID: "45b51e7a-9892-4e64-ba8e-13e58364666b"). InnerVolumeSpecName "kube-api-access-jszk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:33:48 crc kubenswrapper[5016]: I1011 08:33:48.668032 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "45b51e7a-9892-4e64-ba8e-13e58364666b" (UID: "45b51e7a-9892-4e64-ba8e-13e58364666b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 08:33:48 crc kubenswrapper[5016]: I1011 08:33:48.669582 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-config" (OuterVolumeSpecName: "config") pod "45b51e7a-9892-4e64-ba8e-13e58364666b" (UID: "45b51e7a-9892-4e64-ba8e-13e58364666b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 08:33:48 crc kubenswrapper[5016]: I1011 08:33:48.671696 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "45b51e7a-9892-4e64-ba8e-13e58364666b" (UID: "45b51e7a-9892-4e64-ba8e-13e58364666b"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 08:33:48 crc kubenswrapper[5016]: I1011 08:33:48.672686 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "45b51e7a-9892-4e64-ba8e-13e58364666b" (UID: "45b51e7a-9892-4e64-ba8e-13e58364666b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 08:33:48 crc kubenswrapper[5016]: I1011 08:33:48.675764 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "45b51e7a-9892-4e64-ba8e-13e58364666b" (UID: "45b51e7a-9892-4e64-ba8e-13e58364666b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 08:33:48 crc kubenswrapper[5016]: I1011 08:33:48.702084 5016 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:48 crc kubenswrapper[5016]: I1011 08:33:48.702120 5016 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:48 crc kubenswrapper[5016]: I1011 08:33:48.702135 5016 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-config\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:48 crc kubenswrapper[5016]: I1011 08:33:48.702144 5016 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:48 crc kubenswrapper[5016]: I1011 08:33:48.702155 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jszk4\" (UniqueName: \"kubernetes.io/projected/45b51e7a-9892-4e64-ba8e-13e58364666b-kube-api-access-jszk4\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:48 crc kubenswrapper[5016]: I1011 08:33:48.702167 5016 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/45b51e7a-9892-4e64-ba8e-13e58364666b-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:49 crc kubenswrapper[5016]: I1011 08:33:49.195982 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f5d87575-clqzw" event={"ID":"45b51e7a-9892-4e64-ba8e-13e58364666b","Type":"ContainerDied","Data":"fc235fba927daac06ea270f72880d79285bf1d80afccc5b83a22a7664cb88b16"} Oct 11 08:33:49 crc kubenswrapper[5016]: I1011 08:33:49.196041 5016 scope.go:117] "RemoveContainer" containerID="6b236323e7f7bbcd9e8ffec90d8e40b6cf0e43fd9152aef1cf6e626c4aa0c639" Oct 11 08:33:49 crc kubenswrapper[5016]: I1011 08:33:49.196180 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f5d87575-clqzw" Oct 11 08:33:49 crc kubenswrapper[5016]: I1011 08:33:49.201131 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"bde3fad2-f81f-4252-90a9-9084a164a3bd","Type":"ContainerStarted","Data":"31c6436d370f990428a096cc78b0cc7d799be25c520352c7693183f421ed581b"} Oct 11 08:33:49 crc kubenswrapper[5016]: I1011 08:33:49.221609 5016 scope.go:117] "RemoveContainer" containerID="89c38a0433a54425aa3d68e595eb80532309286ba30f24e1de9b53bef9ab6692" Oct 11 08:33:49 crc kubenswrapper[5016]: I1011 08:33:49.259852 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.46790425 podStartE2EDuration="12.259830691s" podCreationTimestamp="2025-10-11 08:33:37 +0000 UTC" firstStartedPulling="2025-10-11 08:33:38.642744077 +0000 UTC m=+3206.543200023" lastFinishedPulling="2025-10-11 08:33:47.434670518 +0000 UTC m=+3215.335126464" observedRunningTime="2025-10-11 08:33:49.23790682 +0000 UTC m=+3217.138362766" watchObservedRunningTime="2025-10-11 08:33:49.259830691 +0000 UTC m=+3217.160286637" Oct 11 08:33:49 crc kubenswrapper[5016]: I1011 08:33:49.267031 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f5d87575-clqzw"] Oct 11 08:33:49 crc kubenswrapper[5016]: I1011 08:33:49.278523 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f5d87575-clqzw"] Oct 11 08:33:50 crc kubenswrapper[5016]: I1011 08:33:50.213221 5016 generic.go:334] "Generic (PLEG): container finished" podID="78274b80-0332-4e1a-8860-1e11cac32d0b" containerID="704a25afd183f6a3a963cac10809ee59a9ddedbca1e75e2cf35f2279af182b8c" exitCode=0 Oct 11 08:33:50 crc kubenswrapper[5016]: I1011 08:33:50.214813 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78274b80-0332-4e1a-8860-1e11cac32d0b","Type":"ContainerDied","Data":"704a25afd183f6a3a963cac10809ee59a9ddedbca1e75e2cf35f2279af182b8c"} Oct 11 08:33:50 crc kubenswrapper[5016]: I1011 08:33:50.214853 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78274b80-0332-4e1a-8860-1e11cac32d0b","Type":"ContainerDied","Data":"cf16bc5a862699816f3f2189fa8f6a9853dbc6ab7071634aa670f16993cc451a"} Oct 11 08:33:50 crc kubenswrapper[5016]: I1011 08:33:50.214868 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf16bc5a862699816f3f2189fa8f6a9853dbc6ab7071634aa670f16993cc451a" Oct 11 08:33:50 crc kubenswrapper[5016]: I1011 08:33:50.291584 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 11 08:33:50 crc kubenswrapper[5016]: I1011 08:33:50.334827 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78274b80-0332-4e1a-8860-1e11cac32d0b-log-httpd\") pod \"78274b80-0332-4e1a-8860-1e11cac32d0b\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " Oct 11 08:33:50 crc kubenswrapper[5016]: I1011 08:33:50.334894 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2rxd\" (UniqueName: \"kubernetes.io/projected/78274b80-0332-4e1a-8860-1e11cac32d0b-kube-api-access-m2rxd\") pod \"78274b80-0332-4e1a-8860-1e11cac32d0b\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " Oct 11 08:33:50 crc kubenswrapper[5016]: I1011 08:33:50.335026 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-combined-ca-bundle\") pod \"78274b80-0332-4e1a-8860-1e11cac32d0b\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " Oct 11 08:33:50 crc kubenswrapper[5016]: I1011 08:33:50.335046 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-scripts\") pod \"78274b80-0332-4e1a-8860-1e11cac32d0b\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " Oct 11 08:33:50 crc kubenswrapper[5016]: I1011 08:33:50.335155 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78274b80-0332-4e1a-8860-1e11cac32d0b-run-httpd\") pod \"78274b80-0332-4e1a-8860-1e11cac32d0b\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " Oct 11 08:33:50 crc kubenswrapper[5016]: I1011 08:33:50.335190 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-ceilometer-tls-certs\") pod \"78274b80-0332-4e1a-8860-1e11cac32d0b\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " Oct 11 08:33:50 crc kubenswrapper[5016]: I1011 08:33:50.335228 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-sg-core-conf-yaml\") pod \"78274b80-0332-4e1a-8860-1e11cac32d0b\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " Oct 11 08:33:50 crc kubenswrapper[5016]: I1011 08:33:50.335280 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-config-data\") pod \"78274b80-0332-4e1a-8860-1e11cac32d0b\" (UID: \"78274b80-0332-4e1a-8860-1e11cac32d0b\") " Oct 11 08:33:50 crc kubenswrapper[5016]: I1011 08:33:50.335789 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78274b80-0332-4e1a-8860-1e11cac32d0b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "78274b80-0332-4e1a-8860-1e11cac32d0b" (UID: "78274b80-0332-4e1a-8860-1e11cac32d0b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:33:50 crc kubenswrapper[5016]: I1011 08:33:50.341696 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78274b80-0332-4e1a-8860-1e11cac32d0b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "78274b80-0332-4e1a-8860-1e11cac32d0b" (UID: "78274b80-0332-4e1a-8860-1e11cac32d0b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:33:50 crc kubenswrapper[5016]: I1011 08:33:50.366642 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-scripts" (OuterVolumeSpecName: "scripts") pod "78274b80-0332-4e1a-8860-1e11cac32d0b" (UID: "78274b80-0332-4e1a-8860-1e11cac32d0b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:33:50 crc kubenswrapper[5016]: I1011 08:33:50.367871 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78274b80-0332-4e1a-8860-1e11cac32d0b-kube-api-access-m2rxd" (OuterVolumeSpecName: "kube-api-access-m2rxd") pod "78274b80-0332-4e1a-8860-1e11cac32d0b" (UID: "78274b80-0332-4e1a-8860-1e11cac32d0b"). InnerVolumeSpecName "kube-api-access-m2rxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:33:50 crc kubenswrapper[5016]: I1011 08:33:50.388360 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "78274b80-0332-4e1a-8860-1e11cac32d0b" (UID: "78274b80-0332-4e1a-8860-1e11cac32d0b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:33:50 crc kubenswrapper[5016]: I1011 08:33:50.415841 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "78274b80-0332-4e1a-8860-1e11cac32d0b" (UID: "78274b80-0332-4e1a-8860-1e11cac32d0b"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:33:50 crc kubenswrapper[5016]: I1011 08:33:50.438472 5016 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-scripts\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:50 crc kubenswrapper[5016]: I1011 08:33:50.438517 5016 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78274b80-0332-4e1a-8860-1e11cac32d0b-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:50 crc kubenswrapper[5016]: I1011 08:33:50.438530 5016 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:50 crc kubenswrapper[5016]: I1011 08:33:50.438544 5016 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:50 crc kubenswrapper[5016]: I1011 08:33:50.438557 5016 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78274b80-0332-4e1a-8860-1e11cac32d0b-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:50 crc kubenswrapper[5016]: I1011 08:33:50.438569 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2rxd\" (UniqueName: \"kubernetes.io/projected/78274b80-0332-4e1a-8860-1e11cac32d0b-kube-api-access-m2rxd\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:50 crc kubenswrapper[5016]: I1011 08:33:50.479057 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-config-data" (OuterVolumeSpecName: "config-data") pod "78274b80-0332-4e1a-8860-1e11cac32d0b" (UID: "78274b80-0332-4e1a-8860-1e11cac32d0b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:33:50 crc kubenswrapper[5016]: I1011 08:33:50.479499 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78274b80-0332-4e1a-8860-1e11cac32d0b" (UID: "78274b80-0332-4e1a-8860-1e11cac32d0b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:33:50 crc kubenswrapper[5016]: I1011 08:33:50.541283 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:50 crc kubenswrapper[5016]: I1011 08:33:50.541329 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78274b80-0332-4e1a-8860-1e11cac32d0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.148515 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45b51e7a-9892-4e64-ba8e-13e58364666b" path="/var/lib/kubelet/pods/45b51e7a-9892-4e64-ba8e-13e58364666b/volumes" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.232553 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.275858 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.296716 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.309863 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 11 08:33:51 crc kubenswrapper[5016]: E1011 08:33:51.310388 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78274b80-0332-4e1a-8860-1e11cac32d0b" containerName="proxy-httpd" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.310411 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="78274b80-0332-4e1a-8860-1e11cac32d0b" containerName="proxy-httpd" Oct 11 08:33:51 crc kubenswrapper[5016]: E1011 08:33:51.310426 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78274b80-0332-4e1a-8860-1e11cac32d0b" containerName="ceilometer-notification-agent" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.310434 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="78274b80-0332-4e1a-8860-1e11cac32d0b" containerName="ceilometer-notification-agent" Oct 11 08:33:51 crc kubenswrapper[5016]: E1011 08:33:51.310444 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45b51e7a-9892-4e64-ba8e-13e58364666b" containerName="init" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.310454 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="45b51e7a-9892-4e64-ba8e-13e58364666b" containerName="init" Oct 11 08:33:51 crc kubenswrapper[5016]: E1011 08:33:51.310470 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45b51e7a-9892-4e64-ba8e-13e58364666b" containerName="dnsmasq-dns" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.310476 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="45b51e7a-9892-4e64-ba8e-13e58364666b" containerName="dnsmasq-dns" Oct 11 08:33:51 crc kubenswrapper[5016]: E1011 08:33:51.310486 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78274b80-0332-4e1a-8860-1e11cac32d0b" containerName="ceilometer-central-agent" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.310492 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="78274b80-0332-4e1a-8860-1e11cac32d0b" containerName="ceilometer-central-agent" Oct 11 08:33:51 crc kubenswrapper[5016]: E1011 08:33:51.310504 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78274b80-0332-4e1a-8860-1e11cac32d0b" containerName="sg-core" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.310512 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="78274b80-0332-4e1a-8860-1e11cac32d0b" containerName="sg-core" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.310759 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="78274b80-0332-4e1a-8860-1e11cac32d0b" containerName="proxy-httpd" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.310773 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="45b51e7a-9892-4e64-ba8e-13e58364666b" containerName="dnsmasq-dns" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.310788 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="78274b80-0332-4e1a-8860-1e11cac32d0b" containerName="ceilometer-central-agent" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.310799 5016 
memory_manager.go:354] "RemoveStaleState removing state" podUID="78274b80-0332-4e1a-8860-1e11cac32d0b" containerName="sg-core" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.310816 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="78274b80-0332-4e1a-8860-1e11cac32d0b" containerName="ceilometer-notification-agent" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.312723 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.318460 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.318693 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.318768 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.327357 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.357497 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-run-httpd\") pod \"ceilometer-0\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " pod="openstack/ceilometer-0" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.357641 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-log-httpd\") pod \"ceilometer-0\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " pod="openstack/ceilometer-0" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.357679 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " pod="openstack/ceilometer-0" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.357798 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " pod="openstack/ceilometer-0" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.357837 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " pod="openstack/ceilometer-0" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.358218 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-scripts\") pod \"ceilometer-0\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " pod="openstack/ceilometer-0" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.358296 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-config-data\") pod \"ceilometer-0\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " pod="openstack/ceilometer-0" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.358327 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb8lk\" (UniqueName: \"kubernetes.io/projected/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-kube-api-access-rb8lk\") pod \"ceilometer-0\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " pod="openstack/ceilometer-0" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.368899 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 11 08:33:51 crc kubenswrapper[5016]: E1011 08:33:51.369814 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceilometer-tls-certs combined-ca-bundle config-data kube-api-access-rb8lk log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="604b434d-d55e-46cc-a2c1-7cb6fecdc40f" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.460092 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " pod="openstack/ceilometer-0" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.460161 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " pod="openstack/ceilometer-0" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.460197 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-scripts\") pod \"ceilometer-0\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " pod="openstack/ceilometer-0" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.460226 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-config-data\") pod \"ceilometer-0\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " pod="openstack/ceilometer-0" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.460249 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb8lk\" (UniqueName: \"kubernetes.io/projected/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-kube-api-access-rb8lk\") pod \"ceilometer-0\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " pod="openstack/ceilometer-0" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.460291 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-run-httpd\") pod \"ceilometer-0\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " pod="openstack/ceilometer-0" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.460333 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-log-httpd\") pod 
\"ceilometer-0\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " pod="openstack/ceilometer-0" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.460352 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " pod="openstack/ceilometer-0" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.461332 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-log-httpd\") pod \"ceilometer-0\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " pod="openstack/ceilometer-0" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.461397 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-run-httpd\") pod \"ceilometer-0\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " pod="openstack/ceilometer-0" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.466673 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " pod="openstack/ceilometer-0" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.468341 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-config-data\") pod \"ceilometer-0\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " pod="openstack/ceilometer-0" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.468885 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " pod="openstack/ceilometer-0" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.468985 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " pod="openstack/ceilometer-0" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.481151 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-scripts\") pod \"ceilometer-0\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " pod="openstack/ceilometer-0" Oct 11 08:33:51 crc kubenswrapper[5016]: I1011 08:33:51.489637 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb8lk\" (UniqueName: \"kubernetes.io/projected/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-kube-api-access-rb8lk\") pod \"ceilometer-0\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " pod="openstack/ceilometer-0" Oct 11 08:33:52 crc kubenswrapper[5016]: I1011 08:33:52.242005 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 11 08:33:52 crc kubenswrapper[5016]: I1011 08:33:52.258530 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 11 08:33:52 crc kubenswrapper[5016]: I1011 08:33:52.280268 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-run-httpd\") pod \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " Oct 11 08:33:52 crc kubenswrapper[5016]: I1011 08:33:52.280394 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-scripts\") pod \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " Oct 11 08:33:52 crc kubenswrapper[5016]: I1011 08:33:52.280459 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-sg-core-conf-yaml\") pod \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " Oct 11 08:33:52 crc kubenswrapper[5016]: I1011 08:33:52.280585 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-config-data\") pod \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " Oct 11 08:33:52 crc kubenswrapper[5016]: I1011 08:33:52.280638 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb8lk\" (UniqueName: \"kubernetes.io/projected/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-kube-api-access-rb8lk\") pod \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " Oct 11 08:33:52 crc kubenswrapper[5016]: I1011 08:33:52.280755 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-ceilometer-tls-certs\") pod \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " Oct 11 08:33:52 crc kubenswrapper[5016]: I1011 08:33:52.280856 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-log-httpd\") pod \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " Oct 11 08:33:52 crc kubenswrapper[5016]: I1011 08:33:52.281000 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-combined-ca-bundle\") pod \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\" (UID: \"604b434d-d55e-46cc-a2c1-7cb6fecdc40f\") " Oct 11 08:33:52 crc kubenswrapper[5016]: I1011 08:33:52.281431 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "604b434d-d55e-46cc-a2c1-7cb6fecdc40f" (UID: "604b434d-d55e-46cc-a2c1-7cb6fecdc40f"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:33:52 crc kubenswrapper[5016]: I1011 08:33:52.281901 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "604b434d-d55e-46cc-a2c1-7cb6fecdc40f" (UID: "604b434d-d55e-46cc-a2c1-7cb6fecdc40f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:33:52 crc kubenswrapper[5016]: I1011 08:33:52.281979 5016 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:52 crc kubenswrapper[5016]: I1011 08:33:52.290409 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "604b434d-d55e-46cc-a2c1-7cb6fecdc40f" (UID: "604b434d-d55e-46cc-a2c1-7cb6fecdc40f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:33:52 crc kubenswrapper[5016]: I1011 08:33:52.291175 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-scripts" (OuterVolumeSpecName: "scripts") pod "604b434d-d55e-46cc-a2c1-7cb6fecdc40f" (UID: "604b434d-d55e-46cc-a2c1-7cb6fecdc40f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:33:52 crc kubenswrapper[5016]: I1011 08:33:52.292093 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-kube-api-access-rb8lk" (OuterVolumeSpecName: "kube-api-access-rb8lk") pod "604b434d-d55e-46cc-a2c1-7cb6fecdc40f" (UID: "604b434d-d55e-46cc-a2c1-7cb6fecdc40f"). InnerVolumeSpecName "kube-api-access-rb8lk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:33:52 crc kubenswrapper[5016]: I1011 08:33:52.292676 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-config-data" (OuterVolumeSpecName: "config-data") pod "604b434d-d55e-46cc-a2c1-7cb6fecdc40f" (UID: "604b434d-d55e-46cc-a2c1-7cb6fecdc40f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:33:52 crc kubenswrapper[5016]: I1011 08:33:52.292650 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "604b434d-d55e-46cc-a2c1-7cb6fecdc40f" (UID: "604b434d-d55e-46cc-a2c1-7cb6fecdc40f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:33:52 crc kubenswrapper[5016]: I1011 08:33:52.295093 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "604b434d-d55e-46cc-a2c1-7cb6fecdc40f" (UID: "604b434d-d55e-46cc-a2c1-7cb6fecdc40f"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 08:33:52 crc kubenswrapper[5016]: I1011 08:33:52.384373 5016 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:52 crc kubenswrapper[5016]: I1011 08:33:52.384421 5016 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-scripts\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:52 crc kubenswrapper[5016]: I1011 08:33:52.384457 5016 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:52 crc kubenswrapper[5016]: I1011 08:33:52.384472 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:52 crc kubenswrapper[5016]: I1011 08:33:52.384484 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb8lk\" (UniqueName: \"kubernetes.io/projected/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-kube-api-access-rb8lk\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:52 crc kubenswrapper[5016]: I1011 08:33:52.384497 5016 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:52 crc kubenswrapper[5016]: I1011 08:33:52.384510 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/604b434d-d55e-46cc-a2c1-7cb6fecdc40f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.165305 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78274b80-0332-4e1a-8860-1e11cac32d0b" path="/var/lib/kubelet/pods/78274b80-0332-4e1a-8860-1e11cac32d0b/volumes" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.256590 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.340185 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.355977 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.369003 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.376941 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.377366 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.380965 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.381284 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.381890 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.411275 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae7b0f07-6360-46c1-8bc1-f89c5ac7a486-run-httpd\") pod \"ceilometer-0\" (UID: \"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486\") " pod="openstack/ceilometer-0" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.411386 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xkjl\" (UniqueName: \"kubernetes.io/projected/ae7b0f07-6360-46c1-8bc1-f89c5ac7a486-kube-api-access-9xkjl\") pod \"ceilometer-0\" (UID: \"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486\") " pod="openstack/ceilometer-0" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.411448 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae7b0f07-6360-46c1-8bc1-f89c5ac7a486-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486\") " pod="openstack/ceilometer-0" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.411552 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae7b0f07-6360-46c1-8bc1-f89c5ac7a486-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486\") " pod="openstack/ceilometer-0" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.411696 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae7b0f07-6360-46c1-8bc1-f89c5ac7a486-scripts\") pod \"ceilometer-0\" (UID: \"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486\") " pod="openstack/ceilometer-0" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.411737 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae7b0f07-6360-46c1-8bc1-f89c5ac7a486-log-httpd\") pod \"ceilometer-0\" (UID: \"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486\") " pod="openstack/ceilometer-0" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.411765 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae7b0f07-6360-46c1-8bc1-f89c5ac7a486-config-data\") pod \"ceilometer-0\" (UID: \"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486\") " pod="openstack/ceilometer-0" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.411801 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/ae7b0f07-6360-46c1-8bc1-f89c5ac7a486-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486\") " pod="openstack/ceilometer-0" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.514928 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae7b0f07-6360-46c1-8bc1-f89c5ac7a486-run-httpd\") pod \"ceilometer-0\" (UID: \"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486\") " pod="openstack/ceilometer-0" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.515513 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xkjl\" (UniqueName: \"kubernetes.io/projected/ae7b0f07-6360-46c1-8bc1-f89c5ac7a486-kube-api-access-9xkjl\") pod \"ceilometer-0\" (UID: \"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486\") " pod="openstack/ceilometer-0" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.515564 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae7b0f07-6360-46c1-8bc1-f89c5ac7a486-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486\") " pod="openstack/ceilometer-0" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.515610 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae7b0f07-6360-46c1-8bc1-f89c5ac7a486-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486\") " pod="openstack/ceilometer-0" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.515718 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae7b0f07-6360-46c1-8bc1-f89c5ac7a486-run-httpd\") pod \"ceilometer-0\" (UID: \"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486\") " pod="openstack/ceilometer-0" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.515738 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae7b0f07-6360-46c1-8bc1-f89c5ac7a486-scripts\") pod \"ceilometer-0\" (UID: \"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486\") " pod="openstack/ceilometer-0" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.515877 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae7b0f07-6360-46c1-8bc1-f89c5ac7a486-config-data\") pod \"ceilometer-0\" (UID: \"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486\") " pod="openstack/ceilometer-0" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.515961 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae7b0f07-6360-46c1-8bc1-f89c5ac7a486-log-httpd\") pod \"ceilometer-0\" (UID: \"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486\") " pod="openstack/ceilometer-0" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.516048 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae7b0f07-6360-46c1-8bc1-f89c5ac7a486-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486\") " pod="openstack/ceilometer-0" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.516579 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ae7b0f07-6360-46c1-8bc1-f89c5ac7a486-log-httpd\") pod \"ceilometer-0\" (UID: \"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486\") " pod="openstack/ceilometer-0" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.524606 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae7b0f07-6360-46c1-8bc1-f89c5ac7a486-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486\") " pod="openstack/ceilometer-0" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.525588 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae7b0f07-6360-46c1-8bc1-f89c5ac7a486-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486\") " pod="openstack/ceilometer-0" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.525612 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae7b0f07-6360-46c1-8bc1-f89c5ac7a486-config-data\") pod \"ceilometer-0\" (UID: \"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486\") " pod="openstack/ceilometer-0" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.526756 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae7b0f07-6360-46c1-8bc1-f89c5ac7a486-scripts\") pod \"ceilometer-0\" (UID: \"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486\") " pod="openstack/ceilometer-0" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.528790 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae7b0f07-6360-46c1-8bc1-f89c5ac7a486-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486\") " pod="openstack/ceilometer-0" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.542510 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xkjl\" (UniqueName: \"kubernetes.io/projected/ae7b0f07-6360-46c1-8bc1-f89c5ac7a486-kube-api-access-9xkjl\") pod \"ceilometer-0\" (UID: \"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486\") " pod="openstack/ceilometer-0" Oct 11 08:33:53 crc kubenswrapper[5016]: I1011 08:33:53.717538 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 11 08:33:54 crc kubenswrapper[5016]: I1011 08:33:54.259145 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 11 08:33:55 crc kubenswrapper[5016]: I1011 08:33:55.154198 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="604b434d-d55e-46cc-a2c1-7cb6fecdc40f" path="/var/lib/kubelet/pods/604b434d-d55e-46cc-a2c1-7cb6fecdc40f/volumes" Oct 11 08:33:55 crc kubenswrapper[5016]: I1011 08:33:55.292567 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486","Type":"ContainerStarted","Data":"ad0480fdb1e52e675b944bab407e3e4dd0baad19c902d27893361774d4054179"} Oct 11 08:33:55 crc kubenswrapper[5016]: I1011 08:33:55.292632 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486","Type":"ContainerStarted","Data":"b7d62cfe02b3eda211d2df46ab70aa4824df5e30062926b89d6cff334dfdce2e"} Oct 11 08:33:56 crc kubenswrapper[5016]: I1011 08:33:56.307502 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486","Type":"ContainerStarted","Data":"289094e484ab6e5ae816cf5b033624d61d668bdc74be62743f168916712ab17f"} Oct 11 08:33:57 crc kubenswrapper[5016]: I1011 08:33:57.347475 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486","Type":"ContainerStarted","Data":"302e5c1d51fb2783b06fa4aa63033b1686862c684d3831ce5e2c4d8d2abf1588"} Oct 11 08:33:57 crc kubenswrapper[5016]: I1011 08:33:57.867333 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Oct 11 08:33:58 crc kubenswrapper[5016]: I1011 08:33:58.363001 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486","Type":"ContainerStarted","Data":"d24733b46c411f60994a4fe3767f3d3d483a7bb16858b66904876c5132964fae"} Oct 11 08:33:58 crc kubenswrapper[5016]: I1011 08:33:58.363556 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 11 08:33:58 crc kubenswrapper[5016]: I1011 08:33:58.406448 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.692632627 podStartE2EDuration="5.406423414s" podCreationTimestamp="2025-10-11 08:33:53 +0000 UTC" firstStartedPulling="2025-10-11 08:33:54.276084777 +0000 UTC m=+3222.176540723" lastFinishedPulling="2025-10-11 08:33:57.989875564 +0000 UTC m=+3225.890331510" observedRunningTime="2025-10-11 08:33:58.396920383 +0000 UTC m=+3226.297376329" watchObservedRunningTime="2025-10-11 08:33:58.406423414 +0000 UTC m=+3226.306879350" Oct 11 08:33:59 crc kubenswrapper[5016]: I1011 08:33:59.377104 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Oct 11 08:33:59 crc kubenswrapper[5016]: I1011 08:33:59.478827 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Oct 11 08:34:00 crc kubenswrapper[5016]: I1011 08:34:00.392320 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="68f34230-64eb-429d-82fe-e1e15a3f6dfd" containerName="manila-scheduler" 
containerID="cri-o://4cfb6fdd065b8f88a657cf8387faac2942b2aff16400c75ebbe895f7365ac48a" gracePeriod=30 Oct 11 08:34:00 crc kubenswrapper[5016]: I1011 08:34:00.392445 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="68f34230-64eb-429d-82fe-e1e15a3f6dfd" containerName="probe" containerID="cri-o://38421d790aa1c7307a120598b87dcd146deb3b0f8ab3fc3334d364d245b6e89b" gracePeriod=30 Oct 11 08:34:01 crc kubenswrapper[5016]: I1011 08:34:01.407120 5016 generic.go:334] "Generic (PLEG): container finished" podID="68f34230-64eb-429d-82fe-e1e15a3f6dfd" containerID="38421d790aa1c7307a120598b87dcd146deb3b0f8ab3fc3334d364d245b6e89b" exitCode=0 Oct 11 08:34:01 crc kubenswrapper[5016]: I1011 08:34:01.407501 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"68f34230-64eb-429d-82fe-e1e15a3f6dfd","Type":"ContainerDied","Data":"38421d790aa1c7307a120598b87dcd146deb3b0f8ab3fc3334d364d245b6e89b"} Oct 11 08:34:02 crc kubenswrapper[5016]: I1011 08:34:02.762608 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5j9t6"] Oct 11 08:34:02 crc kubenswrapper[5016]: I1011 08:34:02.765329 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5j9t6" Oct 11 08:34:02 crc kubenswrapper[5016]: I1011 08:34:02.780707 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c3da8dd-260c-4b29-aa18-b2619ca9c93b-utilities\") pod \"redhat-operators-5j9t6\" (UID: \"6c3da8dd-260c-4b29-aa18-b2619ca9c93b\") " pod="openshift-marketplace/redhat-operators-5j9t6" Oct 11 08:34:02 crc kubenswrapper[5016]: I1011 08:34:02.780843 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c3da8dd-260c-4b29-aa18-b2619ca9c93b-catalog-content\") pod \"redhat-operators-5j9t6\" (UID: \"6c3da8dd-260c-4b29-aa18-b2619ca9c93b\") " pod="openshift-marketplace/redhat-operators-5j9t6" Oct 11 08:34:02 crc kubenswrapper[5016]: I1011 08:34:02.780912 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srksb\" (UniqueName: \"kubernetes.io/projected/6c3da8dd-260c-4b29-aa18-b2619ca9c93b-kube-api-access-srksb\") pod \"redhat-operators-5j9t6\" (UID: \"6c3da8dd-260c-4b29-aa18-b2619ca9c93b\") " pod="openshift-marketplace/redhat-operators-5j9t6" Oct 11 08:34:02 crc kubenswrapper[5016]: I1011 08:34:02.787579 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5j9t6"] Oct 11 08:34:02 crc kubenswrapper[5016]: I1011 08:34:02.883113 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c3da8dd-260c-4b29-aa18-b2619ca9c93b-catalog-content\") pod \"redhat-operators-5j9t6\" (UID: \"6c3da8dd-260c-4b29-aa18-b2619ca9c93b\") " pod="openshift-marketplace/redhat-operators-5j9t6" Oct 11 08:34:02 crc kubenswrapper[5016]: I1011 08:34:02.883226 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srksb\" (UniqueName: \"kubernetes.io/projected/6c3da8dd-260c-4b29-aa18-b2619ca9c93b-kube-api-access-srksb\") pod \"redhat-operators-5j9t6\" (UID: \"6c3da8dd-260c-4b29-aa18-b2619ca9c93b\") " pod="openshift-marketplace/redhat-operators-5j9t6" 
Oct 11 08:34:02 crc kubenswrapper[5016]: I1011 08:34:02.883331 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c3da8dd-260c-4b29-aa18-b2619ca9c93b-utilities\") pod \"redhat-operators-5j9t6\" (UID: \"6c3da8dd-260c-4b29-aa18-b2619ca9c93b\") " pod="openshift-marketplace/redhat-operators-5j9t6"
Oct 11 08:34:02 crc kubenswrapper[5016]: I1011 08:34:02.884052 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c3da8dd-260c-4b29-aa18-b2619ca9c93b-utilities\") pod \"redhat-operators-5j9t6\" (UID: \"6c3da8dd-260c-4b29-aa18-b2619ca9c93b\") " pod="openshift-marketplace/redhat-operators-5j9t6"
Oct 11 08:34:02 crc kubenswrapper[5016]: I1011 08:34:02.884137 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c3da8dd-260c-4b29-aa18-b2619ca9c93b-catalog-content\") pod \"redhat-operators-5j9t6\" (UID: \"6c3da8dd-260c-4b29-aa18-b2619ca9c93b\") " pod="openshift-marketplace/redhat-operators-5j9t6"
Oct 11 08:34:02 crc kubenswrapper[5016]: I1011 08:34:02.911450 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srksb\" (UniqueName: \"kubernetes.io/projected/6c3da8dd-260c-4b29-aa18-b2619ca9c93b-kube-api-access-srksb\") pod \"redhat-operators-5j9t6\" (UID: \"6c3da8dd-260c-4b29-aa18-b2619ca9c93b\") " pod="openshift-marketplace/redhat-operators-5j9t6"
Oct 11 08:34:03 crc kubenswrapper[5016]: I1011 08:34:03.093498 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5j9t6"
Oct 11 08:34:03 crc kubenswrapper[5016]: I1011 08:34:03.653539 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5j9t6"]
Oct 11 08:34:04 crc kubenswrapper[5016]: I1011 08:34:04.444597 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5j9t6" event={"ID":"6c3da8dd-260c-4b29-aa18-b2619ca9c93b","Type":"ContainerStarted","Data":"05d69a9fc4daead35d654b447d97ac413eca7227bde48399530a1dbcc9d99d53"}
Oct 11 08:34:04 crc kubenswrapper[5016]: I1011 08:34:04.450354 5016 generic.go:334] "Generic (PLEG): container finished" podID="68f34230-64eb-429d-82fe-e1e15a3f6dfd" containerID="4cfb6fdd065b8f88a657cf8387faac2942b2aff16400c75ebbe895f7365ac48a" exitCode=0
Oct 11 08:34:04 crc kubenswrapper[5016]: I1011 08:34:04.450452 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"68f34230-64eb-429d-82fe-e1e15a3f6dfd","Type":"ContainerDied","Data":"4cfb6fdd065b8f88a657cf8387faac2942b2aff16400c75ebbe895f7365ac48a"}
Oct 11 08:34:04 crc kubenswrapper[5016]: I1011 08:34:04.822807 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.159041 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.266503 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68f34230-64eb-429d-82fe-e1e15a3f6dfd-scripts\") pod \"68f34230-64eb-429d-82fe-e1e15a3f6dfd\" (UID: \"68f34230-64eb-429d-82fe-e1e15a3f6dfd\") "
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.266561 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f34230-64eb-429d-82fe-e1e15a3f6dfd-config-data\") pod \"68f34230-64eb-429d-82fe-e1e15a3f6dfd\" (UID: \"68f34230-64eb-429d-82fe-e1e15a3f6dfd\") "
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.266596 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/68f34230-64eb-429d-82fe-e1e15a3f6dfd-etc-machine-id\") pod \"68f34230-64eb-429d-82fe-e1e15a3f6dfd\" (UID: \"68f34230-64eb-429d-82fe-e1e15a3f6dfd\") "
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.266638 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f34230-64eb-429d-82fe-e1e15a3f6dfd-combined-ca-bundle\") pod \"68f34230-64eb-429d-82fe-e1e15a3f6dfd\" (UID: \"68f34230-64eb-429d-82fe-e1e15a3f6dfd\") "
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.266739 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvzbr\" (UniqueName: \"kubernetes.io/projected/68f34230-64eb-429d-82fe-e1e15a3f6dfd-kube-api-access-jvzbr\") pod \"68f34230-64eb-429d-82fe-e1e15a3f6dfd\" (UID: \"68f34230-64eb-429d-82fe-e1e15a3f6dfd\") "
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.266775 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68f34230-64eb-429d-82fe-e1e15a3f6dfd-config-data-custom\") pod \"68f34230-64eb-429d-82fe-e1e15a3f6dfd\" (UID: \"68f34230-64eb-429d-82fe-e1e15a3f6dfd\") "
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.267386 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68f34230-64eb-429d-82fe-e1e15a3f6dfd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "68f34230-64eb-429d-82fe-e1e15a3f6dfd" (UID: "68f34230-64eb-429d-82fe-e1e15a3f6dfd"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.274759 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68f34230-64eb-429d-82fe-e1e15a3f6dfd-kube-api-access-jvzbr" (OuterVolumeSpecName: "kube-api-access-jvzbr") pod "68f34230-64eb-429d-82fe-e1e15a3f6dfd" (UID: "68f34230-64eb-429d-82fe-e1e15a3f6dfd"). InnerVolumeSpecName "kube-api-access-jvzbr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.275425 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68f34230-64eb-429d-82fe-e1e15a3f6dfd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "68f34230-64eb-429d-82fe-e1e15a3f6dfd" (UID: "68f34230-64eb-429d-82fe-e1e15a3f6dfd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.284214 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68f34230-64eb-429d-82fe-e1e15a3f6dfd-scripts" (OuterVolumeSpecName: "scripts") pod "68f34230-64eb-429d-82fe-e1e15a3f6dfd" (UID: "68f34230-64eb-429d-82fe-e1e15a3f6dfd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.330435 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68f34230-64eb-429d-82fe-e1e15a3f6dfd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68f34230-64eb-429d-82fe-e1e15a3f6dfd" (UID: "68f34230-64eb-429d-82fe-e1e15a3f6dfd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.370793 5016 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68f34230-64eb-429d-82fe-e1e15a3f6dfd-scripts\") on node \"crc\" DevicePath \"\""
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.370830 5016 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/68f34230-64eb-429d-82fe-e1e15a3f6dfd-etc-machine-id\") on node \"crc\" DevicePath \"\""
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.370846 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f34230-64eb-429d-82fe-e1e15a3f6dfd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.370861 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvzbr\" (UniqueName: \"kubernetes.io/projected/68f34230-64eb-429d-82fe-e1e15a3f6dfd-kube-api-access-jvzbr\") on node \"crc\" DevicePath \"\""
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.370874 5016 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68f34230-64eb-429d-82fe-e1e15a3f6dfd-config-data-custom\") on node \"crc\" DevicePath \"\""
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.397855 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68f34230-64eb-429d-82fe-e1e15a3f6dfd-config-data" (OuterVolumeSpecName: "config-data") pod "68f34230-64eb-429d-82fe-e1e15a3f6dfd" (UID: "68f34230-64eb-429d-82fe-e1e15a3f6dfd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.462772 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"68f34230-64eb-429d-82fe-e1e15a3f6dfd","Type":"ContainerDied","Data":"29bf16b3c790cc189da3570be79424feff36c689197853c4b083ab9e1bcb2a23"}
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.462837 5016 scope.go:117] "RemoveContainer" containerID="38421d790aa1c7307a120598b87dcd146deb3b0f8ab3fc3334d364d245b6e89b"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.464241 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.464969 5016 generic.go:334] "Generic (PLEG): container finished" podID="6c3da8dd-260c-4b29-aa18-b2619ca9c93b" containerID="dd26765329b32b6edbf3f7a22bb1d86c67d9fc453deadc71e944118e6b3c0ad9" exitCode=0
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.465006 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5j9t6" event={"ID":"6c3da8dd-260c-4b29-aa18-b2619ca9c93b","Type":"ContainerDied","Data":"dd26765329b32b6edbf3f7a22bb1d86c67d9fc453deadc71e944118e6b3c0ad9"}
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.474311 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f34230-64eb-429d-82fe-e1e15a3f6dfd-config-data\") on node \"crc\" DevicePath \"\""
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.506895 5016 scope.go:117] "RemoveContainer" containerID="4cfb6fdd065b8f88a657cf8387faac2942b2aff16400c75ebbe895f7365ac48a"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.533137 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"]
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.547667 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"]
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.560453 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"]
Oct 11 08:34:05 crc kubenswrapper[5016]: E1011 08:34:05.561054 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68f34230-64eb-429d-82fe-e1e15a3f6dfd" containerName="probe"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.561079 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="68f34230-64eb-429d-82fe-e1e15a3f6dfd" containerName="probe"
Oct 11 08:34:05 crc kubenswrapper[5016]: E1011 08:34:05.561130 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68f34230-64eb-429d-82fe-e1e15a3f6dfd" containerName="manila-scheduler"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.561139 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="68f34230-64eb-429d-82fe-e1e15a3f6dfd" containerName="manila-scheduler"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.561338 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="68f34230-64eb-429d-82fe-e1e15a3f6dfd" containerName="probe"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.561396 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="68f34230-64eb-429d-82fe-e1e15a3f6dfd" containerName="manila-scheduler"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.562879 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.565518 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.577627 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"]
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.680622 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/055e76cd-8fd8-437e-a065-6d64398ce2dd-config-data\") pod \"manila-scheduler-0\" (UID: \"055e76cd-8fd8-437e-a065-6d64398ce2dd\") " pod="openstack/manila-scheduler-0"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.680745 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/055e76cd-8fd8-437e-a065-6d64398ce2dd-scripts\") pod \"manila-scheduler-0\" (UID: \"055e76cd-8fd8-437e-a065-6d64398ce2dd\") " pod="openstack/manila-scheduler-0"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.681379 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/055e76cd-8fd8-437e-a065-6d64398ce2dd-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"055e76cd-8fd8-437e-a065-6d64398ce2dd\") " pod="openstack/manila-scheduler-0"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.681443 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px7kj\" (UniqueName: \"kubernetes.io/projected/055e76cd-8fd8-437e-a065-6d64398ce2dd-kube-api-access-px7kj\") pod \"manila-scheduler-0\" (UID: \"055e76cd-8fd8-437e-a065-6d64398ce2dd\") " pod="openstack/manila-scheduler-0"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.681548 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/055e76cd-8fd8-437e-a065-6d64398ce2dd-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"055e76cd-8fd8-437e-a065-6d64398ce2dd\") " pod="openstack/manila-scheduler-0"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.681594 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/055e76cd-8fd8-437e-a065-6d64398ce2dd-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"055e76cd-8fd8-437e-a065-6d64398ce2dd\") " pod="openstack/manila-scheduler-0"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.784021 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/055e76cd-8fd8-437e-a065-6d64398ce2dd-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"055e76cd-8fd8-437e-a065-6d64398ce2dd\") " pod="openstack/manila-scheduler-0"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.784097 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/055e76cd-8fd8-437e-a065-6d64398ce2dd-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"055e76cd-8fd8-437e-a065-6d64398ce2dd\") " pod="openstack/manila-scheduler-0"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.784204 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/055e76cd-8fd8-437e-a065-6d64398ce2dd-config-data\") pod \"manila-scheduler-0\" (UID: \"055e76cd-8fd8-437e-a065-6d64398ce2dd\") " pod="openstack/manila-scheduler-0"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.784219 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/055e76cd-8fd8-437e-a065-6d64398ce2dd-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"055e76cd-8fd8-437e-a065-6d64398ce2dd\") " pod="openstack/manila-scheduler-0"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.784253 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/055e76cd-8fd8-437e-a065-6d64398ce2dd-scripts\") pod \"manila-scheduler-0\" (UID: \"055e76cd-8fd8-437e-a065-6d64398ce2dd\") " pod="openstack/manila-scheduler-0"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.784878 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/055e76cd-8fd8-437e-a065-6d64398ce2dd-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"055e76cd-8fd8-437e-a065-6d64398ce2dd\") " pod="openstack/manila-scheduler-0"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.784927 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px7kj\" (UniqueName: \"kubernetes.io/projected/055e76cd-8fd8-437e-a065-6d64398ce2dd-kube-api-access-px7kj\") pod \"manila-scheduler-0\" (UID: \"055e76cd-8fd8-437e-a065-6d64398ce2dd\") " pod="openstack/manila-scheduler-0"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.789268 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/055e76cd-8fd8-437e-a065-6d64398ce2dd-scripts\") pod \"manila-scheduler-0\" (UID: \"055e76cd-8fd8-437e-a065-6d64398ce2dd\") " pod="openstack/manila-scheduler-0"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.789968 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/055e76cd-8fd8-437e-a065-6d64398ce2dd-config-data\") pod \"manila-scheduler-0\" (UID: \"055e76cd-8fd8-437e-a065-6d64398ce2dd\") " pod="openstack/manila-scheduler-0"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.789976 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/055e76cd-8fd8-437e-a065-6d64398ce2dd-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"055e76cd-8fd8-437e-a065-6d64398ce2dd\") " pod="openstack/manila-scheduler-0"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.792334 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/055e76cd-8fd8-437e-a065-6d64398ce2dd-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"055e76cd-8fd8-437e-a065-6d64398ce2dd\") " pod="openstack/manila-scheduler-0"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.804353 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px7kj\" (UniqueName: \"kubernetes.io/projected/055e76cd-8fd8-437e-a065-6d64398ce2dd-kube-api-access-px7kj\") pod \"manila-scheduler-0\" (UID: \"055e76cd-8fd8-437e-a065-6d64398ce2dd\") " pod="openstack/manila-scheduler-0"
Oct 11 08:34:05 crc kubenswrapper[5016]: I1011 08:34:05.887981 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0"
Oct 11 08:34:06 crc kubenswrapper[5016]: I1011 08:34:06.439280 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"]
Oct 11 08:34:06 crc kubenswrapper[5016]: I1011 08:34:06.483233 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5j9t6" event={"ID":"6c3da8dd-260c-4b29-aa18-b2619ca9c93b","Type":"ContainerStarted","Data":"c3fa23d1a62263a1e8c928da16f720699d102626164f1ba63e165469508cd564"}
Oct 11 08:34:06 crc kubenswrapper[5016]: I1011 08:34:06.540204 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"055e76cd-8fd8-437e-a065-6d64398ce2dd","Type":"ContainerStarted","Data":"2a945acb2e7c68b1579cbc71716de745dda96106ab96684ee861982b18a46509"}
Oct 11 08:34:07 crc kubenswrapper[5016]: I1011 08:34:07.157196 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68f34230-64eb-429d-82fe-e1e15a3f6dfd" path="/var/lib/kubelet/pods/68f34230-64eb-429d-82fe-e1e15a3f6dfd/volumes"
Oct 11 08:34:07 crc kubenswrapper[5016]: I1011 08:34:07.551245 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"055e76cd-8fd8-437e-a065-6d64398ce2dd","Type":"ContainerStarted","Data":"27f75e67fa6c886339366f8748447321e05d6b40b8efddcbd37ad5e94fc6ed29"}
Oct 11 08:34:08 crc kubenswrapper[5016]: I1011 08:34:08.563275 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"055e76cd-8fd8-437e-a065-6d64398ce2dd","Type":"ContainerStarted","Data":"c36cd274acaf8aecc5567e37a17d1eccde0bba0a652601a13f12a847f3626b79"}
Oct 11 08:34:08 crc kubenswrapper[5016]: I1011 08:34:08.570679 5016 generic.go:334] "Generic (PLEG): container finished" podID="6c3da8dd-260c-4b29-aa18-b2619ca9c93b" containerID="c3fa23d1a62263a1e8c928da16f720699d102626164f1ba63e165469508cd564" exitCode=0
Oct 11 08:34:08 crc kubenswrapper[5016]: I1011 08:34:08.570921 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5j9t6" event={"ID":"6c3da8dd-260c-4b29-aa18-b2619ca9c93b","Type":"ContainerDied","Data":"c3fa23d1a62263a1e8c928da16f720699d102626164f1ba63e165469508cd564"}
Oct 11 08:34:08 crc kubenswrapper[5016]: I1011 08:34:08.609521 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.609460773 podStartE2EDuration="3.609460773s" podCreationTimestamp="2025-10-11 08:34:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 08:34:08.597700152 +0000 UTC m=+3236.498156098" watchObservedRunningTime="2025-10-11 08:34:08.609460773 +0000 UTC m=+3236.509916719"
Oct 11 08:34:09 crc kubenswrapper[5016]: I1011 08:34:09.540178 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0"
Oct 11 08:34:09 crc kubenswrapper[5016]: I1011 08:34:09.585580 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5j9t6" event={"ID":"6c3da8dd-260c-4b29-aa18-b2619ca9c93b","Type":"ContainerStarted","Data":"a4c89212b564df146326bfb3ef454724c13c216faccc6372b9df2202f5bc9d94"}
Oct 11 08:34:09 crc kubenswrapper[5016]: I1011 08:34:09.627383 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"]
Oct 11 08:34:09 crc kubenswrapper[5016]: I1011 08:34:09.627680 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="bde3fad2-f81f-4252-90a9-9084a164a3bd" containerName="manila-share" containerID="cri-o://a0f4c44ba6ad173d0c13f6454b4829a6c80b056643334f0c6ae653dc07045661" gracePeriod=30
Oct 11 08:34:09 crc kubenswrapper[5016]: I1011 08:34:09.627738 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="bde3fad2-f81f-4252-90a9-9084a164a3bd" containerName="probe" containerID="cri-o://31c6436d370f990428a096cc78b0cc7d799be25c520352c7693183f421ed581b" gracePeriod=30
Oct 11 08:34:09 crc kubenswrapper[5016]: I1011 08:34:09.645746 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5j9t6" podStartSLOduration=4.143968515 podStartE2EDuration="7.645723496s" podCreationTimestamp="2025-10-11 08:34:02 +0000 UTC" firstStartedPulling="2025-10-11 08:34:05.467471669 +0000 UTC m=+3233.367927615" lastFinishedPulling="2025-10-11 08:34:08.96922662 +0000 UTC m=+3236.869682596" observedRunningTime="2025-10-11 08:34:09.639975753 +0000 UTC m=+3237.540431709" watchObservedRunningTime="2025-10-11 08:34:09.645723496 +0000 UTC m=+3237.546179442"
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.598610 5016 generic.go:334] "Generic (PLEG): container finished" podID="bde3fad2-f81f-4252-90a9-9084a164a3bd" containerID="31c6436d370f990428a096cc78b0cc7d799be25c520352c7693183f421ed581b" exitCode=0
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.598650 5016 generic.go:334] "Generic (PLEG): container finished" podID="bde3fad2-f81f-4252-90a9-9084a164a3bd" containerID="a0f4c44ba6ad173d0c13f6454b4829a6c80b056643334f0c6ae653dc07045661" exitCode=1
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.598684 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"bde3fad2-f81f-4252-90a9-9084a164a3bd","Type":"ContainerDied","Data":"31c6436d370f990428a096cc78b0cc7d799be25c520352c7693183f421ed581b"}
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.598710 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"bde3fad2-f81f-4252-90a9-9084a164a3bd","Type":"ContainerDied","Data":"a0f4c44ba6ad173d0c13f6454b4829a6c80b056643334f0c6ae653dc07045661"}
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.824467 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.831543 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bde3fad2-f81f-4252-90a9-9084a164a3bd-scripts\") pod \"bde3fad2-f81f-4252-90a9-9084a164a3bd\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") "
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.831598 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bde3fad2-f81f-4252-90a9-9084a164a3bd-ceph\") pod \"bde3fad2-f81f-4252-90a9-9084a164a3bd\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") "
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.831644 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bde3fad2-f81f-4252-90a9-9084a164a3bd-combined-ca-bundle\") pod \"bde3fad2-f81f-4252-90a9-9084a164a3bd\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") "
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.831739 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bde3fad2-f81f-4252-90a9-9084a164a3bd-config-data-custom\") pod \"bde3fad2-f81f-4252-90a9-9084a164a3bd\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") "
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.831930 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde3fad2-f81f-4252-90a9-9084a164a3bd-config-data\") pod \"bde3fad2-f81f-4252-90a9-9084a164a3bd\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") "
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.831983 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bde3fad2-f81f-4252-90a9-9084a164a3bd-etc-machine-id\") pod \"bde3fad2-f81f-4252-90a9-9084a164a3bd\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") "
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.832046 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcdwn\" (UniqueName: \"kubernetes.io/projected/bde3fad2-f81f-4252-90a9-9084a164a3bd-kube-api-access-vcdwn\") pod \"bde3fad2-f81f-4252-90a9-9084a164a3bd\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") "
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.832073 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/bde3fad2-f81f-4252-90a9-9084a164a3bd-var-lib-manila\") pod \"bde3fad2-f81f-4252-90a9-9084a164a3bd\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") "
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.834567 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bde3fad2-f81f-4252-90a9-9084a164a3bd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "bde3fad2-f81f-4252-90a9-9084a164a3bd" (UID: "bde3fad2-f81f-4252-90a9-9084a164a3bd"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.834695 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bde3fad2-f81f-4252-90a9-9084a164a3bd-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "bde3fad2-f81f-4252-90a9-9084a164a3bd" (UID: "bde3fad2-f81f-4252-90a9-9084a164a3bd"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.843339 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bde3fad2-f81f-4252-90a9-9084a164a3bd-kube-api-access-vcdwn" (OuterVolumeSpecName: "kube-api-access-vcdwn") pod "bde3fad2-f81f-4252-90a9-9084a164a3bd" (UID: "bde3fad2-f81f-4252-90a9-9084a164a3bd"). InnerVolumeSpecName "kube-api-access-vcdwn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.845097 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bde3fad2-f81f-4252-90a9-9084a164a3bd-ceph" (OuterVolumeSpecName: "ceph") pod "bde3fad2-f81f-4252-90a9-9084a164a3bd" (UID: "bde3fad2-f81f-4252-90a9-9084a164a3bd"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.857058 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bde3fad2-f81f-4252-90a9-9084a164a3bd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bde3fad2-f81f-4252-90a9-9084a164a3bd" (UID: "bde3fad2-f81f-4252-90a9-9084a164a3bd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.864651 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bde3fad2-f81f-4252-90a9-9084a164a3bd-scripts" (OuterVolumeSpecName: "scripts") pod "bde3fad2-f81f-4252-90a9-9084a164a3bd" (UID: "bde3fad2-f81f-4252-90a9-9084a164a3bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.935792 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bde3fad2-f81f-4252-90a9-9084a164a3bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bde3fad2-f81f-4252-90a9-9084a164a3bd" (UID: "bde3fad2-f81f-4252-90a9-9084a164a3bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.936020 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bde3fad2-f81f-4252-90a9-9084a164a3bd-combined-ca-bundle\") pod \"bde3fad2-f81f-4252-90a9-9084a164a3bd\" (UID: \"bde3fad2-f81f-4252-90a9-9084a164a3bd\") "
Oct 11 08:34:10 crc kubenswrapper[5016]: W1011 08:34:10.936162 5016 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/bde3fad2-f81f-4252-90a9-9084a164a3bd/volumes/kubernetes.io~secret/combined-ca-bundle
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.936183 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bde3fad2-f81f-4252-90a9-9084a164a3bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bde3fad2-f81f-4252-90a9-9084a164a3bd" (UID: "bde3fad2-f81f-4252-90a9-9084a164a3bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.936714 5016 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bde3fad2-f81f-4252-90a9-9084a164a3bd-etc-machine-id\") on node \"crc\" DevicePath \"\""
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.936735 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcdwn\" (UniqueName: \"kubernetes.io/projected/bde3fad2-f81f-4252-90a9-9084a164a3bd-kube-api-access-vcdwn\") on node \"crc\" DevicePath \"\""
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.936749 5016 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/bde3fad2-f81f-4252-90a9-9084a164a3bd-var-lib-manila\") on node \"crc\" DevicePath \"\""
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.936760 5016 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bde3fad2-f81f-4252-90a9-9084a164a3bd-scripts\") on node \"crc\" DevicePath \"\""
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.936771 5016 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bde3fad2-f81f-4252-90a9-9084a164a3bd-ceph\") on node \"crc\" DevicePath \"\""
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.936778 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bde3fad2-f81f-4252-90a9-9084a164a3bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.936787 5016 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bde3fad2-f81f-4252-90a9-9084a164a3bd-config-data-custom\") on node \"crc\" DevicePath \"\""
Oct 11 08:34:10 crc kubenswrapper[5016]: I1011 08:34:10.991929 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bde3fad2-f81f-4252-90a9-9084a164a3bd-config-data" (OuterVolumeSpecName: "config-data") pod "bde3fad2-f81f-4252-90a9-9084a164a3bd" (UID: "bde3fad2-f81f-4252-90a9-9084a164a3bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.038962 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde3fad2-f81f-4252-90a9-9084a164a3bd-config-data\") on node \"crc\" DevicePath \"\""
Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.625235 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"bde3fad2-f81f-4252-90a9-9084a164a3bd","Type":"ContainerDied","Data":"cc351c3f715d5e9fd16fb1a3ac9f62a0c8fcc5564b49be7dc5889669a692142f"}
Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.625314 5016 scope.go:117] "RemoveContainer" containerID="31c6436d370f990428a096cc78b0cc7d799be25c520352c7693183f421ed581b"
Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.625471 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.662903 5016 scope.go:117] "RemoveContainer" containerID="a0f4c44ba6ad173d0c13f6454b4829a6c80b056643334f0c6ae653dc07045661"
Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.677590 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"]
Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.694236 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"]
Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.706368 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"]
Oct 11 08:34:11 crc kubenswrapper[5016]: E1011 08:34:11.707125 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bde3fad2-f81f-4252-90a9-9084a164a3bd" containerName="probe"
Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.707152 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="bde3fad2-f81f-4252-90a9-9084a164a3bd" containerName="probe"
Oct 11 08:34:11 crc kubenswrapper[5016]: E1011 08:34:11.707185 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bde3fad2-f81f-4252-90a9-9084a164a3bd" containerName="manila-share"
Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.707195 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="bde3fad2-f81f-4252-90a9-9084a164a3bd" containerName="manila-share"
Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.707467 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="bde3fad2-f81f-4252-90a9-9084a164a3bd" containerName="manila-share"
Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.707503 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="bde3fad2-f81f-4252-90a9-9084a164a3bd" containerName="probe"
Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.708985 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0"
Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.722371 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data"
Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.734420 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"]
Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.762781 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/99758dd3-4691-42ed-a3eb-aead6855e030-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"99758dd3-4691-42ed-a3eb-aead6855e030\") " pod="openstack/manila-share-share1-0"
Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.762853 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99758dd3-4691-42ed-a3eb-aead6855e030-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"99758dd3-4691-42ed-a3eb-aead6855e030\") " pod="openstack/manila-share-share1-0"
Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.763016 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99758dd3-4691-42ed-a3eb-aead6855e030-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"99758dd3-4691-42ed-a3eb-aead6855e030\") " pod="openstack/manila-share-share1-0"
Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.763116 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99758dd3-4691-42ed-a3eb-aead6855e030-config-data\") pod \"manila-share-share1-0\" (UID: \"99758dd3-4691-42ed-a3eb-aead6855e030\") " pod="openstack/manila-share-share1-0"
Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.763256 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/99758dd3-4691-42ed-a3eb-aead6855e030-ceph\") pod \"manila-share-share1-0\" (UID: \"99758dd3-4691-42ed-a3eb-aead6855e030\") " pod="openstack/manila-share-share1-0"
Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.763424 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95dpb\" (UniqueName: \"kubernetes.io/projected/99758dd3-4691-42ed-a3eb-aead6855e030-kube-api-access-95dpb\") pod \"manila-share-share1-0\" (UID: \"99758dd3-4691-42ed-a3eb-aead6855e030\") " pod="openstack/manila-share-share1-0"
Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.763558 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99758dd3-4691-42ed-a3eb-aead6855e030-scripts\") pod \"manila-share-share1-0\" (UID: \"99758dd3-4691-42ed-a3eb-aead6855e030\") " pod="openstack/manila-share-share1-0"
Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.763601 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/99758dd3-4691-42ed-a3eb-aead6855e030-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"99758dd3-4691-42ed-a3eb-aead6855e030\") " pod="openstack/manila-share-share1-0"
Oct 11 08:34:11 crc
kubenswrapper[5016]: I1011 08:34:11.864282 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/99758dd3-4691-42ed-a3eb-aead6855e030-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"99758dd3-4691-42ed-a3eb-aead6855e030\") " pod="openstack/manila-share-share1-0" Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.864333 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/99758dd3-4691-42ed-a3eb-aead6855e030-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"99758dd3-4691-42ed-a3eb-aead6855e030\") " pod="openstack/manila-share-share1-0" Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.864360 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99758dd3-4691-42ed-a3eb-aead6855e030-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"99758dd3-4691-42ed-a3eb-aead6855e030\") " pod="openstack/manila-share-share1-0" Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.864414 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99758dd3-4691-42ed-a3eb-aead6855e030-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"99758dd3-4691-42ed-a3eb-aead6855e030\") " pod="openstack/manila-share-share1-0" Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.864443 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99758dd3-4691-42ed-a3eb-aead6855e030-config-data\") pod \"manila-share-share1-0\" (UID: \"99758dd3-4691-42ed-a3eb-aead6855e030\") " pod="openstack/manila-share-share1-0" Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.864456 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/99758dd3-4691-42ed-a3eb-aead6855e030-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"99758dd3-4691-42ed-a3eb-aead6855e030\") " pod="openstack/manila-share-share1-0" Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.864482 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/99758dd3-4691-42ed-a3eb-aead6855e030-ceph\") pod \"manila-share-share1-0\" (UID: \"99758dd3-4691-42ed-a3eb-aead6855e030\") " pod="openstack/manila-share-share1-0" Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.864456 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/99758dd3-4691-42ed-a3eb-aead6855e030-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"99758dd3-4691-42ed-a3eb-aead6855e030\") " pod="openstack/manila-share-share1-0" Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.864844 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95dpb\" (UniqueName: \"kubernetes.io/projected/99758dd3-4691-42ed-a3eb-aead6855e030-kube-api-access-95dpb\") pod \"manila-share-share1-0\" (UID: \"99758dd3-4691-42ed-a3eb-aead6855e030\") " pod="openstack/manila-share-share1-0" Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.864904 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/99758dd3-4691-42ed-a3eb-aead6855e030-scripts\") pod \"manila-share-share1-0\" (UID: \"99758dd3-4691-42ed-a3eb-aead6855e030\") " pod="openstack/manila-share-share1-0" Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.869052 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99758dd3-4691-42ed-a3eb-aead6855e030-scripts\") pod \"manila-share-share1-0\" (UID: \"99758dd3-4691-42ed-a3eb-aead6855e030\") " pod="openstack/manila-share-share1-0" Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.869396 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99758dd3-4691-42ed-a3eb-aead6855e030-config-data\") pod \"manila-share-share1-0\" (UID: \"99758dd3-4691-42ed-a3eb-aead6855e030\") " pod="openstack/manila-share-share1-0" Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.870748 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99758dd3-4691-42ed-a3eb-aead6855e030-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"99758dd3-4691-42ed-a3eb-aead6855e030\") " pod="openstack/manila-share-share1-0" Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.873537 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99758dd3-4691-42ed-a3eb-aead6855e030-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"99758dd3-4691-42ed-a3eb-aead6855e030\") " pod="openstack/manila-share-share1-0" Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.873648 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/99758dd3-4691-42ed-a3eb-aead6855e030-ceph\") pod \"manila-share-share1-0\" (UID: \"99758dd3-4691-42ed-a3eb-aead6855e030\") " pod="openstack/manila-share-share1-0" Oct 11 08:34:11 crc kubenswrapper[5016]: I1011 08:34:11.883314 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95dpb\" (UniqueName: \"kubernetes.io/projected/99758dd3-4691-42ed-a3eb-aead6855e030-kube-api-access-95dpb\") pod \"manila-share-share1-0\" (UID: \"99758dd3-4691-42ed-a3eb-aead6855e030\") " pod="openstack/manila-share-share1-0" Oct 11 08:34:12 crc kubenswrapper[5016]: I1011 08:34:12.079085 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Oct 11 08:34:12 crc kubenswrapper[5016]: I1011 08:34:12.702142 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Oct 11 08:34:12 crc kubenswrapper[5016]: W1011 08:34:12.703897 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99758dd3_4691_42ed_a3eb_aead6855e030.slice/crio-725a7bff5ebb899ad388b40fff6f491167fe6466f637618981a3b838c2d6662a WatchSource:0}: Error finding container 725a7bff5ebb899ad388b40fff6f491167fe6466f637618981a3b838c2d6662a: Status 404 returned error can't find the container with id 725a7bff5ebb899ad388b40fff6f491167fe6466f637618981a3b838c2d6662a Oct 11 08:34:13 crc kubenswrapper[5016]: I1011 08:34:13.093924 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5j9t6" Oct 11 08:34:13 crc kubenswrapper[5016]: I1011 08:34:13.094394 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5j9t6" Oct 11 08:34:13 crc kubenswrapper[5016]: I1011 08:34:13.161284 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bde3fad2-f81f-4252-90a9-9084a164a3bd" path="/var/lib/kubelet/pods/bde3fad2-f81f-4252-90a9-9084a164a3bd/volumes" Oct 11 08:34:13 crc kubenswrapper[5016]: I1011 08:34:13.682235 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"99758dd3-4691-42ed-a3eb-aead6855e030","Type":"ContainerStarted","Data":"c5cd074a3f47182f8e0f1c7bf1ef63bf6a3b89bcfc78a53127553af2be4f1b42"} Oct 11 08:34:13 crc kubenswrapper[5016]: I1011 08:34:13.682856 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"99758dd3-4691-42ed-a3eb-aead6855e030","Type":"ContainerStarted","Data":"725a7bff5ebb899ad388b40fff6f491167fe6466f637618981a3b838c2d6662a"} Oct 11 08:34:14 crc kubenswrapper[5016]: I1011 08:34:14.175577 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5j9t6" podUID="6c3da8dd-260c-4b29-aa18-b2619ca9c93b" containerName="registry-server" probeResult="failure" output=< Oct 11 08:34:14 crc kubenswrapper[5016]: timeout: failed to connect service ":50051" within 1s Oct 11 08:34:14 crc kubenswrapper[5016]: > Oct 11 08:34:14 crc kubenswrapper[5016]: I1011 08:34:14.702817 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"99758dd3-4691-42ed-a3eb-aead6855e030","Type":"ContainerStarted","Data":"f6839a26d654fb62d7a0a64aac86d5bcdbb7fc6d558be9d143a6756bbb70b247"} Oct 11 08:34:14 crc kubenswrapper[5016]: I1011 08:34:14.734957 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.7349328440000003 podStartE2EDuration="3.734932844s" podCreationTimestamp="2025-10-11 08:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 08:34:14.725811593 +0000 UTC m=+3242.626267539" watchObservedRunningTime="2025-10-11 08:34:14.734932844 +0000 UTC m=+3242.635388790" Oct 11 08:34:15 crc kubenswrapper[5016]: I1011 08:34:15.888204 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Oct 11 08:34:22 crc kubenswrapper[5016]: I1011 08:34:22.080155 5016 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Oct 11 08:34:23 crc kubenswrapper[5016]: I1011 08:34:23.729745 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 11 08:34:24 crc kubenswrapper[5016]: I1011 08:34:24.161728 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5j9t6" podUID="6c3da8dd-260c-4b29-aa18-b2619ca9c93b" containerName="registry-server" probeResult="failure" output=< Oct 11 08:34:24 crc kubenswrapper[5016]: timeout: failed to connect service ":50051" within 1s Oct 11 08:34:24 crc kubenswrapper[5016]: > Oct 11 08:34:27 crc kubenswrapper[5016]: I1011 08:34:27.420542 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Oct 11 08:34:33 crc kubenswrapper[5016]: I1011 08:34:33.164198 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5j9t6" Oct 11 08:34:33 crc kubenswrapper[5016]: I1011 08:34:33.231842 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5j9t6" Oct 11 08:34:33 crc kubenswrapper[5016]: I1011 08:34:33.614121 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Oct 11 08:34:33 crc kubenswrapper[5016]: I1011 08:34:33.963884 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5j9t6"] Oct 11 08:34:34 crc kubenswrapper[5016]: I1011 08:34:34.917872 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5j9t6" podUID="6c3da8dd-260c-4b29-aa18-b2619ca9c93b" containerName="registry-server" containerID="cri-o://a4c89212b564df146326bfb3ef454724c13c216faccc6372b9df2202f5bc9d94" gracePeriod=2 Oct 11 08:34:35 crc kubenswrapper[5016]: I1011 08:34:35.478392 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5j9t6" Oct 11 08:34:35 crc kubenswrapper[5016]: I1011 08:34:35.619028 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srksb\" (UniqueName: \"kubernetes.io/projected/6c3da8dd-260c-4b29-aa18-b2619ca9c93b-kube-api-access-srksb\") pod \"6c3da8dd-260c-4b29-aa18-b2619ca9c93b\" (UID: \"6c3da8dd-260c-4b29-aa18-b2619ca9c93b\") " Oct 11 08:34:35 crc kubenswrapper[5016]: I1011 08:34:35.619760 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c3da8dd-260c-4b29-aa18-b2619ca9c93b-catalog-content\") pod \"6c3da8dd-260c-4b29-aa18-b2619ca9c93b\" (UID: \"6c3da8dd-260c-4b29-aa18-b2619ca9c93b\") " Oct 11 08:34:35 crc kubenswrapper[5016]: I1011 08:34:35.619839 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c3da8dd-260c-4b29-aa18-b2619ca9c93b-utilities\") pod \"6c3da8dd-260c-4b29-aa18-b2619ca9c93b\" (UID: \"6c3da8dd-260c-4b29-aa18-b2619ca9c93b\") " Oct 11 08:34:35 crc kubenswrapper[5016]: I1011 08:34:35.620616 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c3da8dd-260c-4b29-aa18-b2619ca9c93b-utilities" (OuterVolumeSpecName: "utilities") pod "6c3da8dd-260c-4b29-aa18-b2619ca9c93b" (UID: "6c3da8dd-260c-4b29-aa18-b2619ca9c93b"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:34:35 crc kubenswrapper[5016]: I1011 08:34:35.620833 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c3da8dd-260c-4b29-aa18-b2619ca9c93b-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 08:34:35 crc kubenswrapper[5016]: I1011 08:34:35.628091 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c3da8dd-260c-4b29-aa18-b2619ca9c93b-kube-api-access-srksb" (OuterVolumeSpecName: "kube-api-access-srksb") pod "6c3da8dd-260c-4b29-aa18-b2619ca9c93b" (UID: "6c3da8dd-260c-4b29-aa18-b2619ca9c93b"). InnerVolumeSpecName "kube-api-access-srksb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:34:35 crc kubenswrapper[5016]: I1011 08:34:35.699876 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c3da8dd-260c-4b29-aa18-b2619ca9c93b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c3da8dd-260c-4b29-aa18-b2619ca9c93b" (UID: "6c3da8dd-260c-4b29-aa18-b2619ca9c93b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:34:35 crc kubenswrapper[5016]: I1011 08:34:35.723935 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c3da8dd-260c-4b29-aa18-b2619ca9c93b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 08:34:35 crc kubenswrapper[5016]: I1011 08:34:35.723978 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srksb\" (UniqueName: \"kubernetes.io/projected/6c3da8dd-260c-4b29-aa18-b2619ca9c93b-kube-api-access-srksb\") on node \"crc\" DevicePath \"\"" Oct 11 08:34:35 crc kubenswrapper[5016]: I1011 08:34:35.929125 5016 generic.go:334] "Generic (PLEG): container finished" podID="6c3da8dd-260c-4b29-aa18-b2619ca9c93b" containerID="a4c89212b564df146326bfb3ef454724c13c216faccc6372b9df2202f5bc9d94" exitCode=0 Oct 11 08:34:35 crc kubenswrapper[5016]: I1011 08:34:35.929175 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5j9t6" event={"ID":"6c3da8dd-260c-4b29-aa18-b2619ca9c93b","Type":"ContainerDied","Data":"a4c89212b564df146326bfb3ef454724c13c216faccc6372b9df2202f5bc9d94"} Oct 11 08:34:35 crc kubenswrapper[5016]: I1011 08:34:35.929213 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5j9t6" event={"ID":"6c3da8dd-260c-4b29-aa18-b2619ca9c93b","Type":"ContainerDied","Data":"05d69a9fc4daead35d654b447d97ac413eca7227bde48399530a1dbcc9d99d53"} Oct 11 08:34:35 crc kubenswrapper[5016]: I1011 08:34:35.929234 5016 scope.go:117] "RemoveContainer" containerID="a4c89212b564df146326bfb3ef454724c13c216faccc6372b9df2202f5bc9d94" Oct 11 08:34:35 crc kubenswrapper[5016]: I1011 08:34:35.929253 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5j9t6" Oct 11 08:34:35 crc kubenswrapper[5016]: I1011 08:34:35.959387 5016 scope.go:117] "RemoveContainer" containerID="c3fa23d1a62263a1e8c928da16f720699d102626164f1ba63e165469508cd564" Oct 11 08:34:35 crc kubenswrapper[5016]: I1011 08:34:35.989296 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5j9t6"] Oct 11 08:34:36 crc kubenswrapper[5016]: I1011 08:34:36.001443 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5j9t6"] Oct 11 08:34:36 crc kubenswrapper[5016]: I1011 08:34:36.027066 5016 scope.go:117] "RemoveContainer" containerID="dd26765329b32b6edbf3f7a22bb1d86c67d9fc453deadc71e944118e6b3c0ad9" Oct 11 08:34:36 crc kubenswrapper[5016]: I1011 08:34:36.062590 5016 scope.go:117] "RemoveContainer" containerID="a4c89212b564df146326bfb3ef454724c13c216faccc6372b9df2202f5bc9d94" Oct 11 08:34:36 crc kubenswrapper[5016]: E1011 08:34:36.063326 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4c89212b564df146326bfb3ef454724c13c216faccc6372b9df2202f5bc9d94\": container with ID starting with a4c89212b564df146326bfb3ef454724c13c216faccc6372b9df2202f5bc9d94 not found: ID does not exist" containerID="a4c89212b564df146326bfb3ef454724c13c216faccc6372b9df2202f5bc9d94" Oct 11 08:34:36 crc kubenswrapper[5016]: I1011 08:34:36.063455 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4c89212b564df146326bfb3ef454724c13c216faccc6372b9df2202f5bc9d94"} err="failed to get container status \"a4c89212b564df146326bfb3ef454724c13c216faccc6372b9df2202f5bc9d94\": rpc error: code = NotFound desc = could not find container \"a4c89212b564df146326bfb3ef454724c13c216faccc6372b9df2202f5bc9d94\": container with ID starting with a4c89212b564df146326bfb3ef454724c13c216faccc6372b9df2202f5bc9d94 not found: ID does not exist" Oct 11 08:34:36 crc kubenswrapper[5016]: I1011 08:34:36.063583 5016 scope.go:117] "RemoveContainer" containerID="c3fa23d1a62263a1e8c928da16f720699d102626164f1ba63e165469508cd564" Oct 11 08:34:36 crc kubenswrapper[5016]: E1011 08:34:36.064625 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3fa23d1a62263a1e8c928da16f720699d102626164f1ba63e165469508cd564\": container with ID starting with c3fa23d1a62263a1e8c928da16f720699d102626164f1ba63e165469508cd564 not found: ID does not exist" containerID="c3fa23d1a62263a1e8c928da16f720699d102626164f1ba63e165469508cd564" Oct 11 08:34:36 crc kubenswrapper[5016]: I1011 08:34:36.064728 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3fa23d1a62263a1e8c928da16f720699d102626164f1ba63e165469508cd564"} err="failed to get container status \"c3fa23d1a62263a1e8c928da16f720699d102626164f1ba63e165469508cd564\": rpc error: code = NotFound desc = could not find container \"c3fa23d1a62263a1e8c928da16f720699d102626164f1ba63e165469508cd564\": container with ID starting with c3fa23d1a62263a1e8c928da16f720699d102626164f1ba63e165469508cd564 not found: ID does not exist" Oct 11 08:34:36 crc kubenswrapper[5016]: I1011 08:34:36.064774 5016 scope.go:117] "RemoveContainer" containerID="dd26765329b32b6edbf3f7a22bb1d86c67d9fc453deadc71e944118e6b3c0ad9" Oct 11 08:34:36 crc kubenswrapper[5016]: E1011 08:34:36.065414 5016 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"dd26765329b32b6edbf3f7a22bb1d86c67d9fc453deadc71e944118e6b3c0ad9\": container with ID starting with dd26765329b32b6edbf3f7a22bb1d86c67d9fc453deadc71e944118e6b3c0ad9 not found: ID does not exist" containerID="dd26765329b32b6edbf3f7a22bb1d86c67d9fc453deadc71e944118e6b3c0ad9" Oct 11 08:34:36 crc kubenswrapper[5016]: I1011 08:34:36.065491 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd26765329b32b6edbf3f7a22bb1d86c67d9fc453deadc71e944118e6b3c0ad9"} err="failed to get container status \"dd26765329b32b6edbf3f7a22bb1d86c67d9fc453deadc71e944118e6b3c0ad9\": rpc error: code = NotFound desc = could not find container \"dd26765329b32b6edbf3f7a22bb1d86c67d9fc453deadc71e944118e6b3c0ad9\": container with ID starting with dd26765329b32b6edbf3f7a22bb1d86c67d9fc453deadc71e944118e6b3c0ad9 not found: ID does not exist" Oct 11 08:34:37 crc kubenswrapper[5016]: I1011 08:34:37.150147 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c3da8dd-260c-4b29-aa18-b2619ca9c93b" path="/var/lib/kubelet/pods/6c3da8dd-260c-4b29-aa18-b2619ca9c93b/volumes" Oct 11 08:34:41 crc kubenswrapper[5016]: I1011 08:34:41.355457 5016 scope.go:117] "RemoveContainer" containerID="24fc6c4453b265fd03d88a0a5b1646dc1744c36275020280067fe86ea048dded" Oct 11 08:34:41 crc kubenswrapper[5016]: I1011 08:34:41.399009 5016 scope.go:117] "RemoveContainer" containerID="92024d431f4574f45061d05b12c61dbf348bf5828cce0c4008faabad01e42c65" Oct 11 08:34:41 crc kubenswrapper[5016]: I1011 08:34:41.438390 5016 scope.go:117] "RemoveContainer" containerID="3a35cb7c8bba2d3bf33f49611885969e9937c21cb3de207e7b3d18ca25f72ffc" Oct 11 08:34:41 crc kubenswrapper[5016]: I1011 08:34:41.508135 5016 scope.go:117] "RemoveContainer" containerID="704a25afd183f6a3a963cac10809ee59a9ddedbca1e75e2cf35f2279af182b8c" Oct 11 08:35:07 crc kubenswrapper[5016]: I1011 08:35:07.122872 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 08:35:07 crc kubenswrapper[5016]: I1011 08:35:07.123869 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 08:35:19 crc kubenswrapper[5016]: I1011 08:35:19.376494 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6dc495b7-bj7dv"] Oct 11 08:35:19 crc kubenswrapper[5016]: E1011 08:35:19.377462 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c3da8dd-260c-4b29-aa18-b2619ca9c93b" containerName="registry-server" Oct 11 08:35:19 crc kubenswrapper[5016]: I1011 08:35:19.377475 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c3da8dd-260c-4b29-aa18-b2619ca9c93b" containerName="registry-server" Oct 11 08:35:19 crc kubenswrapper[5016]: E1011 08:35:19.377495 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c3da8dd-260c-4b29-aa18-b2619ca9c93b" containerName="extract-content" Oct 11 08:35:19 crc kubenswrapper[5016]: I1011 08:35:19.377501 5016 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="6c3da8dd-260c-4b29-aa18-b2619ca9c93b" containerName="extract-content" Oct 11 08:35:19 crc kubenswrapper[5016]: E1011 08:35:19.377519 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c3da8dd-260c-4b29-aa18-b2619ca9c93b" containerName="extract-utilities" Oct 11 08:35:19 crc kubenswrapper[5016]: I1011 08:35:19.377526 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c3da8dd-260c-4b29-aa18-b2619ca9c93b" containerName="extract-utilities" Oct 11 08:35:19 crc kubenswrapper[5016]: I1011 08:35:19.377753 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c3da8dd-260c-4b29-aa18-b2619ca9c93b" containerName="registry-server" Oct 11 08:35:19 crc kubenswrapper[5016]: I1011 08:35:19.381955 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6dc495b7-bj7dv" Oct 11 08:35:19 crc kubenswrapper[5016]: I1011 08:35:19.396435 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7vxk\" (UniqueName: \"kubernetes.io/projected/7ccff651-e44c-47d6-85fc-7a34a992c1f5-kube-api-access-w7vxk\") pod \"openstack-operator-controller-operator-6dc495b7-bj7dv\" (UID: \"7ccff651-e44c-47d6-85fc-7a34a992c1f5\") " pod="openstack-operators/openstack-operator-controller-operator-6dc495b7-bj7dv" Oct 11 08:35:19 crc kubenswrapper[5016]: I1011 08:35:19.418794 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6dc495b7-bj7dv"] Oct 11 08:35:19 crc kubenswrapper[5016]: I1011 08:35:19.499392 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7vxk\" (UniqueName: \"kubernetes.io/projected/7ccff651-e44c-47d6-85fc-7a34a992c1f5-kube-api-access-w7vxk\") pod \"openstack-operator-controller-operator-6dc495b7-bj7dv\" (UID: \"7ccff651-e44c-47d6-85fc-7a34a992c1f5\") " pod="openstack-operators/openstack-operator-controller-operator-6dc495b7-bj7dv" Oct 11 08:35:19 crc kubenswrapper[5016]: I1011 08:35:19.521600 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7vxk\" (UniqueName: \"kubernetes.io/projected/7ccff651-e44c-47d6-85fc-7a34a992c1f5-kube-api-access-w7vxk\") pod \"openstack-operator-controller-operator-6dc495b7-bj7dv\" (UID: \"7ccff651-e44c-47d6-85fc-7a34a992c1f5\") " pod="openstack-operators/openstack-operator-controller-operator-6dc495b7-bj7dv" Oct 11 08:35:19 crc kubenswrapper[5016]: I1011 08:35:19.709087 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6dc495b7-bj7dv" Oct 11 08:35:20 crc kubenswrapper[5016]: I1011 08:35:20.301474 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6dc495b7-bj7dv"] Oct 11 08:35:20 crc kubenswrapper[5016]: I1011 08:35:20.502601 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6dc495b7-bj7dv" event={"ID":"7ccff651-e44c-47d6-85fc-7a34a992c1f5","Type":"ContainerStarted","Data":"4eb08f56598fc1bc95cc79df42bb56ac3681ccfe2ff93cb012c6b67bf7aa38e9"} Oct 11 08:35:21 crc kubenswrapper[5016]: I1011 08:35:21.514551 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6dc495b7-bj7dv" event={"ID":"7ccff651-e44c-47d6-85fc-7a34a992c1f5","Type":"ContainerStarted","Data":"5cdaa9e587fe8e6457eb2b759db99d31e9290469f2e35d2394feae5d62f2bb8d"} Oct 11 08:35:21 crc kubenswrapper[5016]: I1011 08:35:21.515179 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6dc495b7-bj7dv" event={"ID":"7ccff651-e44c-47d6-85fc-7a34a992c1f5","Type":"ContainerStarted","Data":"662c7f87551ccd0ef9f3603ade712ea9dc2b6423ebed36e5e9bbbe8f74173329"} Oct 11 08:35:21 crc kubenswrapper[5016]: I1011 08:35:21.515203 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-6dc495b7-bj7dv" Oct 11 08:35:21 crc kubenswrapper[5016]: I1011 08:35:21.563287 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-6dc495b7-bj7dv" podStartSLOduration=2.563264433 podStartE2EDuration="2.563264433s" podCreationTimestamp="2025-10-11 08:35:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 08:35:21.553948086 +0000 UTC m=+3309.454404072" watchObservedRunningTime="2025-10-11 08:35:21.563264433 +0000 UTC m=+3309.463720379" Oct 11 08:35:29 crc kubenswrapper[5016]: I1011 08:35:29.713951 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-6dc495b7-bj7dv" Oct 11 08:35:29 crc kubenswrapper[5016]: I1011 08:35:29.880648 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-688d597459-gk6ql"] Oct 11 08:35:29 crc kubenswrapper[5016]: I1011 08:35:29.881821 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-operator-688d597459-gk6ql" podUID="35cdb9c9-3cf9-4025-b95e-7d62879eb20a" containerName="operator" containerID="cri-o://1b1b1ea4ba7cd64eb3f58d39ffa1722299389b5871ffc33fdbd43f2cfe7fc831" gracePeriod=10 Oct 11 08:35:29 crc kubenswrapper[5016]: I1011 08:35:29.881878 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-operator-688d597459-gk6ql" podUID="35cdb9c9-3cf9-4025-b95e-7d62879eb20a" containerName="kube-rbac-proxy" containerID="cri-o://80731d8e01fa1217ec70a701b363d8985591f3769245279011085b145ccba7ce" gracePeriod=10 Oct 11 08:35:30 crc kubenswrapper[5016]: I1011 08:35:30.475977 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-688d597459-gk6ql" Oct 11 08:35:30 crc kubenswrapper[5016]: I1011 08:35:30.606284 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt26n\" (UniqueName: \"kubernetes.io/projected/35cdb9c9-3cf9-4025-b95e-7d62879eb20a-kube-api-access-jt26n\") pod \"35cdb9c9-3cf9-4025-b95e-7d62879eb20a\" (UID: \"35cdb9c9-3cf9-4025-b95e-7d62879eb20a\") " Oct 11 08:35:30 crc kubenswrapper[5016]: I1011 08:35:30.614952 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35cdb9c9-3cf9-4025-b95e-7d62879eb20a-kube-api-access-jt26n" (OuterVolumeSpecName: "kube-api-access-jt26n") pod "35cdb9c9-3cf9-4025-b95e-7d62879eb20a" (UID: "35cdb9c9-3cf9-4025-b95e-7d62879eb20a"). InnerVolumeSpecName "kube-api-access-jt26n". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:35:30 crc kubenswrapper[5016]: I1011 08:35:30.668214 5016 generic.go:334] "Generic (PLEG): container finished" podID="35cdb9c9-3cf9-4025-b95e-7d62879eb20a" containerID="80731d8e01fa1217ec70a701b363d8985591f3769245279011085b145ccba7ce" exitCode=0 Oct 11 08:35:30 crc kubenswrapper[5016]: I1011 08:35:30.668266 5016 generic.go:334] "Generic (PLEG): container finished" podID="35cdb9c9-3cf9-4025-b95e-7d62879eb20a" containerID="1b1b1ea4ba7cd64eb3f58d39ffa1722299389b5871ffc33fdbd43f2cfe7fc831" exitCode=0 Oct 11 08:35:30 crc kubenswrapper[5016]: I1011 08:35:30.668296 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-688d597459-gk6ql" event={"ID":"35cdb9c9-3cf9-4025-b95e-7d62879eb20a","Type":"ContainerDied","Data":"80731d8e01fa1217ec70a701b363d8985591f3769245279011085b145ccba7ce"} Oct 11 08:35:30 crc kubenswrapper[5016]: I1011 08:35:30.668336 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-688d597459-gk6ql" event={"ID":"35cdb9c9-3cf9-4025-b95e-7d62879eb20a","Type":"ContainerDied","Data":"1b1b1ea4ba7cd64eb3f58d39ffa1722299389b5871ffc33fdbd43f2cfe7fc831"} Oct 11 08:35:30 crc kubenswrapper[5016]: I1011 08:35:30.668350 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-688d597459-gk6ql" event={"ID":"35cdb9c9-3cf9-4025-b95e-7d62879eb20a","Type":"ContainerDied","Data":"e87ba0f03b9aa68b7045bc5b27987895ec9c45f962e7c2684efe1f3eed474031"} Oct 11 08:35:30 crc kubenswrapper[5016]: I1011 08:35:30.668387 5016 scope.go:117] "RemoveContainer" containerID="80731d8e01fa1217ec70a701b363d8985591f3769245279011085b145ccba7ce" Oct 11 08:35:30 crc kubenswrapper[5016]: I1011 08:35:30.668720 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-688d597459-gk6ql" Oct 11 08:35:30 crc kubenswrapper[5016]: I1011 08:35:30.711849 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt26n\" (UniqueName: \"kubernetes.io/projected/35cdb9c9-3cf9-4025-b95e-7d62879eb20a-kube-api-access-jt26n\") on node \"crc\" DevicePath \"\"" Oct 11 08:35:30 crc kubenswrapper[5016]: I1011 08:35:30.723571 5016 scope.go:117] "RemoveContainer" containerID="1b1b1ea4ba7cd64eb3f58d39ffa1722299389b5871ffc33fdbd43f2cfe7fc831" Oct 11 08:35:30 crc kubenswrapper[5016]: I1011 08:35:30.738872 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-688d597459-gk6ql"] Oct 11 08:35:30 crc kubenswrapper[5016]: I1011 08:35:30.749378 5016 scope.go:117] "RemoveContainer" containerID="80731d8e01fa1217ec70a701b363d8985591f3769245279011085b145ccba7ce" Oct 11 08:35:30 crc kubenswrapper[5016]: E1011 08:35:30.749915 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80731d8e01fa1217ec70a701b363d8985591f3769245279011085b145ccba7ce\": container with ID starting with 80731d8e01fa1217ec70a701b363d8985591f3769245279011085b145ccba7ce not found: ID does not exist" containerID="80731d8e01fa1217ec70a701b363d8985591f3769245279011085b145ccba7ce" Oct 11 08:35:30 crc kubenswrapper[5016]: I1011 08:35:30.749985 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80731d8e01fa1217ec70a701b363d8985591f3769245279011085b145ccba7ce"} err="failed to get container status \"80731d8e01fa1217ec70a701b363d8985591f3769245279011085b145ccba7ce\": rpc error: code = NotFound desc = could not find container \"80731d8e01fa1217ec70a701b363d8985591f3769245279011085b145ccba7ce\": container with ID starting with 80731d8e01fa1217ec70a701b363d8985591f3769245279011085b145ccba7ce not found: ID does not exist" Oct 11 08:35:30 crc kubenswrapper[5016]: I1011 08:35:30.750021 5016 scope.go:117] "RemoveContainer" containerID="1b1b1ea4ba7cd64eb3f58d39ffa1722299389b5871ffc33fdbd43f2cfe7fc831" Oct 11 08:35:30 crc kubenswrapper[5016]: I1011 08:35:30.751217 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-688d597459-gk6ql"] Oct 11 08:35:30 crc kubenswrapper[5016]: E1011 08:35:30.751496 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b1b1ea4ba7cd64eb3f58d39ffa1722299389b5871ffc33fdbd43f2cfe7fc831\": container with ID starting with 1b1b1ea4ba7cd64eb3f58d39ffa1722299389b5871ffc33fdbd43f2cfe7fc831 not found: ID does not exist" containerID="1b1b1ea4ba7cd64eb3f58d39ffa1722299389b5871ffc33fdbd43f2cfe7fc831" Oct 11 08:35:30 crc kubenswrapper[5016]: I1011 08:35:30.751532 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b1b1ea4ba7cd64eb3f58d39ffa1722299389b5871ffc33fdbd43f2cfe7fc831"} err="failed to get container status \"1b1b1ea4ba7cd64eb3f58d39ffa1722299389b5871ffc33fdbd43f2cfe7fc831\": rpc error: code = NotFound desc = could not find container \"1b1b1ea4ba7cd64eb3f58d39ffa1722299389b5871ffc33fdbd43f2cfe7fc831\": container with ID starting with 1b1b1ea4ba7cd64eb3f58d39ffa1722299389b5871ffc33fdbd43f2cfe7fc831 not found: ID does not exist" Oct 11 08:35:30 crc kubenswrapper[5016]: I1011 08:35:30.751559 5016 scope.go:117] "RemoveContainer" 
containerID="80731d8e01fa1217ec70a701b363d8985591f3769245279011085b145ccba7ce" Oct 11 08:35:30 crc kubenswrapper[5016]: I1011 08:35:30.751760 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80731d8e01fa1217ec70a701b363d8985591f3769245279011085b145ccba7ce"} err="failed to get container status \"80731d8e01fa1217ec70a701b363d8985591f3769245279011085b145ccba7ce\": rpc error: code = NotFound desc = could not find container \"80731d8e01fa1217ec70a701b363d8985591f3769245279011085b145ccba7ce\": container with ID starting with 80731d8e01fa1217ec70a701b363d8985591f3769245279011085b145ccba7ce not found: ID does not exist" Oct 11 08:35:30 crc kubenswrapper[5016]: I1011 08:35:30.751781 5016 scope.go:117] "RemoveContainer" containerID="1b1b1ea4ba7cd64eb3f58d39ffa1722299389b5871ffc33fdbd43f2cfe7fc831" Oct 11 08:35:30 crc kubenswrapper[5016]: I1011 08:35:30.751965 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b1b1ea4ba7cd64eb3f58d39ffa1722299389b5871ffc33fdbd43f2cfe7fc831"} err="failed to get container status \"1b1b1ea4ba7cd64eb3f58d39ffa1722299389b5871ffc33fdbd43f2cfe7fc831\": rpc error: code = NotFound desc = could not find container \"1b1b1ea4ba7cd64eb3f58d39ffa1722299389b5871ffc33fdbd43f2cfe7fc831\": container with ID starting with 1b1b1ea4ba7cd64eb3f58d39ffa1722299389b5871ffc33fdbd43f2cfe7fc831 not found: ID does not exist" Oct 11 08:35:31 crc kubenswrapper[5016]: I1011 08:35:31.147630 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35cdb9c9-3cf9-4025-b95e-7d62879eb20a" path="/var/lib/kubelet/pods/35cdb9c9-3cf9-4025-b95e-7d62879eb20a/volumes" Oct 11 08:35:37 crc kubenswrapper[5016]: I1011 08:35:37.122451 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 08:35:37 crc kubenswrapper[5016]: I1011 08:35:37.123446 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 08:36:05 crc kubenswrapper[5016]: I1011 08:36:05.830846 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-556f69b4d6-whv65"] Oct 11 08:36:05 crc kubenswrapper[5016]: E1011 08:36:05.832513 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35cdb9c9-3cf9-4025-b95e-7d62879eb20a" containerName="kube-rbac-proxy" Oct 11 08:36:05 crc kubenswrapper[5016]: I1011 08:36:05.832539 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="35cdb9c9-3cf9-4025-b95e-7d62879eb20a" containerName="kube-rbac-proxy" Oct 11 08:36:05 crc kubenswrapper[5016]: E1011 08:36:05.832588 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35cdb9c9-3cf9-4025-b95e-7d62879eb20a" containerName="operator" Oct 11 08:36:05 crc kubenswrapper[5016]: I1011 08:36:05.832601 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="35cdb9c9-3cf9-4025-b95e-7d62879eb20a" containerName="operator" Oct 11 08:36:05 crc kubenswrapper[5016]: I1011 08:36:05.832980 5016 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="35cdb9c9-3cf9-4025-b95e-7d62879eb20a" containerName="kube-rbac-proxy" Oct 11 08:36:05 crc kubenswrapper[5016]: I1011 08:36:05.833006 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="35cdb9c9-3cf9-4025-b95e-7d62879eb20a" containerName="operator" Oct 11 08:36:05 crc kubenswrapper[5016]: I1011 08:36:05.834913 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-556f69b4d6-whv65" Oct 11 08:36:05 crc kubenswrapper[5016]: I1011 08:36:05.840393 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-556f69b4d6-whv65"] Oct 11 08:36:05 crc kubenswrapper[5016]: I1011 08:36:05.978466 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzpmg\" (UniqueName: \"kubernetes.io/projected/fe5ced7a-fe54-4272-8b7a-4d576fc78f63-kube-api-access-kzpmg\") pod \"test-operator-controller-manager-556f69b4d6-whv65\" (UID: \"fe5ced7a-fe54-4272-8b7a-4d576fc78f63\") " pod="openstack-operators/test-operator-controller-manager-556f69b4d6-whv65" Oct 11 08:36:06 crc kubenswrapper[5016]: I1011 08:36:06.082562 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzpmg\" (UniqueName: \"kubernetes.io/projected/fe5ced7a-fe54-4272-8b7a-4d576fc78f63-kube-api-access-kzpmg\") pod \"test-operator-controller-manager-556f69b4d6-whv65\" (UID: \"fe5ced7a-fe54-4272-8b7a-4d576fc78f63\") " pod="openstack-operators/test-operator-controller-manager-556f69b4d6-whv65" Oct 11 08:36:06 crc kubenswrapper[5016]: I1011 08:36:06.117247 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzpmg\" (UniqueName: \"kubernetes.io/projected/fe5ced7a-fe54-4272-8b7a-4d576fc78f63-kube-api-access-kzpmg\") pod \"test-operator-controller-manager-556f69b4d6-whv65\" (UID: \"fe5ced7a-fe54-4272-8b7a-4d576fc78f63\") " pod="openstack-operators/test-operator-controller-manager-556f69b4d6-whv65" Oct 11 08:36:06 crc kubenswrapper[5016]: I1011 08:36:06.176731 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-556f69b4d6-whv65" Oct 11 08:36:06 crc kubenswrapper[5016]: I1011 08:36:06.540537 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-556f69b4d6-whv65"] Oct 11 08:36:07 crc kubenswrapper[5016]: I1011 08:36:07.121912 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 08:36:07 crc kubenswrapper[5016]: I1011 08:36:07.121985 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 08:36:07 crc kubenswrapper[5016]: I1011 08:36:07.122046 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 08:36:07 crc kubenswrapper[5016]: I1011 08:36:07.123049 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"62b966693cde380525833f9965a580c59298058b4e14e614272a3bd58f638ea3"} pod="openshift-machine-config-operator/machine-config-daemon-49bvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 11 08:36:07 crc kubenswrapper[5016]: I1011 08:36:07.123112 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" containerID="cri-o://62b966693cde380525833f9965a580c59298058b4e14e614272a3bd58f638ea3" gracePeriod=600 Oct 11 08:36:07 crc kubenswrapper[5016]: I1011 08:36:07.152398 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-556f69b4d6-whv65" event={"ID":"fe5ced7a-fe54-4272-8b7a-4d576fc78f63","Type":"ContainerStarted","Data":"7f28b4901040f24d37ab166c39c5052b1ffb2ac69801cbbf7e219744bf84982b"} Oct 11 08:36:08 crc kubenswrapper[5016]: I1011 08:36:08.169030 5016 generic.go:334] "Generic (PLEG): container finished" podID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerID="62b966693cde380525833f9965a580c59298058b4e14e614272a3bd58f638ea3" exitCode=0 Oct 11 08:36:08 crc kubenswrapper[5016]: I1011 08:36:08.169120 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerDied","Data":"62b966693cde380525833f9965a580c59298058b4e14e614272a3bd58f638ea3"} Oct 11 08:36:08 crc kubenswrapper[5016]: I1011 08:36:08.170096 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e"} Oct 11 08:36:08 crc kubenswrapper[5016]: I1011 08:36:08.170130 5016 scope.go:117] "RemoveContainer" containerID="58fa9ee906c89bbcbc2a251594ffde7881029d6103665eb06e841139350eca72" Oct 11 08:36:08 crc 
kubenswrapper[5016]: I1011 08:36:08.174184 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-556f69b4d6-whv65" event={"ID":"fe5ced7a-fe54-4272-8b7a-4d576fc78f63","Type":"ContainerStarted","Data":"eca88f66d9820c1b3d3242e42afa050f9c59082dfcdcaea291eef9cef5974755"} Oct 11 08:36:09 crc kubenswrapper[5016]: I1011 08:36:09.190718 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-556f69b4d6-whv65" event={"ID":"fe5ced7a-fe54-4272-8b7a-4d576fc78f63","Type":"ContainerStarted","Data":"a6a866c26cabb961ae54ed0a7067748b1d99feae5ff00fd6fd7b1fa24ccd2d64"} Oct 11 08:36:09 crc kubenswrapper[5016]: I1011 08:36:09.191615 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-556f69b4d6-whv65" Oct 11 08:36:09 crc kubenswrapper[5016]: I1011 08:36:09.222370 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-556f69b4d6-whv65" podStartSLOduration=2.96339995 podStartE2EDuration="4.222275616s" podCreationTimestamp="2025-10-11 08:36:05 +0000 UTC" firstStartedPulling="2025-10-11 08:36:06.555191218 +0000 UTC m=+3354.455647184" lastFinishedPulling="2025-10-11 08:36:07.814066864 +0000 UTC m=+3355.714522850" observedRunningTime="2025-10-11 08:36:09.216385589 +0000 UTC m=+3357.116841565" watchObservedRunningTime="2025-10-11 08:36:09.222275616 +0000 UTC m=+3357.122731602" Oct 11 08:36:16 crc kubenswrapper[5016]: I1011 08:36:16.184372 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-556f69b4d6-whv65" Oct 11 08:36:16 crc kubenswrapper[5016]: I1011 08:36:16.282474 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/test-operator-controller-manager-5458f77c4-tdbg9"] Oct 11 08:36:16 crc kubenswrapper[5016]: I1011 08:36:16.282851 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/test-operator-controller-manager-5458f77c4-tdbg9" podUID="cfabcb8e-bad0-4179-81d3-0d6c2a874793" containerName="kube-rbac-proxy" containerID="cri-o://98c1dafee349c934d6182f95d753d362f344d90eb8adc53933a4bc38fbeb223d" gracePeriod=10 Oct 11 08:36:16 crc kubenswrapper[5016]: I1011 08:36:16.283584 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/test-operator-controller-manager-5458f77c4-tdbg9" podUID="cfabcb8e-bad0-4179-81d3-0d6c2a874793" containerName="manager" containerID="cri-o://51109f4886466033ee08e292dfe36c936c3eec6948175c8eea6d009444df85e4" gracePeriod=10 Oct 11 08:36:16 crc kubenswrapper[5016]: I1011 08:36:16.995738 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5458f77c4-tdbg9" Oct 11 08:36:17 crc kubenswrapper[5016]: I1011 08:36:17.077048 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnx7l\" (UniqueName: \"kubernetes.io/projected/cfabcb8e-bad0-4179-81d3-0d6c2a874793-kube-api-access-wnx7l\") pod \"cfabcb8e-bad0-4179-81d3-0d6c2a874793\" (UID: \"cfabcb8e-bad0-4179-81d3-0d6c2a874793\") " Oct 11 08:36:17 crc kubenswrapper[5016]: I1011 08:36:17.084231 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfabcb8e-bad0-4179-81d3-0d6c2a874793-kube-api-access-wnx7l" (OuterVolumeSpecName: "kube-api-access-wnx7l") pod "cfabcb8e-bad0-4179-81d3-0d6c2a874793" (UID: "cfabcb8e-bad0-4179-81d3-0d6c2a874793"). InnerVolumeSpecName "kube-api-access-wnx7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:36:17 crc kubenswrapper[5016]: I1011 08:36:17.181159 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnx7l\" (UniqueName: \"kubernetes.io/projected/cfabcb8e-bad0-4179-81d3-0d6c2a874793-kube-api-access-wnx7l\") on node \"crc\" DevicePath \"\"" Oct 11 08:36:17 crc kubenswrapper[5016]: I1011 08:36:17.283581 5016 generic.go:334] "Generic (PLEG): container finished" podID="cfabcb8e-bad0-4179-81d3-0d6c2a874793" containerID="51109f4886466033ee08e292dfe36c936c3eec6948175c8eea6d009444df85e4" exitCode=0 Oct 11 08:36:17 crc kubenswrapper[5016]: I1011 08:36:17.283617 5016 generic.go:334] "Generic (PLEG): container finished" podID="cfabcb8e-bad0-4179-81d3-0d6c2a874793" containerID="98c1dafee349c934d6182f95d753d362f344d90eb8adc53933a4bc38fbeb223d" exitCode=0 Oct 11 08:36:17 crc kubenswrapper[5016]: I1011 08:36:17.283643 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5458f77c4-tdbg9" event={"ID":"cfabcb8e-bad0-4179-81d3-0d6c2a874793","Type":"ContainerDied","Data":"51109f4886466033ee08e292dfe36c936c3eec6948175c8eea6d009444df85e4"} Oct 11 08:36:17 crc kubenswrapper[5016]: I1011 08:36:17.283747 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5458f77c4-tdbg9" event={"ID":"cfabcb8e-bad0-4179-81d3-0d6c2a874793","Type":"ContainerDied","Data":"98c1dafee349c934d6182f95d753d362f344d90eb8adc53933a4bc38fbeb223d"} Oct 11 08:36:17 crc kubenswrapper[5016]: I1011 08:36:17.283758 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5458f77c4-tdbg9" Oct 11 08:36:17 crc kubenswrapper[5016]: I1011 08:36:17.283769 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5458f77c4-tdbg9" event={"ID":"cfabcb8e-bad0-4179-81d3-0d6c2a874793","Type":"ContainerDied","Data":"b5f048fe6a6f08ac809c49ba01831c9421492ebd20f77561bb15b3ecae68b2b6"} Oct 11 08:36:17 crc kubenswrapper[5016]: I1011 08:36:17.283781 5016 scope.go:117] "RemoveContainer" containerID="51109f4886466033ee08e292dfe36c936c3eec6948175c8eea6d009444df85e4" Oct 11 08:36:17 crc kubenswrapper[5016]: I1011 08:36:17.318514 5016 scope.go:117] "RemoveContainer" containerID="98c1dafee349c934d6182f95d753d362f344d90eb8adc53933a4bc38fbeb223d" Oct 11 08:36:17 crc kubenswrapper[5016]: I1011 08:36:17.320060 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/test-operator-controller-manager-5458f77c4-tdbg9"] Oct 11 08:36:17 crc kubenswrapper[5016]: I1011 08:36:17.331014 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/test-operator-controller-manager-5458f77c4-tdbg9"] Oct 11 08:36:17 crc kubenswrapper[5016]: I1011 08:36:17.339048 5016 scope.go:117] "RemoveContainer" containerID="51109f4886466033ee08e292dfe36c936c3eec6948175c8eea6d009444df85e4" Oct 11 08:36:17 crc kubenswrapper[5016]: E1011 08:36:17.339999 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51109f4886466033ee08e292dfe36c936c3eec6948175c8eea6d009444df85e4\": container with ID starting with 51109f4886466033ee08e292dfe36c936c3eec6948175c8eea6d009444df85e4 not found: ID does not exist" containerID="51109f4886466033ee08e292dfe36c936c3eec6948175c8eea6d009444df85e4" Oct 11 08:36:17 crc kubenswrapper[5016]: I1011 08:36:17.340059 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51109f4886466033ee08e292dfe36c936c3eec6948175c8eea6d009444df85e4"} err="failed to get container status \"51109f4886466033ee08e292dfe36c936c3eec6948175c8eea6d009444df85e4\": rpc error: code = NotFound desc = could not find container \"51109f4886466033ee08e292dfe36c936c3eec6948175c8eea6d009444df85e4\": container with ID starting with 51109f4886466033ee08e292dfe36c936c3eec6948175c8eea6d009444df85e4 not found: ID does not exist" Oct 11 08:36:17 crc kubenswrapper[5016]: I1011 08:36:17.340084 5016 scope.go:117] "RemoveContainer" containerID="98c1dafee349c934d6182f95d753d362f344d90eb8adc53933a4bc38fbeb223d" Oct 11 08:36:17 crc kubenswrapper[5016]: E1011 08:36:17.340432 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98c1dafee349c934d6182f95d753d362f344d90eb8adc53933a4bc38fbeb223d\": container with ID starting with 98c1dafee349c934d6182f95d753d362f344d90eb8adc53933a4bc38fbeb223d not found: ID does not exist" containerID="98c1dafee349c934d6182f95d753d362f344d90eb8adc53933a4bc38fbeb223d" Oct 11 08:36:17 crc kubenswrapper[5016]: I1011 08:36:17.340516 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98c1dafee349c934d6182f95d753d362f344d90eb8adc53933a4bc38fbeb223d"} err="failed to get container status \"98c1dafee349c934d6182f95d753d362f344d90eb8adc53933a4bc38fbeb223d\": rpc error: code = NotFound desc = could not find container \"98c1dafee349c934d6182f95d753d362f344d90eb8adc53933a4bc38fbeb223d\": container with ID 
starting with 98c1dafee349c934d6182f95d753d362f344d90eb8adc53933a4bc38fbeb223d not found: ID does not exist" Oct 11 08:36:17 crc kubenswrapper[5016]: I1011 08:36:17.340570 5016 scope.go:117] "RemoveContainer" containerID="51109f4886466033ee08e292dfe36c936c3eec6948175c8eea6d009444df85e4" Oct 11 08:36:17 crc kubenswrapper[5016]: I1011 08:36:17.342426 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51109f4886466033ee08e292dfe36c936c3eec6948175c8eea6d009444df85e4"} err="failed to get container status \"51109f4886466033ee08e292dfe36c936c3eec6948175c8eea6d009444df85e4\": rpc error: code = NotFound desc = could not find container \"51109f4886466033ee08e292dfe36c936c3eec6948175c8eea6d009444df85e4\": container with ID starting with 51109f4886466033ee08e292dfe36c936c3eec6948175c8eea6d009444df85e4 not found: ID does not exist" Oct 11 08:36:17 crc kubenswrapper[5016]: I1011 08:36:17.342470 5016 scope.go:117] "RemoveContainer" containerID="98c1dafee349c934d6182f95d753d362f344d90eb8adc53933a4bc38fbeb223d" Oct 11 08:36:17 crc kubenswrapper[5016]: I1011 08:36:17.343601 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98c1dafee349c934d6182f95d753d362f344d90eb8adc53933a4bc38fbeb223d"} err="failed to get container status \"98c1dafee349c934d6182f95d753d362f344d90eb8adc53933a4bc38fbeb223d\": rpc error: code = NotFound desc = could not find container \"98c1dafee349c934d6182f95d753d362f344d90eb8adc53933a4bc38fbeb223d\": container with ID starting with 98c1dafee349c934d6182f95d753d362f344d90eb8adc53933a4bc38fbeb223d not found: ID does not exist" Oct 11 08:36:17 crc kubenswrapper[5016]: I1011 08:36:17.828927 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5458f77c4-tdbg9" podUID="cfabcb8e-bad0-4179-81d3-0d6c2a874793" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 08:36:19 crc kubenswrapper[5016]: I1011 08:36:19.153762 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfabcb8e-bad0-4179-81d3-0d6c2a874793" path="/var/lib/kubelet/pods/cfabcb8e-bad0-4179-81d3-0d6c2a874793/volumes" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.344367 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s00-full"] Oct 11 08:38:21 crc kubenswrapper[5016]: E1011 08:38:21.345685 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfabcb8e-bad0-4179-81d3-0d6c2a874793" containerName="manager" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.345704 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfabcb8e-bad0-4179-81d3-0d6c2a874793" containerName="manager" Oct 11 08:38:21 crc kubenswrapper[5016]: E1011 08:38:21.345743 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfabcb8e-bad0-4179-81d3-0d6c2a874793" containerName="kube-rbac-proxy" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.345755 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfabcb8e-bad0-4179-81d3-0d6c2a874793" containerName="kube-rbac-proxy" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.346110 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfabcb8e-bad0-4179-81d3-0d6c2a874793" containerName="kube-rbac-proxy" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.346131 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfabcb8e-bad0-4179-81d3-0d6c2a874793" containerName="manager"
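The "ContainerStatus from runtime service failed" / "DeleteContainer returned error" pairs above are benign: the kubelet re-issues RemoveContainer for IDs CRI-O has already purged, and NotFound simply confirms the desired end state. A minimal sketch of that idempotent-delete pattern, assuming any gRPC-backed runtime client; the remover interface and fakeRuntime below are hypothetical stand-ins for the CRI runtime service, not kubelet code:

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// remover stands in for the CRI runtime client; hypothetical interface
// used only for this illustration.
type remover interface {
	RemoveContainer(ctx context.Context, id string) error
}

// removeIfPresent treats NotFound as success: the container is already
// gone, which is the end state a delete is trying to reach anyway.
func removeIfPresent(ctx context.Context, c remover, id string) error {
	err := c.RemoveContainer(ctx, id)
	if status.Code(err) == codes.NotFound {
		return nil // already removed by an earlier attempt
	}
	return err
}

// fakeRuntime always answers NotFound, mimicking the log's retries
// against containers CRI-O has already deleted.
type fakeRuntime struct{}

func (fakeRuntime) RemoveContainer(ctx context.Context, id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

func main() {
	err := removeIfPresent(context.Background(), fakeRuntime{}, "51109f48")
	fmt.Println("delete outcome:", err) // <nil>: NotFound was swallowed
}
```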
"RemoveStaleState removing state" podUID="cfabcb8e-bad0-4179-81d3-0d6c2a874793" containerName="manager" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.347237 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.354042 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-full"] Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.379023 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/19930010-7a7e-4c76-a81e-85e049ff1da4-openstack-config\") pod \"tempest-tests-tempest-s00-full\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.379083 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19930010-7a7e-4c76-a81e-85e049ff1da4-config-data\") pod \"tempest-tests-tempest-s00-full\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.379696 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/19930010-7a7e-4c76-a81e-85e049ff1da4-openstack-config-secret\") pod \"tempest-tests-tempest-s00-full\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.390927 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.391030 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-kgfkw" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.391297 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.391445 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.481469 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.481752 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/19930010-7a7e-4c76-a81e-85e049ff1da4-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-full\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.481794 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/19930010-7a7e-4c76-a81e-85e049ff1da4-ceph\") pod \"tempest-tests-tempest-s00-full\" (UID: 
\"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.481813 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/19930010-7a7e-4c76-a81e-85e049ff1da4-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-full\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.481920 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/19930010-7a7e-4c76-a81e-85e049ff1da4-openstack-config\") pod \"tempest-tests-tempest-s00-full\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.481962 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19930010-7a7e-4c76-a81e-85e049ff1da4-config-data\") pod \"tempest-tests-tempest-s00-full\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.482007 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/19930010-7a7e-4c76-a81e-85e049ff1da4-ca-certs\") pod \"tempest-tests-tempest-s00-full\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.482045 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqvf8\" (UniqueName: \"kubernetes.io/projected/19930010-7a7e-4c76-a81e-85e049ff1da4-kube-api-access-gqvf8\") pod \"tempest-tests-tempest-s00-full\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.482328 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/19930010-7a7e-4c76-a81e-85e049ff1da4-ssh-key\") pod \"tempest-tests-tempest-s00-full\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.482406 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/19930010-7a7e-4c76-a81e-85e049ff1da4-openstack-config-secret\") pod \"tempest-tests-tempest-s00-full\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.483329 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19930010-7a7e-4c76-a81e-85e049ff1da4-config-data\") pod \"tempest-tests-tempest-s00-full\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.484084 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/19930010-7a7e-4c76-a81e-85e049ff1da4-openstack-config\") pod \"tempest-tests-tempest-s00-full\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.490031 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/19930010-7a7e-4c76-a81e-85e049ff1da4-openstack-config-secret\") pod \"tempest-tests-tempest-s00-full\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.585620 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/19930010-7a7e-4c76-a81e-85e049ff1da4-ca-certs\") pod \"tempest-tests-tempest-s00-full\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.585752 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqvf8\" (UniqueName: \"kubernetes.io/projected/19930010-7a7e-4c76-a81e-85e049ff1da4-kube-api-access-gqvf8\") pod \"tempest-tests-tempest-s00-full\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.585887 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/19930010-7a7e-4c76-a81e-85e049ff1da4-ssh-key\") pod \"tempest-tests-tempest-s00-full\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.585982 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.586037 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/19930010-7a7e-4c76-a81e-85e049ff1da4-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-full\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.586101 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/19930010-7a7e-4c76-a81e-85e049ff1da4-ceph\") pod \"tempest-tests-tempest-s00-full\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.586138 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/19930010-7a7e-4c76-a81e-85e049ff1da4-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-full\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.586337 5016 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" 
Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.586490 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/19930010-7a7e-4c76-a81e-85e049ff1da4-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-full\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.587099 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/19930010-7a7e-4c76-a81e-85e049ff1da4-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-full\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.592524 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/19930010-7a7e-4c76-a81e-85e049ff1da4-ca-certs\") pod \"tempest-tests-tempest-s00-full\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.593313 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/19930010-7a7e-4c76-a81e-85e049ff1da4-ssh-key\") pod \"tempest-tests-tempest-s00-full\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.602082 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/19930010-7a7e-4c76-a81e-85e049ff1da4-ceph\") pod \"tempest-tests-tempest-s00-full\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.605348 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqvf8\" (UniqueName: \"kubernetes.io/projected/19930010-7a7e-4c76-a81e-85e049ff1da4-kube-api-access-gqvf8\") pod \"tempest-tests-tempest-s00-full\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.617532 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:21 crc kubenswrapper[5016]: I1011 08:38:21.703835 5016 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Oct 11 08:38:22 crc kubenswrapper[5016]: I1011 08:38:22.285360 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-full"] Oct 11 08:38:22 crc kubenswrapper[5016]: I1011 08:38:22.859218 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"19930010-7a7e-4c76-a81e-85e049ff1da4","Type":"ContainerStarted","Data":"d784512daa6d309669323858354c833f4d25ce4f8c096eff33a2693b6b37a175"} Oct 11 08:38:37 crc kubenswrapper[5016]: I1011 08:38:37.122395 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 08:38:37 crc kubenswrapper[5016]: I1011 08:38:37.123264 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 08:39:01 crc kubenswrapper[5016]: E1011 08:39:01.720422 5016 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Oct 11 08:39:01 crc kubenswrapper[5016]: E1011 08:39:01.721457 5016 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceph,ReadOnly:true,MountPath:/etc/ceph,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gqvf8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest-s00-full_openstack(19930010-7a7e-4c76-a81e-85e049ff1da4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 11 08:39:01 crc kubenswrapper[5016]: E1011 08:39:01.722818 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest-s00-full" podUID="19930010-7a7e-4c76-a81e-85e049ff1da4" Oct 11 08:39:02 crc kubenswrapper[5016]: E1011 08:39:02.295011 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest-s00-full" podUID="19930010-7a7e-4c76-a81e-85e049ff1da4" Oct 11 08:39:07 crc kubenswrapper[5016]: I1011 08:39:07.122000 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 08:39:07 crc kubenswrapper[5016]: I1011 08:39:07.123600 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 08:39:15 crc kubenswrapper[5016]: I1011 08:39:15.138540 5016 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 08:39:15 crc kubenswrapper[5016]: I1011 08:39:15.614269 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Oct 11 08:39:17 crc kubenswrapper[5016]: I1011 08:39:17.496373 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" 
event={"ID":"19930010-7a7e-4c76-a81e-85e049ff1da4","Type":"ContainerStarted","Data":"07a0f3da5771f39f1842974593d044bc792163238e72657d8f45094d41ace8af"} Oct 11 08:39:17 crc kubenswrapper[5016]: I1011 08:39:17.535840 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s00-full" podStartSLOduration=4.218388066 podStartE2EDuration="57.535812789s" podCreationTimestamp="2025-10-11 08:38:20 +0000 UTC" firstStartedPulling="2025-10-11 08:38:22.290963496 +0000 UTC m=+3490.191419442" lastFinishedPulling="2025-10-11 08:39:15.608388179 +0000 UTC m=+3543.508844165" observedRunningTime="2025-10-11 08:39:17.524302535 +0000 UTC m=+3545.424758511" watchObservedRunningTime="2025-10-11 08:39:17.535812789 +0000 UTC m=+3545.436268745" Oct 11 08:39:31 crc kubenswrapper[5016]: I1011 08:39:31.740572 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m5kwv"] Oct 11 08:39:31 crc kubenswrapper[5016]: I1011 08:39:31.751496 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m5kwv" Oct 11 08:39:31 crc kubenswrapper[5016]: I1011 08:39:31.769710 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m5kwv"] Oct 11 08:39:31 crc kubenswrapper[5016]: I1011 08:39:31.873547 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c59ffe0-0493-4e67-b791-e80b89d57695-utilities\") pod \"community-operators-m5kwv\" (UID: \"6c59ffe0-0493-4e67-b791-e80b89d57695\") " pod="openshift-marketplace/community-operators-m5kwv" Oct 11 08:39:31 crc kubenswrapper[5016]: I1011 08:39:31.873682 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c59ffe0-0493-4e67-b791-e80b89d57695-catalog-content\") pod \"community-operators-m5kwv\" (UID: \"6c59ffe0-0493-4e67-b791-e80b89d57695\") " pod="openshift-marketplace/community-operators-m5kwv" Oct 11 08:39:31 crc kubenswrapper[5016]: I1011 08:39:31.873774 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncgt7\" (UniqueName: \"kubernetes.io/projected/6c59ffe0-0493-4e67-b791-e80b89d57695-kube-api-access-ncgt7\") pod \"community-operators-m5kwv\" (UID: \"6c59ffe0-0493-4e67-b791-e80b89d57695\") " pod="openshift-marketplace/community-operators-m5kwv" Oct 11 08:39:31 crc kubenswrapper[5016]: I1011 08:39:31.976345 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c59ffe0-0493-4e67-b791-e80b89d57695-catalog-content\") pod \"community-operators-m5kwv\" (UID: \"6c59ffe0-0493-4e67-b791-e80b89d57695\") " pod="openshift-marketplace/community-operators-m5kwv" Oct 11 08:39:31 crc kubenswrapper[5016]: I1011 08:39:31.976398 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncgt7\" (UniqueName: \"kubernetes.io/projected/6c59ffe0-0493-4e67-b791-e80b89d57695-kube-api-access-ncgt7\") pod \"community-operators-m5kwv\" (UID: \"6c59ffe0-0493-4e67-b791-e80b89d57695\") " pod="openshift-marketplace/community-operators-m5kwv" Oct 11 08:39:31 crc kubenswrapper[5016]: I1011 08:39:31.976538 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6c59ffe0-0493-4e67-b791-e80b89d57695-utilities\") pod \"community-operators-m5kwv\" (UID: \"6c59ffe0-0493-4e67-b791-e80b89d57695\") " pod="openshift-marketplace/community-operators-m5kwv" Oct 11 08:39:31 crc kubenswrapper[5016]: I1011 08:39:31.977274 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c59ffe0-0493-4e67-b791-e80b89d57695-utilities\") pod \"community-operators-m5kwv\" (UID: \"6c59ffe0-0493-4e67-b791-e80b89d57695\") " pod="openshift-marketplace/community-operators-m5kwv" Oct 11 08:39:31 crc kubenswrapper[5016]: I1011 08:39:31.977499 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c59ffe0-0493-4e67-b791-e80b89d57695-catalog-content\") pod \"community-operators-m5kwv\" (UID: \"6c59ffe0-0493-4e67-b791-e80b89d57695\") " pod="openshift-marketplace/community-operators-m5kwv" Oct 11 08:39:32 crc kubenswrapper[5016]: I1011 08:39:32.007691 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncgt7\" (UniqueName: \"kubernetes.io/projected/6c59ffe0-0493-4e67-b791-e80b89d57695-kube-api-access-ncgt7\") pod \"community-operators-m5kwv\" (UID: \"6c59ffe0-0493-4e67-b791-e80b89d57695\") " pod="openshift-marketplace/community-operators-m5kwv" Oct 11 08:39:32 crc kubenswrapper[5016]: I1011 08:39:32.091365 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m5kwv" Oct 11 08:39:32 crc kubenswrapper[5016]: I1011 08:39:32.675817 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m5kwv"] Oct 11 08:39:33 crc kubenswrapper[5016]: I1011 08:39:33.708361 5016 generic.go:334] "Generic (PLEG): container finished" podID="6c59ffe0-0493-4e67-b791-e80b89d57695" containerID="52a46e430411bdaf3cfb37f67a399b3f3db7931e9b175f23a305d9b003a947c7" exitCode=0 Oct 11 08:39:33 crc kubenswrapper[5016]: I1011 08:39:33.708801 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m5kwv" event={"ID":"6c59ffe0-0493-4e67-b791-e80b89d57695","Type":"ContainerDied","Data":"52a46e430411bdaf3cfb37f67a399b3f3db7931e9b175f23a305d9b003a947c7"} Oct 11 08:39:33 crc kubenswrapper[5016]: I1011 08:39:33.708843 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m5kwv" event={"ID":"6c59ffe0-0493-4e67-b791-e80b89d57695","Type":"ContainerStarted","Data":"2f6854f8bdea8dfcc78fe03b7fb1f2880f04896c9725e5a28594140ad0295335"} Oct 11 08:39:34 crc kubenswrapper[5016]: I1011 08:39:34.727131 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m5kwv" event={"ID":"6c59ffe0-0493-4e67-b791-e80b89d57695","Type":"ContainerStarted","Data":"f069c630840d673498136bb06dda381037a9acb14372823cfb41aa29f17e1dcc"} Oct 11 08:39:35 crc kubenswrapper[5016]: I1011 08:39:35.745620 5016 generic.go:334] "Generic (PLEG): container finished" podID="6c59ffe0-0493-4e67-b791-e80b89d57695" containerID="f069c630840d673498136bb06dda381037a9acb14372823cfb41aa29f17e1dcc" exitCode=0 Oct 11 08:39:35 crc kubenswrapper[5016]: I1011 08:39:35.745749 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m5kwv" event={"ID":"6c59ffe0-0493-4e67-b791-e80b89d57695","Type":"ContainerDied","Data":"f069c630840d673498136bb06dda381037a9acb14372823cfb41aa29f17e1dcc"} Oct 
11 08:39:36 crc kubenswrapper[5016]: I1011 08:39:36.765552 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m5kwv" event={"ID":"6c59ffe0-0493-4e67-b791-e80b89d57695","Type":"ContainerStarted","Data":"7f73b66351a1ca759af95f531714680597855aafa335ee52b02ec4a9526e4bfa"} Oct 11 08:39:36 crc kubenswrapper[5016]: I1011 08:39:36.792120 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m5kwv" podStartSLOduration=3.253984549 podStartE2EDuration="5.792090391s" podCreationTimestamp="2025-10-11 08:39:31 +0000 UTC" firstStartedPulling="2025-10-11 08:39:33.711428351 +0000 UTC m=+3561.611884337" lastFinishedPulling="2025-10-11 08:39:36.249534223 +0000 UTC m=+3564.149990179" observedRunningTime="2025-10-11 08:39:36.787798067 +0000 UTC m=+3564.688254053" watchObservedRunningTime="2025-10-11 08:39:36.792090391 +0000 UTC m=+3564.692546387" Oct 11 08:39:37 crc kubenswrapper[5016]: I1011 08:39:37.122301 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 08:39:37 crc kubenswrapper[5016]: I1011 08:39:37.122416 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 08:39:37 crc kubenswrapper[5016]: I1011 08:39:37.122492 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 08:39:37 crc kubenswrapper[5016]: I1011 08:39:37.123822 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e"} pod="openshift-machine-config-operator/machine-config-daemon-49bvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 11 08:39:37 crc kubenswrapper[5016]: I1011 08:39:37.124002 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" containerID="cri-o://0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e" gracePeriod=600 Oct 11 08:39:37 crc kubenswrapper[5016]: E1011 08:39:37.257258 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:39:37 crc kubenswrapper[5016]: I1011 08:39:37.780538 5016 generic.go:334] "Generic (PLEG): container finished" podID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerID="0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e" exitCode=0 Oct 11 08:39:37 crc kubenswrapper[5016]: I1011 08:39:37.780645 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerDied","Data":"0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e"}
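The liveness failure above is "connect: connection refused": nothing was listening on 127.0.0.1:8798 at all, so the kubelet declares the probe unhealthy and kills the container with the pod's grace period (gracePeriod=600). A minimal sketch of the contract the prober checks; the real machine-config-daemon handler lives in the MCO codebase, this only illustrates that any listener answering 2xx on GET /health at that address would pass the probe:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Any 2xx response on GET /health satisfies an httpGet liveness probe.
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		fmt.Fprintln(w, "ok")
	})
	// Same loopback address and port the kubelet probes in the log above;
	// with no such listener running, the probe fails instantly with
	// "connection refused" rather than timing out.
	if err := http.ListenAndServe("127.0.0.1:8798", nil); err != nil {
		panic(err)
	}
}
```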
Oct 11 08:39:37 crc kubenswrapper[5016]: I1011 08:39:37.781613 5016 scope.go:117] "RemoveContainer" containerID="62b966693cde380525833f9965a580c59298058b4e14e614272a3bd58f638ea3" Oct 11 08:39:37 crc kubenswrapper[5016]: I1011 08:39:37.782558 5016 scope.go:117] "RemoveContainer" containerID="0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e" Oct 11 08:39:37 crc kubenswrapper[5016]: E1011 08:39:37.783037 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:39:42 crc kubenswrapper[5016]: I1011 08:39:42.093311 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m5kwv" Oct 11 08:39:42 crc kubenswrapper[5016]: I1011 08:39:42.094168 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m5kwv" Oct 11 08:39:42 crc kubenswrapper[5016]: I1011 08:39:42.178240 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m5kwv" Oct 11 08:39:42 crc kubenswrapper[5016]: I1011 08:39:42.952126 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m5kwv" Oct 11 08:39:43 crc kubenswrapper[5016]: I1011 08:39:43.025779 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m5kwv"] Oct 11 08:39:44 crc kubenswrapper[5016]: I1011 08:39:44.891970 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m5kwv" podUID="6c59ffe0-0493-4e67-b791-e80b89d57695" containerName="registry-server" containerID="cri-o://7f73b66351a1ca759af95f531714680597855aafa335ee52b02ec4a9526e4bfa" gracePeriod=2 Oct 11 08:39:45 crc kubenswrapper[5016]: I1011 08:39:45.433570 5016 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-m5kwv" Oct 11 08:39:45 crc kubenswrapper[5016]: I1011 08:39:45.459584 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c59ffe0-0493-4e67-b791-e80b89d57695-utilities\") pod \"6c59ffe0-0493-4e67-b791-e80b89d57695\" (UID: \"6c59ffe0-0493-4e67-b791-e80b89d57695\") " Oct 11 08:39:45 crc kubenswrapper[5016]: I1011 08:39:45.459762 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncgt7\" (UniqueName: \"kubernetes.io/projected/6c59ffe0-0493-4e67-b791-e80b89d57695-kube-api-access-ncgt7\") pod \"6c59ffe0-0493-4e67-b791-e80b89d57695\" (UID: \"6c59ffe0-0493-4e67-b791-e80b89d57695\") " Oct 11 08:39:45 crc kubenswrapper[5016]: I1011 08:39:45.459914 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c59ffe0-0493-4e67-b791-e80b89d57695-catalog-content\") pod \"6c59ffe0-0493-4e67-b791-e80b89d57695\" (UID: \"6c59ffe0-0493-4e67-b791-e80b89d57695\") " Oct 11 08:39:45 crc kubenswrapper[5016]: I1011 08:39:45.471785 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c59ffe0-0493-4e67-b791-e80b89d57695-utilities" (OuterVolumeSpecName: "utilities") pod "6c59ffe0-0493-4e67-b791-e80b89d57695" (UID: "6c59ffe0-0493-4e67-b791-e80b89d57695"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:39:45 crc kubenswrapper[5016]: I1011 08:39:45.488607 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c59ffe0-0493-4e67-b791-e80b89d57695-kube-api-access-ncgt7" (OuterVolumeSpecName: "kube-api-access-ncgt7") pod "6c59ffe0-0493-4e67-b791-e80b89d57695" (UID: "6c59ffe0-0493-4e67-b791-e80b89d57695"). InnerVolumeSpecName "kube-api-access-ncgt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:39:45 crc kubenswrapper[5016]: I1011 08:39:45.519309 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c59ffe0-0493-4e67-b791-e80b89d57695-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c59ffe0-0493-4e67-b791-e80b89d57695" (UID: "6c59ffe0-0493-4e67-b791-e80b89d57695"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:39:45 crc kubenswrapper[5016]: I1011 08:39:45.562753 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c59ffe0-0493-4e67-b791-e80b89d57695-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 08:39:45 crc kubenswrapper[5016]: I1011 08:39:45.562794 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncgt7\" (UniqueName: \"kubernetes.io/projected/6c59ffe0-0493-4e67-b791-e80b89d57695-kube-api-access-ncgt7\") on node \"crc\" DevicePath \"\"" Oct 11 08:39:45 crc kubenswrapper[5016]: I1011 08:39:45.562813 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c59ffe0-0493-4e67-b791-e80b89d57695-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 08:39:45 crc kubenswrapper[5016]: I1011 08:39:45.907311 5016 generic.go:334] "Generic (PLEG): container finished" podID="6c59ffe0-0493-4e67-b791-e80b89d57695" containerID="7f73b66351a1ca759af95f531714680597855aafa335ee52b02ec4a9526e4bfa" exitCode=0 Oct 11 08:39:45 crc kubenswrapper[5016]: I1011 08:39:45.907384 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m5kwv" Oct 11 08:39:45 crc kubenswrapper[5016]: I1011 08:39:45.907388 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m5kwv" event={"ID":"6c59ffe0-0493-4e67-b791-e80b89d57695","Type":"ContainerDied","Data":"7f73b66351a1ca759af95f531714680597855aafa335ee52b02ec4a9526e4bfa"} Oct 11 08:39:45 crc kubenswrapper[5016]: I1011 08:39:45.907801 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m5kwv" event={"ID":"6c59ffe0-0493-4e67-b791-e80b89d57695","Type":"ContainerDied","Data":"2f6854f8bdea8dfcc78fe03b7fb1f2880f04896c9725e5a28594140ad0295335"} Oct 11 08:39:45 crc kubenswrapper[5016]: I1011 08:39:45.907824 5016 scope.go:117] "RemoveContainer" containerID="7f73b66351a1ca759af95f531714680597855aafa335ee52b02ec4a9526e4bfa" Oct 11 08:39:45 crc kubenswrapper[5016]: I1011 08:39:45.947118 5016 scope.go:117] "RemoveContainer" containerID="f069c630840d673498136bb06dda381037a9acb14372823cfb41aa29f17e1dcc" Oct 11 08:39:45 crc kubenswrapper[5016]: I1011 08:39:45.965802 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m5kwv"] Oct 11 08:39:45 crc kubenswrapper[5016]: I1011 08:39:45.981133 5016 scope.go:117] "RemoveContainer" containerID="52a46e430411bdaf3cfb37f67a399b3f3db7931e9b175f23a305d9b003a947c7" Oct 11 08:39:45 crc kubenswrapper[5016]: I1011 08:39:45.984571 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m5kwv"] Oct 11 08:39:46 crc kubenswrapper[5016]: I1011 08:39:46.047621 5016 scope.go:117] "RemoveContainer" containerID="7f73b66351a1ca759af95f531714680597855aafa335ee52b02ec4a9526e4bfa" Oct 11 08:39:46 crc kubenswrapper[5016]: E1011 08:39:46.048096 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f73b66351a1ca759af95f531714680597855aafa335ee52b02ec4a9526e4bfa\": container with ID starting with 7f73b66351a1ca759af95f531714680597855aafa335ee52b02ec4a9526e4bfa not found: ID does not exist" containerID="7f73b66351a1ca759af95f531714680597855aafa335ee52b02ec4a9526e4bfa" Oct 11 08:39:46 crc kubenswrapper[5016]: I1011 08:39:46.048160 
5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f73b66351a1ca759af95f531714680597855aafa335ee52b02ec4a9526e4bfa"} err="failed to get container status \"7f73b66351a1ca759af95f531714680597855aafa335ee52b02ec4a9526e4bfa\": rpc error: code = NotFound desc = could not find container \"7f73b66351a1ca759af95f531714680597855aafa335ee52b02ec4a9526e4bfa\": container with ID starting with 7f73b66351a1ca759af95f531714680597855aafa335ee52b02ec4a9526e4bfa not found: ID does not exist" Oct 11 08:39:46 crc kubenswrapper[5016]: I1011 08:39:46.048195 5016 scope.go:117] "RemoveContainer" containerID="f069c630840d673498136bb06dda381037a9acb14372823cfb41aa29f17e1dcc" Oct 11 08:39:46 crc kubenswrapper[5016]: E1011 08:39:46.048462 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f069c630840d673498136bb06dda381037a9acb14372823cfb41aa29f17e1dcc\": container with ID starting with f069c630840d673498136bb06dda381037a9acb14372823cfb41aa29f17e1dcc not found: ID does not exist" containerID="f069c630840d673498136bb06dda381037a9acb14372823cfb41aa29f17e1dcc" Oct 11 08:39:46 crc kubenswrapper[5016]: I1011 08:39:46.048497 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f069c630840d673498136bb06dda381037a9acb14372823cfb41aa29f17e1dcc"} err="failed to get container status \"f069c630840d673498136bb06dda381037a9acb14372823cfb41aa29f17e1dcc\": rpc error: code = NotFound desc = could not find container \"f069c630840d673498136bb06dda381037a9acb14372823cfb41aa29f17e1dcc\": container with ID starting with f069c630840d673498136bb06dda381037a9acb14372823cfb41aa29f17e1dcc not found: ID does not exist" Oct 11 08:39:46 crc kubenswrapper[5016]: I1011 08:39:46.048515 5016 scope.go:117] "RemoveContainer" containerID="52a46e430411bdaf3cfb37f67a399b3f3db7931e9b175f23a305d9b003a947c7" Oct 11 08:39:46 crc kubenswrapper[5016]: E1011 08:39:46.048743 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52a46e430411bdaf3cfb37f67a399b3f3db7931e9b175f23a305d9b003a947c7\": container with ID starting with 52a46e430411bdaf3cfb37f67a399b3f3db7931e9b175f23a305d9b003a947c7 not found: ID does not exist" containerID="52a46e430411bdaf3cfb37f67a399b3f3db7931e9b175f23a305d9b003a947c7" Oct 11 08:39:46 crc kubenswrapper[5016]: I1011 08:39:46.048768 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52a46e430411bdaf3cfb37f67a399b3f3db7931e9b175f23a305d9b003a947c7"} err="failed to get container status \"52a46e430411bdaf3cfb37f67a399b3f3db7931e9b175f23a305d9b003a947c7\": rpc error: code = NotFound desc = could not find container \"52a46e430411bdaf3cfb37f67a399b3f3db7931e9b175f23a305d9b003a947c7\": container with ID starting with 52a46e430411bdaf3cfb37f67a399b3f3db7931e9b175f23a305d9b003a947c7 not found: ID does not exist" Oct 11 08:39:47 crc kubenswrapper[5016]: I1011 08:39:47.146944 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c59ffe0-0493-4e67-b791-e80b89d57695" path="/var/lib/kubelet/pods/6c59ffe0-0493-4e67-b791-e80b89d57695/volumes" Oct 11 08:39:52 crc kubenswrapper[5016]: I1011 08:39:52.132982 5016 scope.go:117] "RemoveContainer" containerID="0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e" Oct 11 08:39:52 crc kubenswrapper[5016]: E1011 08:39:52.134228 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
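From here the excerpt settles into a steady rhythm: the pod worker keeps resyncing machine-config-daemon-49bvc, attempts StartContainer, and is refused with the same "back-off 5m0s" message each time, i.e. the crash-loop back-off has reached its ceiling. A sketch of the commonly documented restart back-off schedule; the 10s initial delay and doubling factor are assumptions for illustration, and only the 5m0s cap is confirmed by the log itself:

```go
package main

import (
	"fmt"
	"time"
)

// Restart delay doubles per crash until it hits a ceiling; once capped,
// every further retry waits the same 5m0s seen repeatedly above.
func main() {
	const (
		initial = 10 * time.Second // assumed starting delay
		ceiling = 5 * time.Minute  // matches "back-off 5m0s" in the log
	)
	delay := initial
	for restart := 1; restart <= 8; restart++ {
		fmt.Printf("restart %d: wait %s\n", restart, delay)
		delay *= 2
		if delay > ceiling {
			delay = ceiling // back-off stops growing here
		}
	}
}
```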
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:40:03 crc kubenswrapper[5016]: I1011 08:40:03.152648 5016 scope.go:117] "RemoveContainer" containerID="0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e" Oct 11 08:40:03 crc kubenswrapper[5016]: E1011 08:40:03.154047 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:40:15 crc kubenswrapper[5016]: I1011 08:40:15.134111 5016 scope.go:117] "RemoveContainer" containerID="0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e" Oct 11 08:40:15 crc kubenswrapper[5016]: E1011 08:40:15.135360 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:40:30 crc kubenswrapper[5016]: I1011 08:40:30.134482 5016 scope.go:117] "RemoveContainer" containerID="0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e" Oct 11 08:40:30 crc kubenswrapper[5016]: E1011 08:40:30.137114 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:40:44 crc kubenswrapper[5016]: I1011 08:40:44.133270 5016 scope.go:117] "RemoveContainer" containerID="0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e" Oct 11 08:40:44 crc kubenswrapper[5016]: E1011 08:40:44.134514 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:40:55 crc kubenswrapper[5016]: I1011 08:40:55.133518 5016 scope.go:117] "RemoveContainer" containerID="0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e" Oct 11 08:40:55 crc kubenswrapper[5016]: E1011 08:40:55.134646 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:41:10 crc kubenswrapper[5016]: I1011 08:41:10.134206 5016 scope.go:117] "RemoveContainer" containerID="0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e" Oct 11 08:41:10 crc kubenswrapper[5016]: E1011 08:41:10.135261 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:41:23 crc kubenswrapper[5016]: I1011 08:41:23.140021 5016 scope.go:117] "RemoveContainer" containerID="0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e" Oct 11 08:41:23 crc kubenswrapper[5016]: E1011 08:41:23.140733 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:41:36 crc kubenswrapper[5016]: I1011 08:41:36.133571 5016 scope.go:117] "RemoveContainer" containerID="0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e" Oct 11 08:41:36 crc kubenswrapper[5016]: E1011 08:41:36.134940 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:41:49 crc kubenswrapper[5016]: I1011 08:41:49.133985 5016 scope.go:117] "RemoveContainer" containerID="0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e" Oct 11 08:41:49 crc kubenswrapper[5016]: E1011 08:41:49.134983 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:42:01 crc kubenswrapper[5016]: I1011 08:42:01.134282 5016 scope.go:117] "RemoveContainer" containerID="0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e" Oct 11 08:42:01 crc kubenswrapper[5016]: E1011 08:42:01.135456 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" 
podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:42:12 crc kubenswrapper[5016]: I1011 08:42:12.134373 5016 scope.go:117] "RemoveContainer" containerID="0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e" Oct 11 08:42:12 crc kubenswrapper[5016]: E1011 08:42:12.135423 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:42:20 crc kubenswrapper[5016]: I1011 08:42:20.814002 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gst8b"] Oct 11 08:42:20 crc kubenswrapper[5016]: E1011 08:42:20.815455 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c59ffe0-0493-4e67-b791-e80b89d57695" containerName="registry-server" Oct 11 08:42:20 crc kubenswrapper[5016]: I1011 08:42:20.815482 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c59ffe0-0493-4e67-b791-e80b89d57695" containerName="registry-server" Oct 11 08:42:20 crc kubenswrapper[5016]: E1011 08:42:20.815524 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c59ffe0-0493-4e67-b791-e80b89d57695" containerName="extract-content" Oct 11 08:42:20 crc kubenswrapper[5016]: I1011 08:42:20.815537 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c59ffe0-0493-4e67-b791-e80b89d57695" containerName="extract-content" Oct 11 08:42:20 crc kubenswrapper[5016]: E1011 08:42:20.815570 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c59ffe0-0493-4e67-b791-e80b89d57695" containerName="extract-utilities" Oct 11 08:42:20 crc kubenswrapper[5016]: I1011 08:42:20.815585 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c59ffe0-0493-4e67-b791-e80b89d57695" containerName="extract-utilities" Oct 11 08:42:20 crc kubenswrapper[5016]: I1011 08:42:20.816041 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c59ffe0-0493-4e67-b791-e80b89d57695" containerName="registry-server" Oct 11 08:42:20 crc kubenswrapper[5016]: I1011 08:42:20.818561 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gst8b"
Oct 11 08:42:20 crc kubenswrapper[5016]: I1011 08:42:20.848118 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gst8b"]
Oct 11 08:42:20 crc kubenswrapper[5016]: I1011 08:42:20.851136 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnmvv\" (UniqueName: \"kubernetes.io/projected/e40bb01c-eacd-42b4-951d-ddee56bd532d-kube-api-access-wnmvv\") pod \"redhat-marketplace-gst8b\" (UID: \"e40bb01c-eacd-42b4-951d-ddee56bd532d\") " pod="openshift-marketplace/redhat-marketplace-gst8b"
Oct 11 08:42:20 crc kubenswrapper[5016]: I1011 08:42:20.851358 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e40bb01c-eacd-42b4-951d-ddee56bd532d-utilities\") pod \"redhat-marketplace-gst8b\" (UID: \"e40bb01c-eacd-42b4-951d-ddee56bd532d\") " pod="openshift-marketplace/redhat-marketplace-gst8b"
Oct 11 08:42:20 crc kubenswrapper[5016]: I1011 08:42:20.851383 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e40bb01c-eacd-42b4-951d-ddee56bd532d-catalog-content\") pod \"redhat-marketplace-gst8b\" (UID: \"e40bb01c-eacd-42b4-951d-ddee56bd532d\") " pod="openshift-marketplace/redhat-marketplace-gst8b"
Oct 11 08:42:20 crc kubenswrapper[5016]: I1011 08:42:20.954607 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnmvv\" (UniqueName: \"kubernetes.io/projected/e40bb01c-eacd-42b4-951d-ddee56bd532d-kube-api-access-wnmvv\") pod \"redhat-marketplace-gst8b\" (UID: \"e40bb01c-eacd-42b4-951d-ddee56bd532d\") " pod="openshift-marketplace/redhat-marketplace-gst8b"
Oct 11 08:42:20 crc kubenswrapper[5016]: I1011 08:42:20.954995 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e40bb01c-eacd-42b4-951d-ddee56bd532d-utilities\") pod \"redhat-marketplace-gst8b\" (UID: \"e40bb01c-eacd-42b4-951d-ddee56bd532d\") " pod="openshift-marketplace/redhat-marketplace-gst8b"
Oct 11 08:42:20 crc kubenswrapper[5016]: I1011 08:42:20.955044 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e40bb01c-eacd-42b4-951d-ddee56bd532d-catalog-content\") pod \"redhat-marketplace-gst8b\" (UID: \"e40bb01c-eacd-42b4-951d-ddee56bd532d\") " pod="openshift-marketplace/redhat-marketplace-gst8b"
Oct 11 08:42:20 crc kubenswrapper[5016]: I1011 08:42:20.955764 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e40bb01c-eacd-42b4-951d-ddee56bd532d-catalog-content\") pod \"redhat-marketplace-gst8b\" (UID: \"e40bb01c-eacd-42b4-951d-ddee56bd532d\") " pod="openshift-marketplace/redhat-marketplace-gst8b"
Oct 11 08:42:20 crc kubenswrapper[5016]: I1011 08:42:20.955808 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e40bb01c-eacd-42b4-951d-ddee56bd532d-utilities\") pod \"redhat-marketplace-gst8b\" (UID: \"e40bb01c-eacd-42b4-951d-ddee56bd532d\") " pod="openshift-marketplace/redhat-marketplace-gst8b"
Oct 11 08:42:20 crc kubenswrapper[5016]: I1011 08:42:20.989013 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnmvv\" (UniqueName: \"kubernetes.io/projected/e40bb01c-eacd-42b4-951d-ddee56bd532d-kube-api-access-wnmvv\") pod \"redhat-marketplace-gst8b\" (UID: \"e40bb01c-eacd-42b4-951d-ddee56bd532d\") " pod="openshift-marketplace/redhat-marketplace-gst8b"
Oct 11 08:42:21 crc kubenswrapper[5016]: I1011 08:42:21.159055 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gst8b"
Oct 11 08:42:21 crc kubenswrapper[5016]: I1011 08:42:21.649261 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gst8b"]
Oct 11 08:42:21 crc kubenswrapper[5016]: W1011 08:42:21.658125 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode40bb01c_eacd_42b4_951d_ddee56bd532d.slice/crio-b65992ef0b3d13e3cc476ecf3e38d262d07242a5de12b28b1ee4b8d3acf25a20 WatchSource:0}: Error finding container b65992ef0b3d13e3cc476ecf3e38d262d07242a5de12b28b1ee4b8d3acf25a20: Status 404 returned error can't find the container with id b65992ef0b3d13e3cc476ecf3e38d262d07242a5de12b28b1ee4b8d3acf25a20
Oct 11 08:42:22 crc kubenswrapper[5016]: I1011 08:42:22.022251 5016 generic.go:334] "Generic (PLEG): container finished" podID="e40bb01c-eacd-42b4-951d-ddee56bd532d" containerID="1115995f578bbc6370407939a63cf60383c172b1b1adfee12c570a0bd9a6d065" exitCode=0
Oct 11 08:42:22 crc kubenswrapper[5016]: I1011 08:42:22.024416 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gst8b" event={"ID":"e40bb01c-eacd-42b4-951d-ddee56bd532d","Type":"ContainerDied","Data":"1115995f578bbc6370407939a63cf60383c172b1b1adfee12c570a0bd9a6d065"}
Oct 11 08:42:22 crc kubenswrapper[5016]: I1011 08:42:22.024600 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gst8b" event={"ID":"e40bb01c-eacd-42b4-951d-ddee56bd532d","Type":"ContainerStarted","Data":"b65992ef0b3d13e3cc476ecf3e38d262d07242a5de12b28b1ee4b8d3acf25a20"}
Oct 11 08:42:25 crc kubenswrapper[5016]: I1011 08:42:25.073905 5016 generic.go:334] "Generic (PLEG): container finished" podID="e40bb01c-eacd-42b4-951d-ddee56bd532d" containerID="369e7d8b60f6cc95d6cf9d16119efb951ad7f5b225f68a320797d38ceae40655" exitCode=0
Oct 11 08:42:25 crc kubenswrapper[5016]: I1011 08:42:25.073958 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gst8b" event={"ID":"e40bb01c-eacd-42b4-951d-ddee56bd532d","Type":"ContainerDied","Data":"369e7d8b60f6cc95d6cf9d16119efb951ad7f5b225f68a320797d38ceae40655"}
Oct 11 08:42:25 crc kubenswrapper[5016]: I1011 08:42:25.134248 5016 scope.go:117] "RemoveContainer" containerID="0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e"
Oct 11 08:42:25 crc kubenswrapper[5016]: E1011 08:42:25.134890 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:42:26 crc kubenswrapper[5016]: I1011 08:42:26.086531 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gst8b" event={"ID":"e40bb01c-eacd-42b4-951d-ddee56bd532d","Type":"ContainerStarted","Data":"f303e2d6ae7a3add2a8df981dfed50ebbc5cd8f58cd119f7dd8fadd29de93299"}
Oct 11 08:42:26 crc kubenswrapper[5016]: I1011 08:42:26.107237 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gst8b" podStartSLOduration=3.439656344 podStartE2EDuration="6.107222377s" podCreationTimestamp="2025-10-11 08:42:20 +0000 UTC" firstStartedPulling="2025-10-11 08:42:23.042039754 +0000 UTC m=+3730.942495740" lastFinishedPulling="2025-10-11 08:42:25.709605817 +0000 UTC m=+3733.610061773" observedRunningTime="2025-10-11 08:42:26.105529031 +0000 UTC m=+3734.005984987" watchObservedRunningTime="2025-10-11 08:42:26.107222377 +0000 UTC m=+3734.007678323"
Oct 11 08:42:31 crc kubenswrapper[5016]: I1011 08:42:31.161018 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gst8b"
Oct 11 08:42:31 crc kubenswrapper[5016]: I1011 08:42:31.161596 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gst8b"
Oct 11 08:42:31 crc kubenswrapper[5016]: I1011 08:42:31.263092 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gst8b"
Oct 11 08:42:32 crc kubenswrapper[5016]: I1011 08:42:32.255363 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gst8b"
Oct 11 08:42:32 crc kubenswrapper[5016]: I1011 08:42:32.327971 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gst8b"]
Oct 11 08:42:34 crc kubenswrapper[5016]: I1011 08:42:34.186093 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gst8b" podUID="e40bb01c-eacd-42b4-951d-ddee56bd532d" containerName="registry-server" containerID="cri-o://f303e2d6ae7a3add2a8df981dfed50ebbc5cd8f58cd119f7dd8fadd29de93299" gracePeriod=2
Oct 11 08:42:38 crc kubenswrapper[5016]: I1011 08:42:38.909334 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="ae7b0f07-6360-46c1-8bc1-f89c5ac7a486" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out"
Oct 11 08:42:40 crc kubenswrapper[5016]: I1011 08:42:40.133368 5016 scope.go:117] "RemoveContainer" containerID="0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e"
Oct 11 08:42:40 crc kubenswrapper[5016]: E1011 08:42:40.134179 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:42:41 crc kubenswrapper[5016]: E1011 08:42:41.162295 5016 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f303e2d6ae7a3add2a8df981dfed50ebbc5cd8f58cd119f7dd8fadd29de93299 is running failed: container process not found" containerID="f303e2d6ae7a3add2a8df981dfed50ebbc5cd8f58cd119f7dd8fadd29de93299" cmd=["grpc_health_probe","-addr=:50051"]
Oct 11 08:42:41 crc kubenswrapper[5016]: E1011 08:42:41.162799 5016 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f303e2d6ae7a3add2a8df981dfed50ebbc5cd8f58cd119f7dd8fadd29de93299 is running failed: container process not found" containerID="f303e2d6ae7a3add2a8df981dfed50ebbc5cd8f58cd119f7dd8fadd29de93299" cmd=["grpc_health_probe","-addr=:50051"]
Oct 11 08:42:41 crc kubenswrapper[5016]: E1011 08:42:41.163393 5016 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f303e2d6ae7a3add2a8df981dfed50ebbc5cd8f58cd119f7dd8fadd29de93299 is running failed: container process not found" containerID="f303e2d6ae7a3add2a8df981dfed50ebbc5cd8f58cd119f7dd8fadd29de93299" cmd=["grpc_health_probe","-addr=:50051"]
Oct 11 08:42:41 crc kubenswrapper[5016]: E1011 08:42:41.163438 5016 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f303e2d6ae7a3add2a8df981dfed50ebbc5cd8f58cd119f7dd8fadd29de93299 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-gst8b" podUID="e40bb01c-eacd-42b4-951d-ddee56bd532d" containerName="registry-server"
Oct 11 08:42:41 crc kubenswrapper[5016]: I1011 08:42:41.370104 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gst8b_e40bb01c-eacd-42b4-951d-ddee56bd532d/registry-server/0.log"
Oct 11 08:42:41 crc kubenswrapper[5016]: I1011 08:42:41.371341 5016 generic.go:334] "Generic (PLEG): container finished" podID="e40bb01c-eacd-42b4-951d-ddee56bd532d" containerID="f303e2d6ae7a3add2a8df981dfed50ebbc5cd8f58cd119f7dd8fadd29de93299" exitCode=137
Oct 11 08:42:41 crc kubenswrapper[5016]: I1011 08:42:41.371377 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gst8b" event={"ID":"e40bb01c-eacd-42b4-951d-ddee56bd532d","Type":"ContainerDied","Data":"f303e2d6ae7a3add2a8df981dfed50ebbc5cd8f58cd119f7dd8fadd29de93299"}
Oct 11 08:42:41 crc kubenswrapper[5016]: I1011 08:42:41.477575 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gst8b_e40bb01c-eacd-42b4-951d-ddee56bd532d/registry-server/0.log"
Oct 11 08:42:41 crc kubenswrapper[5016]: I1011 08:42:41.478282 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gst8b"
Oct 11 08:42:41 crc kubenswrapper[5016]: I1011 08:42:41.548561 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnmvv\" (UniqueName: \"kubernetes.io/projected/e40bb01c-eacd-42b4-951d-ddee56bd532d-kube-api-access-wnmvv\") pod \"e40bb01c-eacd-42b4-951d-ddee56bd532d\" (UID: \"e40bb01c-eacd-42b4-951d-ddee56bd532d\") "
Oct 11 08:42:41 crc kubenswrapper[5016]: I1011 08:42:41.548626 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e40bb01c-eacd-42b4-951d-ddee56bd532d-utilities\") pod \"e40bb01c-eacd-42b4-951d-ddee56bd532d\" (UID: \"e40bb01c-eacd-42b4-951d-ddee56bd532d\") "
Oct 11 08:42:41 crc kubenswrapper[5016]: I1011 08:42:41.548941 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e40bb01c-eacd-42b4-951d-ddee56bd532d-catalog-content\") pod \"e40bb01c-eacd-42b4-951d-ddee56bd532d\" (UID: \"e40bb01c-eacd-42b4-951d-ddee56bd532d\") "
Oct 11 08:42:41 crc kubenswrapper[5016]: I1011 08:42:41.549738 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e40bb01c-eacd-42b4-951d-ddee56bd532d-utilities" (OuterVolumeSpecName: "utilities") pod "e40bb01c-eacd-42b4-951d-ddee56bd532d" (UID: "e40bb01c-eacd-42b4-951d-ddee56bd532d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 08:42:41 crc kubenswrapper[5016]: I1011 08:42:41.567094 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e40bb01c-eacd-42b4-951d-ddee56bd532d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e40bb01c-eacd-42b4-951d-ddee56bd532d" (UID: "e40bb01c-eacd-42b4-951d-ddee56bd532d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 08:42:41 crc kubenswrapper[5016]: I1011 08:42:41.571000 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e40bb01c-eacd-42b4-951d-ddee56bd532d-kube-api-access-wnmvv" (OuterVolumeSpecName: "kube-api-access-wnmvv") pod "e40bb01c-eacd-42b4-951d-ddee56bd532d" (UID: "e40bb01c-eacd-42b4-951d-ddee56bd532d"). InnerVolumeSpecName "kube-api-access-wnmvv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 08:42:41 crc kubenswrapper[5016]: I1011 08:42:41.651407 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e40bb01c-eacd-42b4-951d-ddee56bd532d-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 11 08:42:41 crc kubenswrapper[5016]: I1011 08:42:41.651446 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnmvv\" (UniqueName: \"kubernetes.io/projected/e40bb01c-eacd-42b4-951d-ddee56bd532d-kube-api-access-wnmvv\") on node \"crc\" DevicePath \"\""
Oct 11 08:42:41 crc kubenswrapper[5016]: I1011 08:42:41.651460 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e40bb01c-eacd-42b4-951d-ddee56bd532d-utilities\") on node \"crc\" DevicePath \"\""
Oct 11 08:42:42 crc kubenswrapper[5016]: I1011 08:42:42.391547 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gst8b" event={"ID":"e40bb01c-eacd-42b4-951d-ddee56bd532d","Type":"ContainerDied","Data":"b65992ef0b3d13e3cc476ecf3e38d262d07242a5de12b28b1ee4b8d3acf25a20"}
Oct 11 08:42:42 crc kubenswrapper[5016]: I1011 08:42:42.391707 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gst8b"
Oct 11 08:42:42 crc kubenswrapper[5016]: I1011 08:42:42.392687 5016 scope.go:117] "RemoveContainer" containerID="f303e2d6ae7a3add2a8df981dfed50ebbc5cd8f58cd119f7dd8fadd29de93299"
Oct 11 08:42:42 crc kubenswrapper[5016]: I1011 08:42:42.426454 5016 scope.go:117] "RemoveContainer" containerID="369e7d8b60f6cc95d6cf9d16119efb951ad7f5b225f68a320797d38ceae40655"
Oct 11 08:42:42 crc kubenswrapper[5016]: I1011 08:42:42.454969 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gst8b"]
Oct 11 08:42:42 crc kubenswrapper[5016]: I1011 08:42:42.461586 5016 scope.go:117] "RemoveContainer" containerID="1115995f578bbc6370407939a63cf60383c172b1b1adfee12c570a0bd9a6d065"
Oct 11 08:42:42 crc kubenswrapper[5016]: I1011 08:42:42.466254 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gst8b"]
Oct 11 08:42:43 crc kubenswrapper[5016]: I1011 08:42:43.143250 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e40bb01c-eacd-42b4-951d-ddee56bd532d" path="/var/lib/kubelet/pods/e40bb01c-eacd-42b4-951d-ddee56bd532d/volumes"
Oct 11 08:42:52 crc kubenswrapper[5016]: I1011 08:42:52.132990 5016 scope.go:117] "RemoveContainer" containerID="0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e"
Oct 11 08:42:52 crc kubenswrapper[5016]: E1011 08:42:52.133754 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:43:06 crc kubenswrapper[5016]: I1011 08:43:06.047903 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-r2942"]
Oct 11 08:43:06 crc kubenswrapper[5016]: I1011 08:43:06.056437 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-r2942"]
Oct 11 08:43:06 crc kubenswrapper[5016]: I1011 08:43:06.133184 5016 scope.go:117] "RemoveContainer" containerID="0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e"
Oct 11 08:43:06 crc kubenswrapper[5016]: E1011 08:43:06.133528 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:43:07 crc kubenswrapper[5016]: I1011 08:43:07.144385 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aed6fb59-8a64-4859-9abe-acd0743490c6" path="/var/lib/kubelet/pods/aed6fb59-8a64-4859-9abe-acd0743490c6/volumes"
Oct 11 08:43:10 crc kubenswrapper[5016]: I1011 08:43:10.282973 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cjpxs"]
Oct 11 08:43:10 crc kubenswrapper[5016]: E1011 08:43:10.283743 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e40bb01c-eacd-42b4-951d-ddee56bd532d" containerName="registry-server"
Oct 11 08:43:10 crc kubenswrapper[5016]: I1011 08:43:10.283760 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="e40bb01c-eacd-42b4-951d-ddee56bd532d" containerName="registry-server"
Oct 11 08:43:10 crc kubenswrapper[5016]: E1011 08:43:10.283833 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e40bb01c-eacd-42b4-951d-ddee56bd532d" containerName="extract-content"
Oct 11 08:43:10 crc kubenswrapper[5016]: I1011 08:43:10.283842 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="e40bb01c-eacd-42b4-951d-ddee56bd532d" containerName="extract-content"
Oct 11 08:43:10 crc kubenswrapper[5016]: E1011 08:43:10.283866 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e40bb01c-eacd-42b4-951d-ddee56bd532d" containerName="extract-utilities"
Oct 11 08:43:10 crc kubenswrapper[5016]: I1011 08:43:10.283876 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="e40bb01c-eacd-42b4-951d-ddee56bd532d" containerName="extract-utilities"
Oct 11 08:43:10 crc kubenswrapper[5016]: I1011 08:43:10.284102 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="e40bb01c-eacd-42b4-951d-ddee56bd532d" containerName="registry-server"
Oct 11 08:43:10 crc kubenswrapper[5016]: I1011 08:43:10.285764 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cjpxs"
Oct 11 08:43:10 crc kubenswrapper[5016]: I1011 08:43:10.295370 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cjpxs"]
Oct 11 08:43:10 crc kubenswrapper[5016]: I1011 08:43:10.348070 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddaa8711-ce19-4019-9ef3-62a97c23f211-utilities\") pod \"certified-operators-cjpxs\" (UID: \"ddaa8711-ce19-4019-9ef3-62a97c23f211\") " pod="openshift-marketplace/certified-operators-cjpxs"
Oct 11 08:43:10 crc kubenswrapper[5016]: I1011 08:43:10.348133 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddaa8711-ce19-4019-9ef3-62a97c23f211-catalog-content\") pod \"certified-operators-cjpxs\" (UID: \"ddaa8711-ce19-4019-9ef3-62a97c23f211\") " pod="openshift-marketplace/certified-operators-cjpxs"
Oct 11 08:43:10 crc kubenswrapper[5016]: I1011 08:43:10.348220 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf466\" (UniqueName: \"kubernetes.io/projected/ddaa8711-ce19-4019-9ef3-62a97c23f211-kube-api-access-vf466\") pod \"certified-operators-cjpxs\" (UID: \"ddaa8711-ce19-4019-9ef3-62a97c23f211\") " pod="openshift-marketplace/certified-operators-cjpxs"
Oct 11 08:43:10 crc kubenswrapper[5016]: I1011 08:43:10.450423 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddaa8711-ce19-4019-9ef3-62a97c23f211-utilities\") pod \"certified-operators-cjpxs\" (UID: \"ddaa8711-ce19-4019-9ef3-62a97c23f211\") " pod="openshift-marketplace/certified-operators-cjpxs"
Oct 11 08:43:10 crc kubenswrapper[5016]: I1011 08:43:10.450535 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddaa8711-ce19-4019-9ef3-62a97c23f211-catalog-content\") pod \"certified-operators-cjpxs\" (UID: \"ddaa8711-ce19-4019-9ef3-62a97c23f211\") " pod="openshift-marketplace/certified-operators-cjpxs"
Oct 11 08:43:10 crc kubenswrapper[5016]: I1011 08:43:10.450633 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf466\" (UniqueName: \"kubernetes.io/projected/ddaa8711-ce19-4019-9ef3-62a97c23f211-kube-api-access-vf466\") pod \"certified-operators-cjpxs\" (UID: \"ddaa8711-ce19-4019-9ef3-62a97c23f211\") " pod="openshift-marketplace/certified-operators-cjpxs"
Oct 11 08:43:10 crc kubenswrapper[5016]: I1011 08:43:10.451099 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddaa8711-ce19-4019-9ef3-62a97c23f211-utilities\") pod \"certified-operators-cjpxs\" (UID: \"ddaa8711-ce19-4019-9ef3-62a97c23f211\") " pod="openshift-marketplace/certified-operators-cjpxs"
Oct 11 08:43:10 crc kubenswrapper[5016]: I1011 08:43:10.453530 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddaa8711-ce19-4019-9ef3-62a97c23f211-catalog-content\") pod \"certified-operators-cjpxs\" (UID: \"ddaa8711-ce19-4019-9ef3-62a97c23f211\") " pod="openshift-marketplace/certified-operators-cjpxs"
Oct 11 08:43:10 crc kubenswrapper[5016]: I1011 08:43:10.606948 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf466\" (UniqueName: \"kubernetes.io/projected/ddaa8711-ce19-4019-9ef3-62a97c23f211-kube-api-access-vf466\") pod \"certified-operators-cjpxs\" (UID: \"ddaa8711-ce19-4019-9ef3-62a97c23f211\") " pod="openshift-marketplace/certified-operators-cjpxs"
Oct 11 08:43:10 crc kubenswrapper[5016]: I1011 08:43:10.662520 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cjpxs"
Oct 11 08:43:11 crc kubenswrapper[5016]: I1011 08:43:11.185425 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cjpxs"]
Oct 11 08:43:11 crc kubenswrapper[5016]: I1011 08:43:11.677457 5016 generic.go:334] "Generic (PLEG): container finished" podID="ddaa8711-ce19-4019-9ef3-62a97c23f211" containerID="934778d7b82920fd247eece5a092b144f226ddcf6b76fa414725e27ac526860d" exitCode=0
Oct 11 08:43:11 crc kubenswrapper[5016]: I1011 08:43:11.677514 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjpxs" event={"ID":"ddaa8711-ce19-4019-9ef3-62a97c23f211","Type":"ContainerDied","Data":"934778d7b82920fd247eece5a092b144f226ddcf6b76fa414725e27ac526860d"}
Oct 11 08:43:11 crc kubenswrapper[5016]: I1011 08:43:11.677853 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjpxs" event={"ID":"ddaa8711-ce19-4019-9ef3-62a97c23f211","Type":"ContainerStarted","Data":"69fdd1f561d568ee9deaf378bac9d5652991caf5e11d0d0f963985c9e07d14a9"}
Oct 11 08:43:13 crc kubenswrapper[5016]: I1011 08:43:13.701249 5016 generic.go:334] "Generic (PLEG): container finished" podID="ddaa8711-ce19-4019-9ef3-62a97c23f211" containerID="83b2132dc87718418ffc4afdf1ba22e4eed9dbade12e755b5c3f72fe7669ca8c" exitCode=0
Oct 11 08:43:13 crc kubenswrapper[5016]: I1011 08:43:13.701385 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjpxs" event={"ID":"ddaa8711-ce19-4019-9ef3-62a97c23f211","Type":"ContainerDied","Data":"83b2132dc87718418ffc4afdf1ba22e4eed9dbade12e755b5c3f72fe7669ca8c"}
Oct 11 08:43:15 crc kubenswrapper[5016]: I1011 08:43:15.725883 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjpxs" event={"ID":"ddaa8711-ce19-4019-9ef3-62a97c23f211","Type":"ContainerStarted","Data":"b98dd7420838d0e1aa32c9bffb03f98b6abb50c763295d1d7452295a28e4dc0b"}
Oct 11 08:43:17 crc kubenswrapper[5016]: I1011 08:43:17.031749 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cjpxs" podStartSLOduration=4.200847305 podStartE2EDuration="7.03162001s" podCreationTimestamp="2025-10-11 08:43:10 +0000 UTC" firstStartedPulling="2025-10-11 08:43:11.680688383 +0000 UTC m=+3779.581144329" lastFinishedPulling="2025-10-11 08:43:14.511461048 +0000 UTC m=+3782.411917034" observedRunningTime="2025-10-11 08:43:15.755783584 +0000 UTC m=+3783.656239550" watchObservedRunningTime="2025-10-11 08:43:17.03162001 +0000 UTC m=+3784.932075966"
Oct 11 08:43:17 crc kubenswrapper[5016]: I1011 08:43:17.045167 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-8c7a-account-create-2r7zt"]
Oct 11 08:43:17 crc kubenswrapper[5016]: I1011 08:43:17.056344 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-8c7a-account-create-2r7zt"]
Oct 11 08:43:17 crc kubenswrapper[5016]: I1011 08:43:17.155947 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84f8322f-0668-4ed4-bacd-3b7b236fa51d" path="/var/lib/kubelet/pods/84f8322f-0668-4ed4-bacd-3b7b236fa51d/volumes"
pod volumes dir" podUID="84f8322f-0668-4ed4-bacd-3b7b236fa51d" path="/var/lib/kubelet/pods/84f8322f-0668-4ed4-bacd-3b7b236fa51d/volumes" Oct 11 08:43:20 crc kubenswrapper[5016]: I1011 08:43:20.662942 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cjpxs" Oct 11 08:43:20 crc kubenswrapper[5016]: I1011 08:43:20.663059 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cjpxs" Oct 11 08:43:21 crc kubenswrapper[5016]: I1011 08:43:21.133783 5016 scope.go:117] "RemoveContainer" containerID="0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e" Oct 11 08:43:21 crc kubenswrapper[5016]: E1011 08:43:21.135167 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:43:21 crc kubenswrapper[5016]: I1011 08:43:21.478392 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cjpxs" Oct 11 08:43:21 crc kubenswrapper[5016]: I1011 08:43:21.534779 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cjpxs" Oct 11 08:43:21 crc kubenswrapper[5016]: I1011 08:43:21.719483 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cjpxs"] Oct 11 08:43:22 crc kubenswrapper[5016]: I1011 08:43:22.795894 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cjpxs" podUID="ddaa8711-ce19-4019-9ef3-62a97c23f211" containerName="registry-server" containerID="cri-o://b98dd7420838d0e1aa32c9bffb03f98b6abb50c763295d1d7452295a28e4dc0b" gracePeriod=2 Oct 11 08:43:23 crc kubenswrapper[5016]: I1011 08:43:23.811980 5016 generic.go:334] "Generic (PLEG): container finished" podID="ddaa8711-ce19-4019-9ef3-62a97c23f211" containerID="b98dd7420838d0e1aa32c9bffb03f98b6abb50c763295d1d7452295a28e4dc0b" exitCode=0 Oct 11 08:43:23 crc kubenswrapper[5016]: I1011 08:43:23.812074 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjpxs" event={"ID":"ddaa8711-ce19-4019-9ef3-62a97c23f211","Type":"ContainerDied","Data":"b98dd7420838d0e1aa32c9bffb03f98b6abb50c763295d1d7452295a28e4dc0b"} Oct 11 08:43:24 crc kubenswrapper[5016]: I1011 08:43:24.030816 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cjpxs" Oct 11 08:43:24 crc kubenswrapper[5016]: I1011 08:43:24.077605 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddaa8711-ce19-4019-9ef3-62a97c23f211-utilities\") pod \"ddaa8711-ce19-4019-9ef3-62a97c23f211\" (UID: \"ddaa8711-ce19-4019-9ef3-62a97c23f211\") " Oct 11 08:43:24 crc kubenswrapper[5016]: I1011 08:43:24.077811 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vf466\" (UniqueName: \"kubernetes.io/projected/ddaa8711-ce19-4019-9ef3-62a97c23f211-kube-api-access-vf466\") pod \"ddaa8711-ce19-4019-9ef3-62a97c23f211\" (UID: \"ddaa8711-ce19-4019-9ef3-62a97c23f211\") " Oct 11 08:43:24 crc kubenswrapper[5016]: I1011 08:43:24.077898 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddaa8711-ce19-4019-9ef3-62a97c23f211-catalog-content\") pod \"ddaa8711-ce19-4019-9ef3-62a97c23f211\" (UID: \"ddaa8711-ce19-4019-9ef3-62a97c23f211\") " Oct 11 08:43:24 crc kubenswrapper[5016]: I1011 08:43:24.080257 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddaa8711-ce19-4019-9ef3-62a97c23f211-utilities" (OuterVolumeSpecName: "utilities") pod "ddaa8711-ce19-4019-9ef3-62a97c23f211" (UID: "ddaa8711-ce19-4019-9ef3-62a97c23f211"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:43:24 crc kubenswrapper[5016]: I1011 08:43:24.093081 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddaa8711-ce19-4019-9ef3-62a97c23f211-kube-api-access-vf466" (OuterVolumeSpecName: "kube-api-access-vf466") pod "ddaa8711-ce19-4019-9ef3-62a97c23f211" (UID: "ddaa8711-ce19-4019-9ef3-62a97c23f211"). InnerVolumeSpecName "kube-api-access-vf466". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:43:24 crc kubenswrapper[5016]: I1011 08:43:24.135561 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddaa8711-ce19-4019-9ef3-62a97c23f211-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ddaa8711-ce19-4019-9ef3-62a97c23f211" (UID: "ddaa8711-ce19-4019-9ef3-62a97c23f211"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:43:24 crc kubenswrapper[5016]: I1011 08:43:24.179706 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vf466\" (UniqueName: \"kubernetes.io/projected/ddaa8711-ce19-4019-9ef3-62a97c23f211-kube-api-access-vf466\") on node \"crc\" DevicePath \"\"" Oct 11 08:43:24 crc kubenswrapper[5016]: I1011 08:43:24.180008 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddaa8711-ce19-4019-9ef3-62a97c23f211-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 08:43:24 crc kubenswrapper[5016]: I1011 08:43:24.180021 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddaa8711-ce19-4019-9ef3-62a97c23f211-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 08:43:24 crc kubenswrapper[5016]: I1011 08:43:24.823561 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjpxs" event={"ID":"ddaa8711-ce19-4019-9ef3-62a97c23f211","Type":"ContainerDied","Data":"69fdd1f561d568ee9deaf378bac9d5652991caf5e11d0d0f963985c9e07d14a9"} Oct 11 08:43:24 crc kubenswrapper[5016]: I1011 08:43:24.823625 5016 scope.go:117] "RemoveContainer" containerID="b98dd7420838d0e1aa32c9bffb03f98b6abb50c763295d1d7452295a28e4dc0b" Oct 11 08:43:24 crc kubenswrapper[5016]: I1011 08:43:24.823789 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cjpxs" Oct 11 08:43:24 crc kubenswrapper[5016]: I1011 08:43:24.856837 5016 scope.go:117] "RemoveContainer" containerID="83b2132dc87718418ffc4afdf1ba22e4eed9dbade12e755b5c3f72fe7669ca8c" Oct 11 08:43:24 crc kubenswrapper[5016]: I1011 08:43:24.868363 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cjpxs"] Oct 11 08:43:24 crc kubenswrapper[5016]: I1011 08:43:24.875367 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cjpxs"] Oct 11 08:43:24 crc kubenswrapper[5016]: I1011 08:43:24.906330 5016 scope.go:117] "RemoveContainer" containerID="934778d7b82920fd247eece5a092b144f226ddcf6b76fa414725e27ac526860d" Oct 11 08:43:25 crc kubenswrapper[5016]: I1011 08:43:25.144298 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddaa8711-ce19-4019-9ef3-62a97c23f211" path="/var/lib/kubelet/pods/ddaa8711-ce19-4019-9ef3-62a97c23f211/volumes" Oct 11 08:43:33 crc kubenswrapper[5016]: I1011 08:43:33.151493 5016 scope.go:117] "RemoveContainer" containerID="0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e" Oct 11 08:43:33 crc kubenswrapper[5016]: E1011 08:43:33.154572 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:43:36 crc kubenswrapper[5016]: I1011 08:43:36.078972 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-gbkrg"] Oct 11 08:43:36 crc kubenswrapper[5016]: I1011 08:43:36.088241 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-gbkrg"] Oct 11 08:43:37 crc kubenswrapper[5016]: I1011 
Oct 11 08:43:41 crc kubenswrapper[5016]: I1011 08:43:41.981397 5016 scope.go:117] "RemoveContainer" containerID="961393cdd1fa5ef442f33181139b0125a4b39705a5f9f109adf2341260a0336e"
Oct 11 08:43:42 crc kubenswrapper[5016]: I1011 08:43:42.022946 5016 scope.go:117] "RemoveContainer" containerID="655a35244433d6caae1d0cdfcf892ddae188023f6a35b1ad4f0620bb971c89c8"
Oct 11 08:43:42 crc kubenswrapper[5016]: I1011 08:43:42.058534 5016 scope.go:117] "RemoveContainer" containerID="6bc1737711c9a83d75d3474926110f4170639840db91eb7179245e4d6945e50d"
Oct 11 08:43:44 crc kubenswrapper[5016]: I1011 08:43:44.133445 5016 scope.go:117] "RemoveContainer" containerID="0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e"
Oct 11 08:43:44 crc kubenswrapper[5016]: E1011 08:43:44.134523 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:43:56 crc kubenswrapper[5016]: I1011 08:43:56.133110 5016 scope.go:117] "RemoveContainer" containerID="0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e"
Oct 11 08:43:56 crc kubenswrapper[5016]: E1011 08:43:56.133950 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:44:08 crc kubenswrapper[5016]: I1011 08:44:08.762006 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xjlj6"]
Oct 11 08:44:08 crc kubenswrapper[5016]: E1011 08:44:08.762957 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddaa8711-ce19-4019-9ef3-62a97c23f211" containerName="extract-utilities"
Oct 11 08:44:08 crc kubenswrapper[5016]: I1011 08:44:08.762969 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddaa8711-ce19-4019-9ef3-62a97c23f211" containerName="extract-utilities"
Oct 11 08:44:08 crc kubenswrapper[5016]: E1011 08:44:08.762979 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddaa8711-ce19-4019-9ef3-62a97c23f211" containerName="extract-content"
Oct 11 08:44:08 crc kubenswrapper[5016]: I1011 08:44:08.762985 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddaa8711-ce19-4019-9ef3-62a97c23f211" containerName="extract-content"
Oct 11 08:44:08 crc kubenswrapper[5016]: E1011 08:44:08.763019 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddaa8711-ce19-4019-9ef3-62a97c23f211" containerName="registry-server"
Oct 11 08:44:08 crc kubenswrapper[5016]: I1011 08:44:08.763025 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddaa8711-ce19-4019-9ef3-62a97c23f211" containerName="registry-server"
Oct 11 08:44:08 crc kubenswrapper[5016]: I1011 08:44:08.763218 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddaa8711-ce19-4019-9ef3-62a97c23f211" containerName="registry-server"
Oct 11 08:44:08 crc kubenswrapper[5016]: I1011 08:44:08.764472 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xjlj6"
Oct 11 08:44:08 crc kubenswrapper[5016]: I1011 08:44:08.778764 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xjlj6"]
Oct 11 08:44:08 crc kubenswrapper[5016]: I1011 08:44:08.811082 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c0dcabc-a6d9-4e86-85d3-f62328e2e671-utilities\") pod \"redhat-operators-xjlj6\" (UID: \"4c0dcabc-a6d9-4e86-85d3-f62328e2e671\") " pod="openshift-marketplace/redhat-operators-xjlj6"
Oct 11 08:44:08 crc kubenswrapper[5016]: I1011 08:44:08.811176 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l49q\" (UniqueName: \"kubernetes.io/projected/4c0dcabc-a6d9-4e86-85d3-f62328e2e671-kube-api-access-6l49q\") pod \"redhat-operators-xjlj6\" (UID: \"4c0dcabc-a6d9-4e86-85d3-f62328e2e671\") " pod="openshift-marketplace/redhat-operators-xjlj6"
Oct 11 08:44:08 crc kubenswrapper[5016]: I1011 08:44:08.811200 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c0dcabc-a6d9-4e86-85d3-f62328e2e671-catalog-content\") pod \"redhat-operators-xjlj6\" (UID: \"4c0dcabc-a6d9-4e86-85d3-f62328e2e671\") " pod="openshift-marketplace/redhat-operators-xjlj6"
Oct 11 08:44:08 crc kubenswrapper[5016]: I1011 08:44:08.914218 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c0dcabc-a6d9-4e86-85d3-f62328e2e671-utilities\") pod \"redhat-operators-xjlj6\" (UID: \"4c0dcabc-a6d9-4e86-85d3-f62328e2e671\") " pod="openshift-marketplace/redhat-operators-xjlj6"
Oct 11 08:44:08 crc kubenswrapper[5016]: I1011 08:44:08.914358 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l49q\" (UniqueName: \"kubernetes.io/projected/4c0dcabc-a6d9-4e86-85d3-f62328e2e671-kube-api-access-6l49q\") pod \"redhat-operators-xjlj6\" (UID: \"4c0dcabc-a6d9-4e86-85d3-f62328e2e671\") " pod="openshift-marketplace/redhat-operators-xjlj6"
Oct 11 08:44:08 crc kubenswrapper[5016]: I1011 08:44:08.914394 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c0dcabc-a6d9-4e86-85d3-f62328e2e671-catalog-content\") pod \"redhat-operators-xjlj6\" (UID: \"4c0dcabc-a6d9-4e86-85d3-f62328e2e671\") " pod="openshift-marketplace/redhat-operators-xjlj6"
Oct 11 08:44:08 crc kubenswrapper[5016]: I1011 08:44:08.915126 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c0dcabc-a6d9-4e86-85d3-f62328e2e671-catalog-content\") pod \"redhat-operators-xjlj6\" (UID: \"4c0dcabc-a6d9-4e86-85d3-f62328e2e671\") " pod="openshift-marketplace/redhat-operators-xjlj6"
Oct 11 08:44:08 crc kubenswrapper[5016]: I1011 08:44:08.915514 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c0dcabc-a6d9-4e86-85d3-f62328e2e671-utilities\") pod \"redhat-operators-xjlj6\" (UID: \"4c0dcabc-a6d9-4e86-85d3-f62328e2e671\") " pod="openshift-marketplace/redhat-operators-xjlj6"
Oct 11 08:44:08 crc kubenswrapper[5016]: I1011 08:44:08.940400 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l49q\" (UniqueName: \"kubernetes.io/projected/4c0dcabc-a6d9-4e86-85d3-f62328e2e671-kube-api-access-6l49q\") pod \"redhat-operators-xjlj6\" (UID: \"4c0dcabc-a6d9-4e86-85d3-f62328e2e671\") " pod="openshift-marketplace/redhat-operators-xjlj6"
Oct 11 08:44:09 crc kubenswrapper[5016]: I1011 08:44:09.086617 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xjlj6"
Oct 11 08:44:09 crc kubenswrapper[5016]: I1011 08:44:09.526998 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xjlj6"]
Oct 11 08:44:10 crc kubenswrapper[5016]: I1011 08:44:10.133575 5016 scope.go:117] "RemoveContainer" containerID="0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e"
Oct 11 08:44:10 crc kubenswrapper[5016]: E1011 08:44:10.134379 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:44:10 crc kubenswrapper[5016]: I1011 08:44:10.256867 5016 generic.go:334] "Generic (PLEG): container finished" podID="4c0dcabc-a6d9-4e86-85d3-f62328e2e671" containerID="5e746408f9f425bb65d92ffd6fc4fbf38226a6266dbe194376e5e426b49caa23" exitCode=0
Oct 11 08:44:10 crc kubenswrapper[5016]: I1011 08:44:10.256909 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xjlj6" event={"ID":"4c0dcabc-a6d9-4e86-85d3-f62328e2e671","Type":"ContainerDied","Data":"5e746408f9f425bb65d92ffd6fc4fbf38226a6266dbe194376e5e426b49caa23"}
Oct 11 08:44:10 crc kubenswrapper[5016]: I1011 08:44:10.256938 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xjlj6" event={"ID":"4c0dcabc-a6d9-4e86-85d3-f62328e2e671","Type":"ContainerStarted","Data":"c22390176d3aebed4691a7564fb4be2fe51627b76056eb4515c48cec8dc5fa5a"}
Oct 11 08:44:11 crc kubenswrapper[5016]: I1011 08:44:11.269066 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xjlj6" event={"ID":"4c0dcabc-a6d9-4e86-85d3-f62328e2e671","Type":"ContainerStarted","Data":"632afac4fd37d4eb2d97389ea5f72eb3f742aaebd5c7de7144054d1cbe9f231a"}
Oct 11 08:44:13 crc kubenswrapper[5016]: I1011 08:44:13.288793 5016 generic.go:334] "Generic (PLEG): container finished" podID="4c0dcabc-a6d9-4e86-85d3-f62328e2e671" containerID="632afac4fd37d4eb2d97389ea5f72eb3f742aaebd5c7de7144054d1cbe9f231a" exitCode=0
Oct 11 08:44:13 crc kubenswrapper[5016]: I1011 08:44:13.289331 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xjlj6" event={"ID":"4c0dcabc-a6d9-4e86-85d3-f62328e2e671","Type":"ContainerDied","Data":"632afac4fd37d4eb2d97389ea5f72eb3f742aaebd5c7de7144054d1cbe9f231a"}
Oct 11 08:44:15 crc kubenswrapper[5016]: I1011 08:44:15.309858 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xjlj6" event={"ID":"4c0dcabc-a6d9-4e86-85d3-f62328e2e671","Type":"ContainerStarted","Data":"a788b2df1fb4b2f625a39307c8ba4db88f7c5f716b95db509c26735c68f8068c"}
event={"ID":"4c0dcabc-a6d9-4e86-85d3-f62328e2e671","Type":"ContainerStarted","Data":"a788b2df1fb4b2f625a39307c8ba4db88f7c5f716b95db509c26735c68f8068c"} Oct 11 08:44:15 crc kubenswrapper[5016]: I1011 08:44:15.333553 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xjlj6" podStartSLOduration=3.384106559 podStartE2EDuration="7.33353505s" podCreationTimestamp="2025-10-11 08:44:08 +0000 UTC" firstStartedPulling="2025-10-11 08:44:10.258858628 +0000 UTC m=+3838.159314574" lastFinishedPulling="2025-10-11 08:44:14.208287129 +0000 UTC m=+3842.108743065" observedRunningTime="2025-10-11 08:44:15.327548851 +0000 UTC m=+3843.228004807" watchObservedRunningTime="2025-10-11 08:44:15.33353505 +0000 UTC m=+3843.233990996" Oct 11 08:44:19 crc kubenswrapper[5016]: I1011 08:44:19.087281 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xjlj6" Oct 11 08:44:19 crc kubenswrapper[5016]: I1011 08:44:19.088045 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xjlj6" Oct 11 08:44:20 crc kubenswrapper[5016]: I1011 08:44:20.160275 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xjlj6" podUID="4c0dcabc-a6d9-4e86-85d3-f62328e2e671" containerName="registry-server" probeResult="failure" output=< Oct 11 08:44:20 crc kubenswrapper[5016]: timeout: failed to connect service ":50051" within 1s Oct 11 08:44:20 crc kubenswrapper[5016]: > Oct 11 08:44:22 crc kubenswrapper[5016]: I1011 08:44:22.133004 5016 scope.go:117] "RemoveContainer" containerID="0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e" Oct 11 08:44:22 crc kubenswrapper[5016]: E1011 08:44:22.133320 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:44:29 crc kubenswrapper[5016]: I1011 08:44:29.160843 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xjlj6" Oct 11 08:44:29 crc kubenswrapper[5016]: I1011 08:44:29.226935 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xjlj6" Oct 11 08:44:29 crc kubenswrapper[5016]: I1011 08:44:29.419209 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xjlj6"] Oct 11 08:44:30 crc kubenswrapper[5016]: I1011 08:44:30.464882 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xjlj6" podUID="4c0dcabc-a6d9-4e86-85d3-f62328e2e671" containerName="registry-server" containerID="cri-o://a788b2df1fb4b2f625a39307c8ba4db88f7c5f716b95db509c26735c68f8068c" gracePeriod=2 Oct 11 08:44:31 crc kubenswrapper[5016]: I1011 08:44:31.039174 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xjlj6" Oct 11 08:44:31 crc kubenswrapper[5016]: I1011 08:44:31.113149 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6l49q\" (UniqueName: \"kubernetes.io/projected/4c0dcabc-a6d9-4e86-85d3-f62328e2e671-kube-api-access-6l49q\") pod \"4c0dcabc-a6d9-4e86-85d3-f62328e2e671\" (UID: \"4c0dcabc-a6d9-4e86-85d3-f62328e2e671\") " Oct 11 08:44:31 crc kubenswrapper[5016]: I1011 08:44:31.113241 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c0dcabc-a6d9-4e86-85d3-f62328e2e671-catalog-content\") pod \"4c0dcabc-a6d9-4e86-85d3-f62328e2e671\" (UID: \"4c0dcabc-a6d9-4e86-85d3-f62328e2e671\") " Oct 11 08:44:31 crc kubenswrapper[5016]: I1011 08:44:31.113611 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c0dcabc-a6d9-4e86-85d3-f62328e2e671-utilities\") pod \"4c0dcabc-a6d9-4e86-85d3-f62328e2e671\" (UID: \"4c0dcabc-a6d9-4e86-85d3-f62328e2e671\") " Oct 11 08:44:31 crc kubenswrapper[5016]: I1011 08:44:31.120568 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c0dcabc-a6d9-4e86-85d3-f62328e2e671-utilities" (OuterVolumeSpecName: "utilities") pod "4c0dcabc-a6d9-4e86-85d3-f62328e2e671" (UID: "4c0dcabc-a6d9-4e86-85d3-f62328e2e671"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:44:31 crc kubenswrapper[5016]: I1011 08:44:31.142336 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c0dcabc-a6d9-4e86-85d3-f62328e2e671-kube-api-access-6l49q" (OuterVolumeSpecName: "kube-api-access-6l49q") pod "4c0dcabc-a6d9-4e86-85d3-f62328e2e671" (UID: "4c0dcabc-a6d9-4e86-85d3-f62328e2e671"). InnerVolumeSpecName "kube-api-access-6l49q". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:44:31 crc kubenswrapper[5016]: I1011 08:44:31.216351 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c0dcabc-a6d9-4e86-85d3-f62328e2e671-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 08:44:31 crc kubenswrapper[5016]: I1011 08:44:31.216383 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6l49q\" (UniqueName: \"kubernetes.io/projected/4c0dcabc-a6d9-4e86-85d3-f62328e2e671-kube-api-access-6l49q\") on node \"crc\" DevicePath \"\"" Oct 11 08:44:31 crc kubenswrapper[5016]: I1011 08:44:31.218032 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c0dcabc-a6d9-4e86-85d3-f62328e2e671-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c0dcabc-a6d9-4e86-85d3-f62328e2e671" (UID: "4c0dcabc-a6d9-4e86-85d3-f62328e2e671"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:44:31 crc kubenswrapper[5016]: I1011 08:44:31.318472 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c0dcabc-a6d9-4e86-85d3-f62328e2e671-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 08:44:31 crc kubenswrapper[5016]: I1011 08:44:31.477455 5016 generic.go:334] "Generic (PLEG): container finished" podID="4c0dcabc-a6d9-4e86-85d3-f62328e2e671" containerID="a788b2df1fb4b2f625a39307c8ba4db88f7c5f716b95db509c26735c68f8068c" exitCode=0 Oct 11 08:44:31 crc kubenswrapper[5016]: I1011 08:44:31.477515 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xjlj6" event={"ID":"4c0dcabc-a6d9-4e86-85d3-f62328e2e671","Type":"ContainerDied","Data":"a788b2df1fb4b2f625a39307c8ba4db88f7c5f716b95db509c26735c68f8068c"} Oct 11 08:44:31 crc kubenswrapper[5016]: I1011 08:44:31.477566 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xjlj6" event={"ID":"4c0dcabc-a6d9-4e86-85d3-f62328e2e671","Type":"ContainerDied","Data":"c22390176d3aebed4691a7564fb4be2fe51627b76056eb4515c48cec8dc5fa5a"} Oct 11 08:44:31 crc kubenswrapper[5016]: I1011 08:44:31.477570 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xjlj6" Oct 11 08:44:31 crc kubenswrapper[5016]: I1011 08:44:31.477594 5016 scope.go:117] "RemoveContainer" containerID="a788b2df1fb4b2f625a39307c8ba4db88f7c5f716b95db509c26735c68f8068c" Oct 11 08:44:31 crc kubenswrapper[5016]: I1011 08:44:31.518645 5016 scope.go:117] "RemoveContainer" containerID="632afac4fd37d4eb2d97389ea5f72eb3f742aaebd5c7de7144054d1cbe9f231a" Oct 11 08:44:31 crc kubenswrapper[5016]: I1011 08:44:31.520457 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xjlj6"] Oct 11 08:44:31 crc kubenswrapper[5016]: I1011 08:44:31.527692 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xjlj6"] Oct 11 08:44:31 crc kubenswrapper[5016]: I1011 08:44:31.546858 5016 scope.go:117] "RemoveContainer" containerID="5e746408f9f425bb65d92ffd6fc4fbf38226a6266dbe194376e5e426b49caa23" Oct 11 08:44:31 crc kubenswrapper[5016]: I1011 08:44:31.609141 5016 scope.go:117] "RemoveContainer" containerID="a788b2df1fb4b2f625a39307c8ba4db88f7c5f716b95db509c26735c68f8068c" Oct 11 08:44:31 crc kubenswrapper[5016]: E1011 08:44:31.610065 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a788b2df1fb4b2f625a39307c8ba4db88f7c5f716b95db509c26735c68f8068c\": container with ID starting with a788b2df1fb4b2f625a39307c8ba4db88f7c5f716b95db509c26735c68f8068c not found: ID does not exist" containerID="a788b2df1fb4b2f625a39307c8ba4db88f7c5f716b95db509c26735c68f8068c" Oct 11 08:44:31 crc kubenswrapper[5016]: I1011 08:44:31.610099 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a788b2df1fb4b2f625a39307c8ba4db88f7c5f716b95db509c26735c68f8068c"} err="failed to get container status \"a788b2df1fb4b2f625a39307c8ba4db88f7c5f716b95db509c26735c68f8068c\": rpc error: code = NotFound desc = could not find container \"a788b2df1fb4b2f625a39307c8ba4db88f7c5f716b95db509c26735c68f8068c\": container with ID starting with a788b2df1fb4b2f625a39307c8ba4db88f7c5f716b95db509c26735c68f8068c not found: ID does not exist" Oct 11 08:44:31 crc 
kubenswrapper[5016]: I1011 08:44:31.610139 5016 scope.go:117] "RemoveContainer" containerID="632afac4fd37d4eb2d97389ea5f72eb3f742aaebd5c7de7144054d1cbe9f231a" Oct 11 08:44:31 crc kubenswrapper[5016]: E1011 08:44:31.610511 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"632afac4fd37d4eb2d97389ea5f72eb3f742aaebd5c7de7144054d1cbe9f231a\": container with ID starting with 632afac4fd37d4eb2d97389ea5f72eb3f742aaebd5c7de7144054d1cbe9f231a not found: ID does not exist" containerID="632afac4fd37d4eb2d97389ea5f72eb3f742aaebd5c7de7144054d1cbe9f231a" Oct 11 08:44:31 crc kubenswrapper[5016]: I1011 08:44:31.610558 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"632afac4fd37d4eb2d97389ea5f72eb3f742aaebd5c7de7144054d1cbe9f231a"} err="failed to get container status \"632afac4fd37d4eb2d97389ea5f72eb3f742aaebd5c7de7144054d1cbe9f231a\": rpc error: code = NotFound desc = could not find container \"632afac4fd37d4eb2d97389ea5f72eb3f742aaebd5c7de7144054d1cbe9f231a\": container with ID starting with 632afac4fd37d4eb2d97389ea5f72eb3f742aaebd5c7de7144054d1cbe9f231a not found: ID does not exist" Oct 11 08:44:31 crc kubenswrapper[5016]: I1011 08:44:31.610573 5016 scope.go:117] "RemoveContainer" containerID="5e746408f9f425bb65d92ffd6fc4fbf38226a6266dbe194376e5e426b49caa23" Oct 11 08:44:31 crc kubenswrapper[5016]: E1011 08:44:31.610877 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e746408f9f425bb65d92ffd6fc4fbf38226a6266dbe194376e5e426b49caa23\": container with ID starting with 5e746408f9f425bb65d92ffd6fc4fbf38226a6266dbe194376e5e426b49caa23 not found: ID does not exist" containerID="5e746408f9f425bb65d92ffd6fc4fbf38226a6266dbe194376e5e426b49caa23" Oct 11 08:44:31 crc kubenswrapper[5016]: I1011 08:44:31.610918 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e746408f9f425bb65d92ffd6fc4fbf38226a6266dbe194376e5e426b49caa23"} err="failed to get container status \"5e746408f9f425bb65d92ffd6fc4fbf38226a6266dbe194376e5e426b49caa23\": rpc error: code = NotFound desc = could not find container \"5e746408f9f425bb65d92ffd6fc4fbf38226a6266dbe194376e5e426b49caa23\": container with ID starting with 5e746408f9f425bb65d92ffd6fc4fbf38226a6266dbe194376e5e426b49caa23 not found: ID does not exist" Oct 11 08:44:33 crc kubenswrapper[5016]: I1011 08:44:33.142170 5016 scope.go:117] "RemoveContainer" containerID="0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e" Oct 11 08:44:33 crc kubenswrapper[5016]: E1011 08:44:33.143000 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:44:33 crc kubenswrapper[5016]: I1011 08:44:33.144749 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c0dcabc-a6d9-4e86-85d3-f62328e2e671" path="/var/lib/kubelet/pods/4c0dcabc-a6d9-4e86-85d3-f62328e2e671/volumes" Oct 11 08:44:48 crc kubenswrapper[5016]: I1011 08:44:48.133668 5016 scope.go:117] "RemoveContainer" containerID="0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e" 
Oct 11 08:44:48 crc kubenswrapper[5016]: I1011 08:44:48.627426 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"29a16057090ebd432c8022638c5e84d7f6eff5fe06fd4cea789dacfdafcb7bd6"}
Oct 11 08:45:00 crc kubenswrapper[5016]: I1011 08:45:00.149813 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336205-28hjp"]
Oct 11 08:45:00 crc kubenswrapper[5016]: E1011 08:45:00.150934 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c0dcabc-a6d9-4e86-85d3-f62328e2e671" containerName="extract-content"
Oct 11 08:45:00 crc kubenswrapper[5016]: I1011 08:45:00.150948 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c0dcabc-a6d9-4e86-85d3-f62328e2e671" containerName="extract-content"
Oct 11 08:45:00 crc kubenswrapper[5016]: E1011 08:45:00.150963 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c0dcabc-a6d9-4e86-85d3-f62328e2e671" containerName="extract-utilities"
Oct 11 08:45:00 crc kubenswrapper[5016]: I1011 08:45:00.150969 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c0dcabc-a6d9-4e86-85d3-f62328e2e671" containerName="extract-utilities"
Oct 11 08:45:00 crc kubenswrapper[5016]: E1011 08:45:00.150981 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c0dcabc-a6d9-4e86-85d3-f62328e2e671" containerName="registry-server"
Oct 11 08:45:00 crc kubenswrapper[5016]: I1011 08:45:00.150986 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c0dcabc-a6d9-4e86-85d3-f62328e2e671" containerName="registry-server"
Oct 11 08:45:00 crc kubenswrapper[5016]: I1011 08:45:00.151206 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c0dcabc-a6d9-4e86-85d3-f62328e2e671" containerName="registry-server"
Oct 11 08:45:00 crc kubenswrapper[5016]: I1011 08:45:00.151952 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336205-28hjp"
Oct 11 08:45:00 crc kubenswrapper[5016]: I1011 08:45:00.154380 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Oct 11 08:45:00 crc kubenswrapper[5016]: I1011 08:45:00.154588 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Oct 11 08:45:00 crc kubenswrapper[5016]: I1011 08:45:00.160619 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336205-28hjp"]
Oct 11 08:45:00 crc kubenswrapper[5016]: I1011 08:45:00.186729 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6-secret-volume\") pod \"collect-profiles-29336205-28hjp\" (UID: \"97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336205-28hjp"
Oct 11 08:45:00 crc kubenswrapper[5016]: I1011 08:45:00.188047 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6-config-volume\") pod \"collect-profiles-29336205-28hjp\" (UID: \"97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336205-28hjp"
Oct 11 08:45:00 crc kubenswrapper[5016]: I1011 08:45:00.188382 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f484x\" (UniqueName: \"kubernetes.io/projected/97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6-kube-api-access-f484x\") pod \"collect-profiles-29336205-28hjp\" (UID: \"97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336205-28hjp"
Oct 11 08:45:00 crc kubenswrapper[5016]: I1011 08:45:00.290557 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6-config-volume\") pod \"collect-profiles-29336205-28hjp\" (UID: \"97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336205-28hjp"
Oct 11 08:45:00 crc kubenswrapper[5016]: I1011 08:45:00.290634 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f484x\" (UniqueName: \"kubernetes.io/projected/97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6-kube-api-access-f484x\") pod \"collect-profiles-29336205-28hjp\" (UID: \"97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336205-28hjp"
Oct 11 08:45:00 crc kubenswrapper[5016]: I1011 08:45:00.290806 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6-secret-volume\") pod \"collect-profiles-29336205-28hjp\" (UID: \"97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336205-28hjp"
Oct 11 08:45:00 crc kubenswrapper[5016]: I1011 08:45:00.291534 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6-config-volume\") pod \"collect-profiles-29336205-28hjp\" (UID: \"97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336205-28hjp"
Oct 11 08:45:00 crc kubenswrapper[5016]: I1011 08:45:00.298972 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6-secret-volume\") pod \"collect-profiles-29336205-28hjp\" (UID: \"97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336205-28hjp"
Oct 11 08:45:01 crc kubenswrapper[5016]: I1011 08:45:01.006752 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f484x\" (UniqueName: \"kubernetes.io/projected/97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6-kube-api-access-f484x\") pod \"collect-profiles-29336205-28hjp\" (UID: \"97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336205-28hjp"
Oct 11 08:45:01 crc kubenswrapper[5016]: I1011 08:45:01.078245 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336205-28hjp"
Oct 11 08:45:01 crc kubenswrapper[5016]: I1011 08:45:01.653210 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336205-28hjp"]
Oct 11 08:45:01 crc kubenswrapper[5016]: I1011 08:45:01.755572 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336205-28hjp" event={"ID":"97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6","Type":"ContainerStarted","Data":"f7f21cb0fe6004b8856d3685b71aa5d301095cc4dbddb25da559f3abc5ea1ead"}
Oct 11 08:45:02 crc kubenswrapper[5016]: I1011 08:45:02.766477 5016 generic.go:334] "Generic (PLEG): container finished" podID="97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6" containerID="74324286390ebf253f90849a95ec729f86ccae930142f1f62b6d9593cc5d5ab7" exitCode=0
Oct 11 08:45:02 crc kubenswrapper[5016]: I1011 08:45:02.766543 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336205-28hjp" event={"ID":"97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6","Type":"ContainerDied","Data":"74324286390ebf253f90849a95ec729f86ccae930142f1f62b6d9593cc5d5ab7"}
Oct 11 08:45:04 crc kubenswrapper[5016]: I1011 08:45:04.293569 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336205-28hjp"
Oct 11 08:45:04 crc kubenswrapper[5016]: I1011 08:45:04.373416 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f484x\" (UniqueName: \"kubernetes.io/projected/97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6-kube-api-access-f484x\") pod \"97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6\" (UID: \"97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6\") "
Oct 11 08:45:04 crc kubenswrapper[5016]: I1011 08:45:04.373626 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6-secret-volume\") pod \"97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6\" (UID: \"97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6\") "
Oct 11 08:45:04 crc kubenswrapper[5016]: I1011 08:45:04.373823 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6-config-volume\") pod \"97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6\" (UID: \"97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6\") "
Oct 11 08:45:04 crc kubenswrapper[5016]: I1011 08:45:04.374543 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6-config-volume" (OuterVolumeSpecName: "config-volume") pod "97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6" (UID: "97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 08:45:04 crc kubenswrapper[5016]: I1011 08:45:04.380984 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6" (UID: "97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 08:45:04 crc kubenswrapper[5016]: I1011 08:45:04.385812 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6-kube-api-access-f484x" (OuterVolumeSpecName: "kube-api-access-f484x") pod "97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6" (UID: "97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6"). InnerVolumeSpecName "kube-api-access-f484x". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:45:04 crc kubenswrapper[5016]: I1011 08:45:04.477051 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f484x\" (UniqueName: \"kubernetes.io/projected/97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6-kube-api-access-f484x\") on node \"crc\" DevicePath \"\"" Oct 11 08:45:04 crc kubenswrapper[5016]: I1011 08:45:04.477097 5016 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 11 08:45:04 crc kubenswrapper[5016]: I1011 08:45:04.477156 5016 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6-config-volume\") on node \"crc\" DevicePath \"\"" Oct 11 08:45:04 crc kubenswrapper[5016]: I1011 08:45:04.788055 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336205-28hjp" event={"ID":"97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6","Type":"ContainerDied","Data":"f7f21cb0fe6004b8856d3685b71aa5d301095cc4dbddb25da559f3abc5ea1ead"} Oct 11 08:45:04 crc kubenswrapper[5016]: I1011 08:45:04.788096 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7f21cb0fe6004b8856d3685b71aa5d301095cc4dbddb25da559f3abc5ea1ead" Oct 11 08:45:04 crc kubenswrapper[5016]: I1011 08:45:04.788126 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336205-28hjp" Oct 11 08:45:05 crc kubenswrapper[5016]: I1011 08:45:05.387965 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336160-9fjnx"] Oct 11 08:45:05 crc kubenswrapper[5016]: I1011 08:45:05.394777 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336160-9fjnx"] Oct 11 08:45:07 crc kubenswrapper[5016]: I1011 08:45:07.153830 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05d96a07-ce5d-47d7-aad4-30553dd060ad" path="/var/lib/kubelet/pods/05d96a07-ce5d-47d7-aad4-30553dd060ad/volumes" Oct 11 08:45:42 crc kubenswrapper[5016]: I1011 08:45:42.255239 5016 scope.go:117] "RemoveContainer" containerID="33cb1358fc7c65916c32a00259dc4c9fff7d10e89e2a1a3ff80cf94b877fa57a" Oct 11 08:47:07 crc kubenswrapper[5016]: I1011 08:47:07.122642 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 08:47:07 crc kubenswrapper[5016]: I1011 08:47:07.123381 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 08:47:37 crc kubenswrapper[5016]: I1011 08:47:37.122312 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Oct 11 08:47:37 crc kubenswrapper[5016]: I1011 08:47:37.122888 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 08:48:07 crc kubenswrapper[5016]: I1011 08:48:07.122945 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 08:48:07 crc kubenswrapper[5016]: I1011 08:48:07.123580 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 08:48:07 crc kubenswrapper[5016]: I1011 08:48:07.123619 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 08:48:07 crc kubenswrapper[5016]: I1011 08:48:07.124310 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"29a16057090ebd432c8022638c5e84d7f6eff5fe06fd4cea789dacfdafcb7bd6"} pod="openshift-machine-config-operator/machine-config-daemon-49bvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 11 08:48:07 crc kubenswrapper[5016]: I1011 08:48:07.124352 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" containerID="cri-o://29a16057090ebd432c8022638c5e84d7f6eff5fe06fd4cea789dacfdafcb7bd6" gracePeriod=600 Oct 11 08:48:07 crc kubenswrapper[5016]: I1011 08:48:07.640854 5016 generic.go:334] "Generic (PLEG): container finished" podID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerID="29a16057090ebd432c8022638c5e84d7f6eff5fe06fd4cea789dacfdafcb7bd6" exitCode=0 Oct 11 08:48:07 crc kubenswrapper[5016]: I1011 08:48:07.641168 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerDied","Data":"29a16057090ebd432c8022638c5e84d7f6eff5fe06fd4cea789dacfdafcb7bd6"} Oct 11 08:48:07 crc kubenswrapper[5016]: I1011 08:48:07.641193 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409"} Oct 11 08:48:07 crc kubenswrapper[5016]: I1011 08:48:07.641208 5016 scope.go:117] "RemoveContainer" containerID="0f577e79e38ff6c2eb5212a439ccbe69f2ec1833097741c61f7696413a530b6e" Oct 11 08:49:41 crc kubenswrapper[5016]: I1011 08:49:41.640460 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9gh4m"] Oct 11 08:49:41 crc kubenswrapper[5016]: E1011 08:49:41.641541 5016 cpu_manager.go:410] 
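NOTE: The liveness failures above land every 30 seconds (:47:07, :47:37, :48:07), and on the third consecutive failure kubelet marks the container unhealthy and kills it with gracePeriod=600. A probe spec consistent with those observations, expressed with the k8s.io/api types; the host, path and port come straight from the log, while the period and failure threshold are inferred from the timestamps, so treat them as assumptions:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        probe := corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Host: "127.0.0.1", // from the probe URL in the log
                    Path: "/health",
                    Port: intstr.FromInt(8798),
                },
            },
            PeriodSeconds:    30, // inferred: failures log at :07 and :37
            FailureThreshold: 3,  // inferred: the third failure triggers the kill
        }
        fmt.Printf("%+v\n", probe)
    }

The gracePeriod=600 in the kill message would correspond to the pod's terminationGracePeriodSeconds, again inferred from the log rather than read from the pod spec.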
"RemoveStaleState: removing container" podUID="97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6" containerName="collect-profiles" Oct 11 08:49:41 crc kubenswrapper[5016]: I1011 08:49:41.641554 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6" containerName="collect-profiles" Oct 11 08:49:41 crc kubenswrapper[5016]: I1011 08:49:41.641761 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6" containerName="collect-profiles" Oct 11 08:49:41 crc kubenswrapper[5016]: I1011 08:49:41.643037 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9gh4m" Oct 11 08:49:41 crc kubenswrapper[5016]: I1011 08:49:41.666272 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9gh4m"] Oct 11 08:49:41 crc kubenswrapper[5016]: I1011 08:49:41.772405 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f3489df-94f9-427e-9863-b1999935e2fc-utilities\") pod \"community-operators-9gh4m\" (UID: \"9f3489df-94f9-427e-9863-b1999935e2fc\") " pod="openshift-marketplace/community-operators-9gh4m" Oct 11 08:49:41 crc kubenswrapper[5016]: I1011 08:49:41.772477 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hss2c\" (UniqueName: \"kubernetes.io/projected/9f3489df-94f9-427e-9863-b1999935e2fc-kube-api-access-hss2c\") pod \"community-operators-9gh4m\" (UID: \"9f3489df-94f9-427e-9863-b1999935e2fc\") " pod="openshift-marketplace/community-operators-9gh4m" Oct 11 08:49:41 crc kubenswrapper[5016]: I1011 08:49:41.772689 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f3489df-94f9-427e-9863-b1999935e2fc-catalog-content\") pod \"community-operators-9gh4m\" (UID: \"9f3489df-94f9-427e-9863-b1999935e2fc\") " pod="openshift-marketplace/community-operators-9gh4m" Oct 11 08:49:41 crc kubenswrapper[5016]: I1011 08:49:41.874984 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f3489df-94f9-427e-9863-b1999935e2fc-catalog-content\") pod \"community-operators-9gh4m\" (UID: \"9f3489df-94f9-427e-9863-b1999935e2fc\") " pod="openshift-marketplace/community-operators-9gh4m" Oct 11 08:49:41 crc kubenswrapper[5016]: I1011 08:49:41.875053 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f3489df-94f9-427e-9863-b1999935e2fc-utilities\") pod \"community-operators-9gh4m\" (UID: \"9f3489df-94f9-427e-9863-b1999935e2fc\") " pod="openshift-marketplace/community-operators-9gh4m" Oct 11 08:49:41 crc kubenswrapper[5016]: I1011 08:49:41.875098 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hss2c\" (UniqueName: \"kubernetes.io/projected/9f3489df-94f9-427e-9863-b1999935e2fc-kube-api-access-hss2c\") pod \"community-operators-9gh4m\" (UID: \"9f3489df-94f9-427e-9863-b1999935e2fc\") " pod="openshift-marketplace/community-operators-9gh4m" Oct 11 08:49:41 crc kubenswrapper[5016]: I1011 08:49:41.876279 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9f3489df-94f9-427e-9863-b1999935e2fc-catalog-content\") pod \"community-operators-9gh4m\" (UID: \"9f3489df-94f9-427e-9863-b1999935e2fc\") " pod="openshift-marketplace/community-operators-9gh4m" Oct 11 08:49:41 crc kubenswrapper[5016]: I1011 08:49:41.876347 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f3489df-94f9-427e-9863-b1999935e2fc-utilities\") pod \"community-operators-9gh4m\" (UID: \"9f3489df-94f9-427e-9863-b1999935e2fc\") " pod="openshift-marketplace/community-operators-9gh4m" Oct 11 08:49:41 crc kubenswrapper[5016]: I1011 08:49:41.903737 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hss2c\" (UniqueName: \"kubernetes.io/projected/9f3489df-94f9-427e-9863-b1999935e2fc-kube-api-access-hss2c\") pod \"community-operators-9gh4m\" (UID: \"9f3489df-94f9-427e-9863-b1999935e2fc\") " pod="openshift-marketplace/community-operators-9gh4m" Oct 11 08:49:41 crc kubenswrapper[5016]: I1011 08:49:41.972074 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9gh4m" Oct 11 08:49:42 crc kubenswrapper[5016]: I1011 08:49:42.504751 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9gh4m"] Oct 11 08:49:42 crc kubenswrapper[5016]: I1011 08:49:42.688780 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gh4m" event={"ID":"9f3489df-94f9-427e-9863-b1999935e2fc","Type":"ContainerStarted","Data":"3592f99af9eb8c6f284437460e176c578a9db4801396b8ba0b3b9fdb31002a13"} Oct 11 08:49:43 crc kubenswrapper[5016]: I1011 08:49:43.702921 5016 generic.go:334] "Generic (PLEG): container finished" podID="9f3489df-94f9-427e-9863-b1999935e2fc" containerID="5a96f405eb68d4cee99cfa19b2ee9e5db3c7920487d3af30c8b9a66b3f0fcbd9" exitCode=0 Oct 11 08:49:43 crc kubenswrapper[5016]: I1011 08:49:43.703196 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gh4m" event={"ID":"9f3489df-94f9-427e-9863-b1999935e2fc","Type":"ContainerDied","Data":"5a96f405eb68d4cee99cfa19b2ee9e5db3c7920487d3af30c8b9a66b3f0fcbd9"} Oct 11 08:49:43 crc kubenswrapper[5016]: I1011 08:49:43.707173 5016 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 08:49:44 crc kubenswrapper[5016]: I1011 08:49:44.713650 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gh4m" event={"ID":"9f3489df-94f9-427e-9863-b1999935e2fc","Type":"ContainerStarted","Data":"cc36ffdbd2f2addef1489e563c21f588ea27ec2519219dcc6cbed27cf51f3bb2"} Oct 11 08:49:45 crc kubenswrapper[5016]: I1011 08:49:45.730243 5016 generic.go:334] "Generic (PLEG): container finished" podID="9f3489df-94f9-427e-9863-b1999935e2fc" containerID="cc36ffdbd2f2addef1489e563c21f588ea27ec2519219dcc6cbed27cf51f3bb2" exitCode=0 Oct 11 08:49:45 crc kubenswrapper[5016]: I1011 08:49:45.730312 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gh4m" event={"ID":"9f3489df-94f9-427e-9863-b1999935e2fc","Type":"ContainerDied","Data":"cc36ffdbd2f2addef1489e563c21f588ea27ec2519219dcc6cbed27cf51f3bb2"} Oct 11 08:49:46 crc kubenswrapper[5016]: I1011 08:49:46.742452 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gh4m" 
event={"ID":"9f3489df-94f9-427e-9863-b1999935e2fc","Type":"ContainerStarted","Data":"95465bb9b6f281dfb9e00a9a5190668e5f4daedefee8ddb000a90548f1b13c69"} Oct 11 08:49:46 crc kubenswrapper[5016]: I1011 08:49:46.766213 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9gh4m" podStartSLOduration=3.164440937 podStartE2EDuration="5.76619409s" podCreationTimestamp="2025-10-11 08:49:41 +0000 UTC" firstStartedPulling="2025-10-11 08:49:43.706963579 +0000 UTC m=+4171.607419525" lastFinishedPulling="2025-10-11 08:49:46.308716732 +0000 UTC m=+4174.209172678" observedRunningTime="2025-10-11 08:49:46.758792333 +0000 UTC m=+4174.659248279" watchObservedRunningTime="2025-10-11 08:49:46.76619409 +0000 UTC m=+4174.666650036" Oct 11 08:49:51 crc kubenswrapper[5016]: I1011 08:49:51.972354 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9gh4m" Oct 11 08:49:51 crc kubenswrapper[5016]: I1011 08:49:51.972927 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9gh4m" Oct 11 08:49:52 crc kubenswrapper[5016]: I1011 08:49:52.068317 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9gh4m" Oct 11 08:49:52 crc kubenswrapper[5016]: I1011 08:49:52.871822 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9gh4m" Oct 11 08:49:52 crc kubenswrapper[5016]: I1011 08:49:52.938105 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9gh4m"] Oct 11 08:49:54 crc kubenswrapper[5016]: I1011 08:49:54.830893 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9gh4m" podUID="9f3489df-94f9-427e-9863-b1999935e2fc" containerName="registry-server" containerID="cri-o://95465bb9b6f281dfb9e00a9a5190668e5f4daedefee8ddb000a90548f1b13c69" gracePeriod=2 Oct 11 08:49:55 crc kubenswrapper[5016]: I1011 08:49:55.775228 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9gh4m" Oct 11 08:49:55 crc kubenswrapper[5016]: I1011 08:49:55.843725 5016 generic.go:334] "Generic (PLEG): container finished" podID="9f3489df-94f9-427e-9863-b1999935e2fc" containerID="95465bb9b6f281dfb9e00a9a5190668e5f4daedefee8ddb000a90548f1b13c69" exitCode=0 Oct 11 08:49:55 crc kubenswrapper[5016]: I1011 08:49:55.843776 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9gh4m" Oct 11 08:49:55 crc kubenswrapper[5016]: I1011 08:49:55.843776 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gh4m" event={"ID":"9f3489df-94f9-427e-9863-b1999935e2fc","Type":"ContainerDied","Data":"95465bb9b6f281dfb9e00a9a5190668e5f4daedefee8ddb000a90548f1b13c69"} Oct 11 08:49:55 crc kubenswrapper[5016]: I1011 08:49:55.843927 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gh4m" event={"ID":"9f3489df-94f9-427e-9863-b1999935e2fc","Type":"ContainerDied","Data":"3592f99af9eb8c6f284437460e176c578a9db4801396b8ba0b3b9fdb31002a13"} Oct 11 08:49:55 crc kubenswrapper[5016]: I1011 08:49:55.843959 5016 scope.go:117] "RemoveContainer" containerID="95465bb9b6f281dfb9e00a9a5190668e5f4daedefee8ddb000a90548f1b13c69" Oct 11 08:49:55 crc kubenswrapper[5016]: I1011 08:49:55.872057 5016 scope.go:117] "RemoveContainer" containerID="cc36ffdbd2f2addef1489e563c21f588ea27ec2519219dcc6cbed27cf51f3bb2" Oct 11 08:49:55 crc kubenswrapper[5016]: I1011 08:49:55.874114 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f3489df-94f9-427e-9863-b1999935e2fc-utilities\") pod \"9f3489df-94f9-427e-9863-b1999935e2fc\" (UID: \"9f3489df-94f9-427e-9863-b1999935e2fc\") " Oct 11 08:49:55 crc kubenswrapper[5016]: I1011 08:49:55.874309 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hss2c\" (UniqueName: \"kubernetes.io/projected/9f3489df-94f9-427e-9863-b1999935e2fc-kube-api-access-hss2c\") pod \"9f3489df-94f9-427e-9863-b1999935e2fc\" (UID: \"9f3489df-94f9-427e-9863-b1999935e2fc\") " Oct 11 08:49:55 crc kubenswrapper[5016]: I1011 08:49:55.874342 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f3489df-94f9-427e-9863-b1999935e2fc-catalog-content\") pod \"9f3489df-94f9-427e-9863-b1999935e2fc\" (UID: \"9f3489df-94f9-427e-9863-b1999935e2fc\") " Oct 11 08:49:55 crc kubenswrapper[5016]: I1011 08:49:55.875767 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f3489df-94f9-427e-9863-b1999935e2fc-utilities" (OuterVolumeSpecName: "utilities") pod "9f3489df-94f9-427e-9863-b1999935e2fc" (UID: "9f3489df-94f9-427e-9863-b1999935e2fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:49:55 crc kubenswrapper[5016]: I1011 08:49:55.884581 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f3489df-94f9-427e-9863-b1999935e2fc-kube-api-access-hss2c" (OuterVolumeSpecName: "kube-api-access-hss2c") pod "9f3489df-94f9-427e-9863-b1999935e2fc" (UID: "9f3489df-94f9-427e-9863-b1999935e2fc"). InnerVolumeSpecName "kube-api-access-hss2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:49:55 crc kubenswrapper[5016]: I1011 08:49:55.939883 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f3489df-94f9-427e-9863-b1999935e2fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f3489df-94f9-427e-9863-b1999935e2fc" (UID: "9f3489df-94f9-427e-9863-b1999935e2fc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:49:55 crc kubenswrapper[5016]: I1011 08:49:55.941364 5016 scope.go:117] "RemoveContainer" containerID="5a96f405eb68d4cee99cfa19b2ee9e5db3c7920487d3af30c8b9a66b3f0fcbd9" Oct 11 08:49:55 crc kubenswrapper[5016]: I1011 08:49:55.977492 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f3489df-94f9-427e-9863-b1999935e2fc-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 08:49:55 crc kubenswrapper[5016]: I1011 08:49:55.977530 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hss2c\" (UniqueName: \"kubernetes.io/projected/9f3489df-94f9-427e-9863-b1999935e2fc-kube-api-access-hss2c\") on node \"crc\" DevicePath \"\"" Oct 11 08:49:55 crc kubenswrapper[5016]: I1011 08:49:55.977548 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f3489df-94f9-427e-9863-b1999935e2fc-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 08:49:55 crc kubenswrapper[5016]: I1011 08:49:55.988416 5016 scope.go:117] "RemoveContainer" containerID="95465bb9b6f281dfb9e00a9a5190668e5f4daedefee8ddb000a90548f1b13c69" Oct 11 08:49:55 crc kubenswrapper[5016]: E1011 08:49:55.989456 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95465bb9b6f281dfb9e00a9a5190668e5f4daedefee8ddb000a90548f1b13c69\": container with ID starting with 95465bb9b6f281dfb9e00a9a5190668e5f4daedefee8ddb000a90548f1b13c69 not found: ID does not exist" containerID="95465bb9b6f281dfb9e00a9a5190668e5f4daedefee8ddb000a90548f1b13c69" Oct 11 08:49:55 crc kubenswrapper[5016]: I1011 08:49:55.989540 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95465bb9b6f281dfb9e00a9a5190668e5f4daedefee8ddb000a90548f1b13c69"} err="failed to get container status \"95465bb9b6f281dfb9e00a9a5190668e5f4daedefee8ddb000a90548f1b13c69\": rpc error: code = NotFound desc = could not find container \"95465bb9b6f281dfb9e00a9a5190668e5f4daedefee8ddb000a90548f1b13c69\": container with ID starting with 95465bb9b6f281dfb9e00a9a5190668e5f4daedefee8ddb000a90548f1b13c69 not found: ID does not exist" Oct 11 08:49:55 crc kubenswrapper[5016]: I1011 08:49:55.989584 5016 scope.go:117] "RemoveContainer" containerID="cc36ffdbd2f2addef1489e563c21f588ea27ec2519219dcc6cbed27cf51f3bb2" Oct 11 08:49:55 crc kubenswrapper[5016]: E1011 08:49:55.990162 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc36ffdbd2f2addef1489e563c21f588ea27ec2519219dcc6cbed27cf51f3bb2\": container with ID starting with cc36ffdbd2f2addef1489e563c21f588ea27ec2519219dcc6cbed27cf51f3bb2 not found: ID does not exist" containerID="cc36ffdbd2f2addef1489e563c21f588ea27ec2519219dcc6cbed27cf51f3bb2" Oct 11 08:49:55 crc kubenswrapper[5016]: I1011 08:49:55.990200 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc36ffdbd2f2addef1489e563c21f588ea27ec2519219dcc6cbed27cf51f3bb2"} err="failed to get container status \"cc36ffdbd2f2addef1489e563c21f588ea27ec2519219dcc6cbed27cf51f3bb2\": rpc error: code = NotFound desc = could not find container \"cc36ffdbd2f2addef1489e563c21f588ea27ec2519219dcc6cbed27cf51f3bb2\": container with ID starting with cc36ffdbd2f2addef1489e563c21f588ea27ec2519219dcc6cbed27cf51f3bb2 not found: ID does not exist" Oct 11 08:49:55 crc 
Oct 11 08:49:55 crc kubenswrapper[5016]: I1011 08:49:55.990247 5016 scope.go:117] "RemoveContainer" containerID="5a96f405eb68d4cee99cfa19b2ee9e5db3c7920487d3af30c8b9a66b3f0fcbd9"
Oct 11 08:49:55 crc kubenswrapper[5016]: E1011 08:49:55.990627 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a96f405eb68d4cee99cfa19b2ee9e5db3c7920487d3af30c8b9a66b3f0fcbd9\": container with ID starting with 5a96f405eb68d4cee99cfa19b2ee9e5db3c7920487d3af30c8b9a66b3f0fcbd9 not found: ID does not exist" containerID="5a96f405eb68d4cee99cfa19b2ee9e5db3c7920487d3af30c8b9a66b3f0fcbd9"
Oct 11 08:49:55 crc kubenswrapper[5016]: I1011 08:49:55.990744 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a96f405eb68d4cee99cfa19b2ee9e5db3c7920487d3af30c8b9a66b3f0fcbd9"} err="failed to get container status \"5a96f405eb68d4cee99cfa19b2ee9e5db3c7920487d3af30c8b9a66b3f0fcbd9\": rpc error: code = NotFound desc = could not find container \"5a96f405eb68d4cee99cfa19b2ee9e5db3c7920487d3af30c8b9a66b3f0fcbd9\": container with ID starting with 5a96f405eb68d4cee99cfa19b2ee9e5db3c7920487d3af30c8b9a66b3f0fcbd9 not found: ID does not exist"
Oct 11 08:49:56 crc kubenswrapper[5016]: I1011 08:49:56.177216 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9gh4m"]
Oct 11 08:49:56 crc kubenswrapper[5016]: I1011 08:49:56.186883 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9gh4m"]
Oct 11 08:49:57 crc kubenswrapper[5016]: I1011 08:49:57.143876 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f3489df-94f9-427e-9863-b1999935e2fc" path="/var/lib/kubelet/pods/9f3489df-94f9-427e-9863-b1999935e2fc/volumes"
Oct 11 08:50:07 crc kubenswrapper[5016]: I1011 08:50:07.122386 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 11 08:50:07 crc kubenswrapper[5016]: I1011 08:50:07.123046 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 11 08:50:37 crc kubenswrapper[5016]: I1011 08:50:37.122083 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 11 08:50:37 crc kubenswrapper[5016]: I1011 08:50:37.123269 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 11 08:51:07 crc kubenswrapper[5016]: I1011 08:51:07.122722 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 11 08:51:07 crc kubenswrapper[5016]: I1011 08:51:07.123197 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 11 08:51:07 crc kubenswrapper[5016]: I1011 08:51:07.123238 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-49bvc"
Oct 11 08:51:07 crc kubenswrapper[5016]: I1011 08:51:07.123941 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409"} pod="openshift-machine-config-operator/machine-config-daemon-49bvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 11 08:51:07 crc kubenswrapper[5016]: I1011 08:51:07.124004 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" containerID="cri-o://aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409" gracePeriod=600
Oct 11 08:51:07 crc kubenswrapper[5016]: E1011 08:51:07.244734 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:51:07 crc kubenswrapper[5016]: I1011 08:51:07.502867 5016 generic.go:334] "Generic (PLEG): container finished" podID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerID="aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409" exitCode=0
Oct 11 08:51:07 crc kubenswrapper[5016]: I1011 08:51:07.502911 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerDied","Data":"aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409"}
Oct 11 08:51:07 crc kubenswrapper[5016]: I1011 08:51:07.502952 5016 scope.go:117] "RemoveContainer" containerID="29a16057090ebd432c8022638c5e84d7f6eff5fe06fd4cea789dacfdafcb7bd6"
Oct 11 08:51:07 crc kubenswrapper[5016]: I1011 08:51:07.504013 5016 scope.go:117] "RemoveContainer" containerID="aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409"
Oct 11 08:51:07 crc kubenswrapper[5016]: E1011 08:51:07.504550 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:51:20 crc kubenswrapper[5016]: I1011 08:51:20.133598 5016 scope.go:117] "RemoveContainer" containerID="aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409"
Oct 11 08:51:20 crc kubenswrapper[5016]: E1011 08:51:20.134449 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:51:32 crc kubenswrapper[5016]: I1011 08:51:32.133380 5016 scope.go:117] "RemoveContainer" containerID="aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409"
Oct 11 08:51:32 crc kubenswrapper[5016]: E1011 08:51:32.134399 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:51:44 crc kubenswrapper[5016]: I1011 08:51:44.134877 5016 scope.go:117] "RemoveContainer" containerID="aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409"
Oct 11 08:51:44 crc kubenswrapper[5016]: E1011 08:51:44.136246 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:51:59 crc kubenswrapper[5016]: I1011 08:51:59.133957 5016 scope.go:117] "RemoveContainer" containerID="aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409"
Oct 11 08:51:59 crc kubenswrapper[5016]: E1011 08:51:59.134949 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:52:13 crc kubenswrapper[5016]: I1011 08:52:13.149620 5016 scope.go:117] "RemoveContainer" containerID="aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409"
Oct 11 08:52:13 crc kubenswrapper[5016]: E1011 08:52:13.151193 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:52:27 crc kubenswrapper[5016]: I1011 08:52:27.136791 5016 scope.go:117] "RemoveContainer" containerID="aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409"
Oct 11 08:52:27 crc kubenswrapper[5016]: E1011 08:52:27.137848 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:52:41 crc kubenswrapper[5016]: I1011 08:52:41.133885 5016 scope.go:117] "RemoveContainer" containerID="aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409"
Oct 11 08:52:41 crc kubenswrapper[5016]: E1011 08:52:41.135309 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:52:55 crc kubenswrapper[5016]: I1011 08:52:55.133484 5016 scope.go:117] "RemoveContainer" containerID="aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409"
Oct 11 08:52:55 crc kubenswrapper[5016]: E1011 08:52:55.134168 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 08:52:56 crc kubenswrapper[5016]: I1011 08:52:56.850971 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kkqrw"]
Oct 11 08:52:56 crc kubenswrapper[5016]: E1011 08:52:56.851772 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f3489df-94f9-427e-9863-b1999935e2fc" containerName="extract-utilities"
Oct 11 08:52:56 crc kubenswrapper[5016]: I1011 08:52:56.851792 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f3489df-94f9-427e-9863-b1999935e2fc" containerName="extract-utilities"
Oct 11 08:52:56 crc kubenswrapper[5016]: E1011 08:52:56.851826 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f3489df-94f9-427e-9863-b1999935e2fc" containerName="extract-content"
Oct 11 08:52:56 crc kubenswrapper[5016]: I1011 08:52:56.851835 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f3489df-94f9-427e-9863-b1999935e2fc" containerName="extract-content"
Oct 11 08:52:56 crc kubenswrapper[5016]: E1011 08:52:56.851858 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f3489df-94f9-427e-9863-b1999935e2fc" containerName="registry-server"
Oct 11 08:52:56 crc kubenswrapper[5016]: I1011 08:52:56.851866 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f3489df-94f9-427e-9863-b1999935e2fc" containerName="registry-server"
Oct 11 08:52:56 crc kubenswrapper[5016]: I1011 08:52:56.852063 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f3489df-94f9-427e-9863-b1999935e2fc" containerName="registry-server"
Oct 11 08:52:56 crc kubenswrapper[5016]: I1011 08:52:56.855542 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkqrw"
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkqrw" Oct 11 08:52:56 crc kubenswrapper[5016]: I1011 08:52:56.863422 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkqrw"] Oct 11 08:52:57 crc kubenswrapper[5016]: I1011 08:52:57.051942 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9ed4d4-4a44-4a4a-afbe-e90a1a969340-catalog-content\") pod \"redhat-marketplace-kkqrw\" (UID: \"dc9ed4d4-4a44-4a4a-afbe-e90a1a969340\") " pod="openshift-marketplace/redhat-marketplace-kkqrw" Oct 11 08:52:57 crc kubenswrapper[5016]: I1011 08:52:57.052343 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9ed4d4-4a44-4a4a-afbe-e90a1a969340-utilities\") pod \"redhat-marketplace-kkqrw\" (UID: \"dc9ed4d4-4a44-4a4a-afbe-e90a1a969340\") " pod="openshift-marketplace/redhat-marketplace-kkqrw" Oct 11 08:52:57 crc kubenswrapper[5016]: I1011 08:52:57.052646 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjq76\" (UniqueName: \"kubernetes.io/projected/dc9ed4d4-4a44-4a4a-afbe-e90a1a969340-kube-api-access-gjq76\") pod \"redhat-marketplace-kkqrw\" (UID: \"dc9ed4d4-4a44-4a4a-afbe-e90a1a969340\") " pod="openshift-marketplace/redhat-marketplace-kkqrw" Oct 11 08:52:57 crc kubenswrapper[5016]: I1011 08:52:57.154693 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9ed4d4-4a44-4a4a-afbe-e90a1a969340-catalog-content\") pod \"redhat-marketplace-kkqrw\" (UID: \"dc9ed4d4-4a44-4a4a-afbe-e90a1a969340\") " pod="openshift-marketplace/redhat-marketplace-kkqrw" Oct 11 08:52:57 crc kubenswrapper[5016]: I1011 08:52:57.154759 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9ed4d4-4a44-4a4a-afbe-e90a1a969340-utilities\") pod \"redhat-marketplace-kkqrw\" (UID: \"dc9ed4d4-4a44-4a4a-afbe-e90a1a969340\") " pod="openshift-marketplace/redhat-marketplace-kkqrw" Oct 11 08:52:57 crc kubenswrapper[5016]: I1011 08:52:57.154871 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjq76\" (UniqueName: \"kubernetes.io/projected/dc9ed4d4-4a44-4a4a-afbe-e90a1a969340-kube-api-access-gjq76\") pod \"redhat-marketplace-kkqrw\" (UID: \"dc9ed4d4-4a44-4a4a-afbe-e90a1a969340\") " pod="openshift-marketplace/redhat-marketplace-kkqrw" Oct 11 08:52:57 crc kubenswrapper[5016]: I1011 08:52:57.155140 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9ed4d4-4a44-4a4a-afbe-e90a1a969340-catalog-content\") pod \"redhat-marketplace-kkqrw\" (UID: \"dc9ed4d4-4a44-4a4a-afbe-e90a1a969340\") " pod="openshift-marketplace/redhat-marketplace-kkqrw" Oct 11 08:52:57 crc kubenswrapper[5016]: I1011 08:52:57.155206 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9ed4d4-4a44-4a4a-afbe-e90a1a969340-utilities\") pod \"redhat-marketplace-kkqrw\" (UID: \"dc9ed4d4-4a44-4a4a-afbe-e90a1a969340\") " pod="openshift-marketplace/redhat-marketplace-kkqrw" Oct 11 08:52:57 crc kubenswrapper[5016]: I1011 08:52:57.174964 5016 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-gjq76\" (UniqueName: \"kubernetes.io/projected/dc9ed4d4-4a44-4a4a-afbe-e90a1a969340-kube-api-access-gjq76\") pod \"redhat-marketplace-kkqrw\" (UID: \"dc9ed4d4-4a44-4a4a-afbe-e90a1a969340\") " pod="openshift-marketplace/redhat-marketplace-kkqrw" Oct 11 08:52:57 crc kubenswrapper[5016]: I1011 08:52:57.179237 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkqrw" Oct 11 08:52:57 crc kubenswrapper[5016]: I1011 08:52:57.669780 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkqrw"] Oct 11 08:52:58 crc kubenswrapper[5016]: I1011 08:52:58.560860 5016 generic.go:334] "Generic (PLEG): container finished" podID="dc9ed4d4-4a44-4a4a-afbe-e90a1a969340" containerID="070c0e9c94d13c27b3fcb305bab9cf6237dbd543cf84bccf48c8f7b53f4d7ceb" exitCode=0 Oct 11 08:52:58 crc kubenswrapper[5016]: I1011 08:52:58.560952 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkqrw" event={"ID":"dc9ed4d4-4a44-4a4a-afbe-e90a1a969340","Type":"ContainerDied","Data":"070c0e9c94d13c27b3fcb305bab9cf6237dbd543cf84bccf48c8f7b53f4d7ceb"} Oct 11 08:52:58 crc kubenswrapper[5016]: I1011 08:52:58.561200 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkqrw" event={"ID":"dc9ed4d4-4a44-4a4a-afbe-e90a1a969340","Type":"ContainerStarted","Data":"7123ee85602fe028cf0f9a7d7cdd33aadc764e02851375a5f8f48dd5ce62cc05"} Oct 11 08:52:59 crc kubenswrapper[5016]: I1011 08:52:59.574955 5016 generic.go:334] "Generic (PLEG): container finished" podID="dc9ed4d4-4a44-4a4a-afbe-e90a1a969340" containerID="047220bdfee4b9cafa530ae09823cf8acb23dc349f5aaff1448a28f574831b15" exitCode=0 Oct 11 08:52:59 crc kubenswrapper[5016]: I1011 08:52:59.575463 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkqrw" event={"ID":"dc9ed4d4-4a44-4a4a-afbe-e90a1a969340","Type":"ContainerDied","Data":"047220bdfee4b9cafa530ae09823cf8acb23dc349f5aaff1448a28f574831b15"} Oct 11 08:53:00 crc kubenswrapper[5016]: I1011 08:53:00.587428 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkqrw" event={"ID":"dc9ed4d4-4a44-4a4a-afbe-e90a1a969340","Type":"ContainerStarted","Data":"931e18aa9c4cbcd5ea0b0372633ddd603c6c366791b80b06429dc1eea043bf3f"} Oct 11 08:53:00 crc kubenswrapper[5016]: I1011 08:53:00.617381 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kkqrw" podStartSLOduration=3.196543423 podStartE2EDuration="4.617357857s" podCreationTimestamp="2025-10-11 08:52:56 +0000 UTC" firstStartedPulling="2025-10-11 08:52:58.564472767 +0000 UTC m=+4366.464928713" lastFinishedPulling="2025-10-11 08:52:59.985287171 +0000 UTC m=+4367.885743147" observedRunningTime="2025-10-11 08:53:00.607842765 +0000 UTC m=+4368.508298731" watchObservedRunningTime="2025-10-11 08:53:00.617357857 +0000 UTC m=+4368.517813803" Oct 11 08:53:07 crc kubenswrapper[5016]: I1011 08:53:07.135026 5016 scope.go:117] "RemoveContainer" containerID="aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409" Oct 11 08:53:07 crc kubenswrapper[5016]: E1011 08:53:07.136177 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:53:07 crc kubenswrapper[5016]: I1011 08:53:07.180268 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kkqrw" Oct 11 08:53:07 crc kubenswrapper[5016]: I1011 08:53:07.180335 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kkqrw" Oct 11 08:53:07 crc kubenswrapper[5016]: I1011 08:53:07.254292 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kkqrw" Oct 11 08:53:07 crc kubenswrapper[5016]: I1011 08:53:07.710769 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kkqrw" Oct 11 08:53:07 crc kubenswrapper[5016]: I1011 08:53:07.766426 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkqrw"] Oct 11 08:53:09 crc kubenswrapper[5016]: I1011 08:53:09.671279 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kkqrw" podUID="dc9ed4d4-4a44-4a4a-afbe-e90a1a969340" containerName="registry-server" containerID="cri-o://931e18aa9c4cbcd5ea0b0372633ddd603c6c366791b80b06429dc1eea043bf3f" gracePeriod=2 Oct 11 08:53:10 crc kubenswrapper[5016]: I1011 08:53:10.458154 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkqrw" Oct 11 08:53:10 crc kubenswrapper[5016]: I1011 08:53:10.510296 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9ed4d4-4a44-4a4a-afbe-e90a1a969340-utilities\") pod \"dc9ed4d4-4a44-4a4a-afbe-e90a1a969340\" (UID: \"dc9ed4d4-4a44-4a4a-afbe-e90a1a969340\") " Oct 11 08:53:10 crc kubenswrapper[5016]: I1011 08:53:10.510474 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjq76\" (UniqueName: \"kubernetes.io/projected/dc9ed4d4-4a44-4a4a-afbe-e90a1a969340-kube-api-access-gjq76\") pod \"dc9ed4d4-4a44-4a4a-afbe-e90a1a969340\" (UID: \"dc9ed4d4-4a44-4a4a-afbe-e90a1a969340\") " Oct 11 08:53:10 crc kubenswrapper[5016]: I1011 08:53:10.510716 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9ed4d4-4a44-4a4a-afbe-e90a1a969340-catalog-content\") pod \"dc9ed4d4-4a44-4a4a-afbe-e90a1a969340\" (UID: \"dc9ed4d4-4a44-4a4a-afbe-e90a1a969340\") " Oct 11 08:53:10 crc kubenswrapper[5016]: I1011 08:53:10.511403 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc9ed4d4-4a44-4a4a-afbe-e90a1a969340-utilities" (OuterVolumeSpecName: "utilities") pod "dc9ed4d4-4a44-4a4a-afbe-e90a1a969340" (UID: "dc9ed4d4-4a44-4a4a-afbe-e90a1a969340"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:53:10 crc kubenswrapper[5016]: I1011 08:53:10.528585 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc9ed4d4-4a44-4a4a-afbe-e90a1a969340-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc9ed4d4-4a44-4a4a-afbe-e90a1a969340" (UID: "dc9ed4d4-4a44-4a4a-afbe-e90a1a969340"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:53:10 crc kubenswrapper[5016]: I1011 08:53:10.530915 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc9ed4d4-4a44-4a4a-afbe-e90a1a969340-kube-api-access-gjq76" (OuterVolumeSpecName: "kube-api-access-gjq76") pod "dc9ed4d4-4a44-4a4a-afbe-e90a1a969340" (UID: "dc9ed4d4-4a44-4a4a-afbe-e90a1a969340"). InnerVolumeSpecName "kube-api-access-gjq76". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:53:10 crc kubenswrapper[5016]: I1011 08:53:10.613167 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjq76\" (UniqueName: \"kubernetes.io/projected/dc9ed4d4-4a44-4a4a-afbe-e90a1a969340-kube-api-access-gjq76\") on node \"crc\" DevicePath \"\"" Oct 11 08:53:10 crc kubenswrapper[5016]: I1011 08:53:10.613197 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9ed4d4-4a44-4a4a-afbe-e90a1a969340-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 08:53:10 crc kubenswrapper[5016]: I1011 08:53:10.613206 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9ed4d4-4a44-4a4a-afbe-e90a1a969340-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 08:53:10 crc kubenswrapper[5016]: I1011 08:53:10.684880 5016 generic.go:334] "Generic (PLEG): container finished" podID="dc9ed4d4-4a44-4a4a-afbe-e90a1a969340" containerID="931e18aa9c4cbcd5ea0b0372633ddd603c6c366791b80b06429dc1eea043bf3f" exitCode=0 Oct 11 08:53:10 crc kubenswrapper[5016]: I1011 08:53:10.684952 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkqrw" event={"ID":"dc9ed4d4-4a44-4a4a-afbe-e90a1a969340","Type":"ContainerDied","Data":"931e18aa9c4cbcd5ea0b0372633ddd603c6c366791b80b06429dc1eea043bf3f"} Oct 11 08:53:10 crc kubenswrapper[5016]: I1011 08:53:10.685010 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkqrw" event={"ID":"dc9ed4d4-4a44-4a4a-afbe-e90a1a969340","Type":"ContainerDied","Data":"7123ee85602fe028cf0f9a7d7cdd33aadc764e02851375a5f8f48dd5ce62cc05"} Oct 11 08:53:10 crc kubenswrapper[5016]: I1011 08:53:10.685038 5016 scope.go:117] "RemoveContainer" containerID="931e18aa9c4cbcd5ea0b0372633ddd603c6c366791b80b06429dc1eea043bf3f" Oct 11 08:53:10 crc kubenswrapper[5016]: I1011 08:53:10.685276 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkqrw" Oct 11 08:53:10 crc kubenswrapper[5016]: I1011 08:53:10.731943 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkqrw"] Oct 11 08:53:10 crc kubenswrapper[5016]: I1011 08:53:10.736402 5016 scope.go:117] "RemoveContainer" containerID="047220bdfee4b9cafa530ae09823cf8acb23dc349f5aaff1448a28f574831b15" Oct 11 08:53:10 crc kubenswrapper[5016]: I1011 08:53:10.742472 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkqrw"] Oct 11 08:53:10 crc kubenswrapper[5016]: I1011 08:53:10.769424 5016 scope.go:117] "RemoveContainer" containerID="070c0e9c94d13c27b3fcb305bab9cf6237dbd543cf84bccf48c8f7b53f4d7ceb" Oct 11 08:53:10 crc kubenswrapper[5016]: I1011 08:53:10.818802 5016 scope.go:117] "RemoveContainer" containerID="931e18aa9c4cbcd5ea0b0372633ddd603c6c366791b80b06429dc1eea043bf3f" Oct 11 08:53:10 crc kubenswrapper[5016]: E1011 08:53:10.824063 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"931e18aa9c4cbcd5ea0b0372633ddd603c6c366791b80b06429dc1eea043bf3f\": container with ID starting with 931e18aa9c4cbcd5ea0b0372633ddd603c6c366791b80b06429dc1eea043bf3f not found: ID does not exist" containerID="931e18aa9c4cbcd5ea0b0372633ddd603c6c366791b80b06429dc1eea043bf3f" Oct 11 08:53:10 crc kubenswrapper[5016]: I1011 08:53:10.824108 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"931e18aa9c4cbcd5ea0b0372633ddd603c6c366791b80b06429dc1eea043bf3f"} err="failed to get container status \"931e18aa9c4cbcd5ea0b0372633ddd603c6c366791b80b06429dc1eea043bf3f\": rpc error: code = NotFound desc = could not find container \"931e18aa9c4cbcd5ea0b0372633ddd603c6c366791b80b06429dc1eea043bf3f\": container with ID starting with 931e18aa9c4cbcd5ea0b0372633ddd603c6c366791b80b06429dc1eea043bf3f not found: ID does not exist" Oct 11 08:53:10 crc kubenswrapper[5016]: I1011 08:53:10.824137 5016 scope.go:117] "RemoveContainer" containerID="047220bdfee4b9cafa530ae09823cf8acb23dc349f5aaff1448a28f574831b15" Oct 11 08:53:10 crc kubenswrapper[5016]: E1011 08:53:10.824496 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"047220bdfee4b9cafa530ae09823cf8acb23dc349f5aaff1448a28f574831b15\": container with ID starting with 047220bdfee4b9cafa530ae09823cf8acb23dc349f5aaff1448a28f574831b15 not found: ID does not exist" containerID="047220bdfee4b9cafa530ae09823cf8acb23dc349f5aaff1448a28f574831b15" Oct 11 08:53:10 crc kubenswrapper[5016]: I1011 08:53:10.824528 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"047220bdfee4b9cafa530ae09823cf8acb23dc349f5aaff1448a28f574831b15"} err="failed to get container status \"047220bdfee4b9cafa530ae09823cf8acb23dc349f5aaff1448a28f574831b15\": rpc error: code = NotFound desc = could not find container \"047220bdfee4b9cafa530ae09823cf8acb23dc349f5aaff1448a28f574831b15\": container with ID starting with 047220bdfee4b9cafa530ae09823cf8acb23dc349f5aaff1448a28f574831b15 not found: ID does not exist" Oct 11 08:53:10 crc kubenswrapper[5016]: I1011 08:53:10.824554 5016 scope.go:117] "RemoveContainer" containerID="070c0e9c94d13c27b3fcb305bab9cf6237dbd543cf84bccf48c8f7b53f4d7ceb" Oct 11 08:53:10 crc kubenswrapper[5016]: E1011 08:53:10.824830 5016 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"070c0e9c94d13c27b3fcb305bab9cf6237dbd543cf84bccf48c8f7b53f4d7ceb\": container with ID starting with 070c0e9c94d13c27b3fcb305bab9cf6237dbd543cf84bccf48c8f7b53f4d7ceb not found: ID does not exist" containerID="070c0e9c94d13c27b3fcb305bab9cf6237dbd543cf84bccf48c8f7b53f4d7ceb" Oct 11 08:53:10 crc kubenswrapper[5016]: I1011 08:53:10.824860 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"070c0e9c94d13c27b3fcb305bab9cf6237dbd543cf84bccf48c8f7b53f4d7ceb"} err="failed to get container status \"070c0e9c94d13c27b3fcb305bab9cf6237dbd543cf84bccf48c8f7b53f4d7ceb\": rpc error: code = NotFound desc = could not find container \"070c0e9c94d13c27b3fcb305bab9cf6237dbd543cf84bccf48c8f7b53f4d7ceb\": container with ID starting with 070c0e9c94d13c27b3fcb305bab9cf6237dbd543cf84bccf48c8f7b53f4d7ceb not found: ID does not exist" Oct 11 08:53:11 crc kubenswrapper[5016]: I1011 08:53:11.164502 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc9ed4d4-4a44-4a4a-afbe-e90a1a969340" path="/var/lib/kubelet/pods/dc9ed4d4-4a44-4a4a-afbe-e90a1a969340/volumes" Oct 11 08:53:22 crc kubenswrapper[5016]: I1011 08:53:22.134627 5016 scope.go:117] "RemoveContainer" containerID="aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409" Oct 11 08:53:22 crc kubenswrapper[5016]: E1011 08:53:22.136346 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:53:34 crc kubenswrapper[5016]: I1011 08:53:34.133239 5016 scope.go:117] "RemoveContainer" containerID="aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409" Oct 11 08:53:34 crc kubenswrapper[5016]: E1011 08:53:34.134095 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:53:46 crc kubenswrapper[5016]: I1011 08:53:46.133744 5016 scope.go:117] "RemoveContainer" containerID="aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409" Oct 11 08:53:46 crc kubenswrapper[5016]: E1011 08:53:46.134841 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:53:57 crc kubenswrapper[5016]: I1011 08:53:57.133484 5016 scope.go:117] "RemoveContainer" containerID="aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409" Oct 11 08:53:57 crc kubenswrapper[5016]: E1011 08:53:57.134392 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:54:10 crc kubenswrapper[5016]: I1011 08:54:10.133211 5016 scope.go:117] "RemoveContainer" containerID="aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409" Oct 11 08:54:10 crc kubenswrapper[5016]: E1011 08:54:10.133873 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:54:25 crc kubenswrapper[5016]: I1011 08:54:25.133476 5016 scope.go:117] "RemoveContainer" containerID="aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409" Oct 11 08:54:25 crc kubenswrapper[5016]: E1011 08:54:25.135565 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:54:39 crc kubenswrapper[5016]: I1011 08:54:39.133367 5016 scope.go:117] "RemoveContainer" containerID="aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409" Oct 11 08:54:39 crc kubenswrapper[5016]: E1011 08:54:39.134203 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:54:48 crc kubenswrapper[5016]: I1011 08:54:48.263539 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hj6tz"] Oct 11 08:54:48 crc kubenswrapper[5016]: E1011 08:54:48.264818 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc9ed4d4-4a44-4a4a-afbe-e90a1a969340" containerName="extract-utilities" Oct 11 08:54:48 crc kubenswrapper[5016]: I1011 08:54:48.264837 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc9ed4d4-4a44-4a4a-afbe-e90a1a969340" containerName="extract-utilities" Oct 11 08:54:48 crc kubenswrapper[5016]: E1011 08:54:48.264989 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc9ed4d4-4a44-4a4a-afbe-e90a1a969340" containerName="registry-server" Oct 11 08:54:48 crc kubenswrapper[5016]: I1011 08:54:48.264998 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc9ed4d4-4a44-4a4a-afbe-e90a1a969340" containerName="registry-server" Oct 11 08:54:48 crc kubenswrapper[5016]: E1011 08:54:48.265011 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc9ed4d4-4a44-4a4a-afbe-e90a1a969340" containerName="extract-content" Oct 11 08:54:48 crc 
kubenswrapper[5016]: I1011 08:54:48.265018 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc9ed4d4-4a44-4a4a-afbe-e90a1a969340" containerName="extract-content" Oct 11 08:54:48 crc kubenswrapper[5016]: I1011 08:54:48.265220 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc9ed4d4-4a44-4a4a-afbe-e90a1a969340" containerName="registry-server" Oct 11 08:54:48 crc kubenswrapper[5016]: I1011 08:54:48.267651 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hj6tz" Oct 11 08:54:48 crc kubenswrapper[5016]: I1011 08:54:48.294481 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hj6tz"] Oct 11 08:54:48 crc kubenswrapper[5016]: I1011 08:54:48.324993 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spptb\" (UniqueName: \"kubernetes.io/projected/5dc2a4cc-1700-4f22-a452-1614c6bd6ad3-kube-api-access-spptb\") pod \"redhat-operators-hj6tz\" (UID: \"5dc2a4cc-1700-4f22-a452-1614c6bd6ad3\") " pod="openshift-marketplace/redhat-operators-hj6tz" Oct 11 08:54:48 crc kubenswrapper[5016]: I1011 08:54:48.326196 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dc2a4cc-1700-4f22-a452-1614c6bd6ad3-utilities\") pod \"redhat-operators-hj6tz\" (UID: \"5dc2a4cc-1700-4f22-a452-1614c6bd6ad3\") " pod="openshift-marketplace/redhat-operators-hj6tz" Oct 11 08:54:48 crc kubenswrapper[5016]: I1011 08:54:48.326432 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dc2a4cc-1700-4f22-a452-1614c6bd6ad3-catalog-content\") pod \"redhat-operators-hj6tz\" (UID: \"5dc2a4cc-1700-4f22-a452-1614c6bd6ad3\") " pod="openshift-marketplace/redhat-operators-hj6tz" Oct 11 08:54:48 crc kubenswrapper[5016]: I1011 08:54:48.430194 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spptb\" (UniqueName: \"kubernetes.io/projected/5dc2a4cc-1700-4f22-a452-1614c6bd6ad3-kube-api-access-spptb\") pod \"redhat-operators-hj6tz\" (UID: \"5dc2a4cc-1700-4f22-a452-1614c6bd6ad3\") " pod="openshift-marketplace/redhat-operators-hj6tz" Oct 11 08:54:48 crc kubenswrapper[5016]: I1011 08:54:48.430269 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dc2a4cc-1700-4f22-a452-1614c6bd6ad3-utilities\") pod \"redhat-operators-hj6tz\" (UID: \"5dc2a4cc-1700-4f22-a452-1614c6bd6ad3\") " pod="openshift-marketplace/redhat-operators-hj6tz" Oct 11 08:54:48 crc kubenswrapper[5016]: I1011 08:54:48.430304 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dc2a4cc-1700-4f22-a452-1614c6bd6ad3-catalog-content\") pod \"redhat-operators-hj6tz\" (UID: \"5dc2a4cc-1700-4f22-a452-1614c6bd6ad3\") " pod="openshift-marketplace/redhat-operators-hj6tz" Oct 11 08:54:48 crc kubenswrapper[5016]: I1011 08:54:48.431107 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dc2a4cc-1700-4f22-a452-1614c6bd6ad3-catalog-content\") pod \"redhat-operators-hj6tz\" (UID: \"5dc2a4cc-1700-4f22-a452-1614c6bd6ad3\") " pod="openshift-marketplace/redhat-operators-hj6tz" Oct 11 08:54:48 crc 
kubenswrapper[5016]: I1011 08:54:48.431159 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dc2a4cc-1700-4f22-a452-1614c6bd6ad3-utilities\") pod \"redhat-operators-hj6tz\" (UID: \"5dc2a4cc-1700-4f22-a452-1614c6bd6ad3\") " pod="openshift-marketplace/redhat-operators-hj6tz" Oct 11 08:54:48 crc kubenswrapper[5016]: I1011 08:54:48.458814 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spptb\" (UniqueName: \"kubernetes.io/projected/5dc2a4cc-1700-4f22-a452-1614c6bd6ad3-kube-api-access-spptb\") pod \"redhat-operators-hj6tz\" (UID: \"5dc2a4cc-1700-4f22-a452-1614c6bd6ad3\") " pod="openshift-marketplace/redhat-operators-hj6tz" Oct 11 08:54:48 crc kubenswrapper[5016]: I1011 08:54:48.593146 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hj6tz" Oct 11 08:54:49 crc kubenswrapper[5016]: I1011 08:54:49.182730 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hj6tz"] Oct 11 08:54:49 crc kubenswrapper[5016]: I1011 08:54:49.687945 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hj6tz" event={"ID":"5dc2a4cc-1700-4f22-a452-1614c6bd6ad3","Type":"ContainerStarted","Data":"48045f7c4e941849b8d759fc3d49d86e15725f483a8c66a17d1735fd0d766c2d"} Oct 11 08:54:50 crc kubenswrapper[5016]: I1011 08:54:50.707175 5016 generic.go:334] "Generic (PLEG): container finished" podID="5dc2a4cc-1700-4f22-a452-1614c6bd6ad3" containerID="454fa71219b1b878647fb6d2b3b397407a76f9c6b7400636fa1d7ebf0f5518b6" exitCode=0 Oct 11 08:54:50 crc kubenswrapper[5016]: I1011 08:54:50.707452 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hj6tz" event={"ID":"5dc2a4cc-1700-4f22-a452-1614c6bd6ad3","Type":"ContainerDied","Data":"454fa71219b1b878647fb6d2b3b397407a76f9c6b7400636fa1d7ebf0f5518b6"} Oct 11 08:54:50 crc kubenswrapper[5016]: I1011 08:54:50.710145 5016 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 08:54:51 crc kubenswrapper[5016]: I1011 08:54:51.133935 5016 scope.go:117] "RemoveContainer" containerID="aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409" Oct 11 08:54:51 crc kubenswrapper[5016]: E1011 08:54:51.134945 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:54:52 crc kubenswrapper[5016]: I1011 08:54:52.734264 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hj6tz" event={"ID":"5dc2a4cc-1700-4f22-a452-1614c6bd6ad3","Type":"ContainerStarted","Data":"ecf2f0adc78a57a86294b936ed4d9eee34c77bd30b6df6b7e5e2ac9c2eeb6fd6"} Oct 11 08:54:53 crc kubenswrapper[5016]: I1011 08:54:53.748531 5016 generic.go:334] "Generic (PLEG): container finished" podID="5dc2a4cc-1700-4f22-a452-1614c6bd6ad3" containerID="ecf2f0adc78a57a86294b936ed4d9eee34c77bd30b6df6b7e5e2ac9c2eeb6fd6" exitCode=0 Oct 11 08:54:53 crc kubenswrapper[5016]: I1011 08:54:53.748585 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-hj6tz" event={"ID":"5dc2a4cc-1700-4f22-a452-1614c6bd6ad3","Type":"ContainerDied","Data":"ecf2f0adc78a57a86294b936ed4d9eee34c77bd30b6df6b7e5e2ac9c2eeb6fd6"} Oct 11 08:54:55 crc kubenswrapper[5016]: I1011 08:54:55.766558 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hj6tz" event={"ID":"5dc2a4cc-1700-4f22-a452-1614c6bd6ad3","Type":"ContainerStarted","Data":"8c21819091709050d5cfdb418d0be72e794fefc163921b009723435fa3991da5"} Oct 11 08:54:55 crc kubenswrapper[5016]: I1011 08:54:55.792918 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hj6tz" podStartSLOduration=3.858734176 podStartE2EDuration="7.792895089s" podCreationTimestamp="2025-10-11 08:54:48 +0000 UTC" firstStartedPulling="2025-10-11 08:54:50.70981211 +0000 UTC m=+4478.610268066" lastFinishedPulling="2025-10-11 08:54:54.643972993 +0000 UTC m=+4482.544428979" observedRunningTime="2025-10-11 08:54:55.786853358 +0000 UTC m=+4483.687309314" watchObservedRunningTime="2025-10-11 08:54:55.792895089 +0000 UTC m=+4483.693351035" Oct 11 08:54:58 crc kubenswrapper[5016]: I1011 08:54:58.593421 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hj6tz" Oct 11 08:54:58 crc kubenswrapper[5016]: I1011 08:54:58.594109 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hj6tz" Oct 11 08:54:59 crc kubenswrapper[5016]: I1011 08:54:59.643425 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hj6tz" podUID="5dc2a4cc-1700-4f22-a452-1614c6bd6ad3" containerName="registry-server" probeResult="failure" output=< Oct 11 08:54:59 crc kubenswrapper[5016]: timeout: failed to connect service ":50051" within 1s Oct 11 08:54:59 crc kubenswrapper[5016]: > Oct 11 08:55:05 crc kubenswrapper[5016]: I1011 08:55:05.133336 5016 scope.go:117] "RemoveContainer" containerID="aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409" Oct 11 08:55:05 crc kubenswrapper[5016]: E1011 08:55:05.134131 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:55:08 crc kubenswrapper[5016]: I1011 08:55:08.644935 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hj6tz" Oct 11 08:55:08 crc kubenswrapper[5016]: I1011 08:55:08.694877 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hj6tz" Oct 11 08:55:08 crc kubenswrapper[5016]: I1011 08:55:08.885829 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hj6tz"] Oct 11 08:55:09 crc kubenswrapper[5016]: I1011 08:55:09.887773 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hj6tz" podUID="5dc2a4cc-1700-4f22-a452-1614c6bd6ad3" containerName="registry-server" containerID="cri-o://8c21819091709050d5cfdb418d0be72e794fefc163921b009723435fa3991da5" gracePeriod=2 Oct 11 08:55:10 crc 
kubenswrapper[5016]: I1011 08:55:10.646576 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hj6tz" Oct 11 08:55:10 crc kubenswrapper[5016]: I1011 08:55:10.721300 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spptb\" (UniqueName: \"kubernetes.io/projected/5dc2a4cc-1700-4f22-a452-1614c6bd6ad3-kube-api-access-spptb\") pod \"5dc2a4cc-1700-4f22-a452-1614c6bd6ad3\" (UID: \"5dc2a4cc-1700-4f22-a452-1614c6bd6ad3\") " Oct 11 08:55:10 crc kubenswrapper[5016]: I1011 08:55:10.721554 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dc2a4cc-1700-4f22-a452-1614c6bd6ad3-utilities\") pod \"5dc2a4cc-1700-4f22-a452-1614c6bd6ad3\" (UID: \"5dc2a4cc-1700-4f22-a452-1614c6bd6ad3\") " Oct 11 08:55:10 crc kubenswrapper[5016]: I1011 08:55:10.721624 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dc2a4cc-1700-4f22-a452-1614c6bd6ad3-catalog-content\") pod \"5dc2a4cc-1700-4f22-a452-1614c6bd6ad3\" (UID: \"5dc2a4cc-1700-4f22-a452-1614c6bd6ad3\") " Oct 11 08:55:10 crc kubenswrapper[5016]: I1011 08:55:10.722601 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dc2a4cc-1700-4f22-a452-1614c6bd6ad3-utilities" (OuterVolumeSpecName: "utilities") pod "5dc2a4cc-1700-4f22-a452-1614c6bd6ad3" (UID: "5dc2a4cc-1700-4f22-a452-1614c6bd6ad3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:55:10 crc kubenswrapper[5016]: I1011 08:55:10.739905 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dc2a4cc-1700-4f22-a452-1614c6bd6ad3-kube-api-access-spptb" (OuterVolumeSpecName: "kube-api-access-spptb") pod "5dc2a4cc-1700-4f22-a452-1614c6bd6ad3" (UID: "5dc2a4cc-1700-4f22-a452-1614c6bd6ad3"). InnerVolumeSpecName "kube-api-access-spptb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:55:10 crc kubenswrapper[5016]: I1011 08:55:10.809828 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dc2a4cc-1700-4f22-a452-1614c6bd6ad3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5dc2a4cc-1700-4f22-a452-1614c6bd6ad3" (UID: "5dc2a4cc-1700-4f22-a452-1614c6bd6ad3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:55:10 crc kubenswrapper[5016]: I1011 08:55:10.824433 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spptb\" (UniqueName: \"kubernetes.io/projected/5dc2a4cc-1700-4f22-a452-1614c6bd6ad3-kube-api-access-spptb\") on node \"crc\" DevicePath \"\"" Oct 11 08:55:10 crc kubenswrapper[5016]: I1011 08:55:10.824503 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dc2a4cc-1700-4f22-a452-1614c6bd6ad3-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 08:55:10 crc kubenswrapper[5016]: I1011 08:55:10.824515 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dc2a4cc-1700-4f22-a452-1614c6bd6ad3-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 08:55:10 crc kubenswrapper[5016]: I1011 08:55:10.898585 5016 generic.go:334] "Generic (PLEG): container finished" podID="5dc2a4cc-1700-4f22-a452-1614c6bd6ad3" containerID="8c21819091709050d5cfdb418d0be72e794fefc163921b009723435fa3991da5" exitCode=0 Oct 11 08:55:10 crc kubenswrapper[5016]: I1011 08:55:10.898625 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hj6tz" Oct 11 08:55:10 crc kubenswrapper[5016]: I1011 08:55:10.898647 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hj6tz" event={"ID":"5dc2a4cc-1700-4f22-a452-1614c6bd6ad3","Type":"ContainerDied","Data":"8c21819091709050d5cfdb418d0be72e794fefc163921b009723435fa3991da5"} Oct 11 08:55:10 crc kubenswrapper[5016]: I1011 08:55:10.898774 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hj6tz" event={"ID":"5dc2a4cc-1700-4f22-a452-1614c6bd6ad3","Type":"ContainerDied","Data":"48045f7c4e941849b8d759fc3d49d86e15725f483a8c66a17d1735fd0d766c2d"} Oct 11 08:55:10 crc kubenswrapper[5016]: I1011 08:55:10.898817 5016 scope.go:117] "RemoveContainer" containerID="8c21819091709050d5cfdb418d0be72e794fefc163921b009723435fa3991da5" Oct 11 08:55:10 crc kubenswrapper[5016]: I1011 08:55:10.918038 5016 scope.go:117] "RemoveContainer" containerID="ecf2f0adc78a57a86294b936ed4d9eee34c77bd30b6df6b7e5e2ac9c2eeb6fd6" Oct 11 08:55:10 crc kubenswrapper[5016]: I1011 08:55:10.942564 5016 scope.go:117] "RemoveContainer" containerID="454fa71219b1b878647fb6d2b3b397407a76f9c6b7400636fa1d7ebf0f5518b6" Oct 11 08:55:11 crc kubenswrapper[5016]: I1011 08:55:11.007193 5016 scope.go:117] "RemoveContainer" containerID="8c21819091709050d5cfdb418d0be72e794fefc163921b009723435fa3991da5" Oct 11 08:55:11 crc kubenswrapper[5016]: E1011 08:55:11.007905 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c21819091709050d5cfdb418d0be72e794fefc163921b009723435fa3991da5\": container with ID starting with 8c21819091709050d5cfdb418d0be72e794fefc163921b009723435fa3991da5 not found: ID does not exist" containerID="8c21819091709050d5cfdb418d0be72e794fefc163921b009723435fa3991da5" Oct 11 08:55:11 crc kubenswrapper[5016]: I1011 08:55:11.007958 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c21819091709050d5cfdb418d0be72e794fefc163921b009723435fa3991da5"} err="failed to get container status \"8c21819091709050d5cfdb418d0be72e794fefc163921b009723435fa3991da5\": rpc error: code = NotFound desc = could not find container 
\"8c21819091709050d5cfdb418d0be72e794fefc163921b009723435fa3991da5\": container with ID starting with 8c21819091709050d5cfdb418d0be72e794fefc163921b009723435fa3991da5 not found: ID does not exist" Oct 11 08:55:11 crc kubenswrapper[5016]: I1011 08:55:11.007989 5016 scope.go:117] "RemoveContainer" containerID="ecf2f0adc78a57a86294b936ed4d9eee34c77bd30b6df6b7e5e2ac9c2eeb6fd6" Oct 11 08:55:11 crc kubenswrapper[5016]: E1011 08:55:11.008435 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecf2f0adc78a57a86294b936ed4d9eee34c77bd30b6df6b7e5e2ac9c2eeb6fd6\": container with ID starting with ecf2f0adc78a57a86294b936ed4d9eee34c77bd30b6df6b7e5e2ac9c2eeb6fd6 not found: ID does not exist" containerID="ecf2f0adc78a57a86294b936ed4d9eee34c77bd30b6df6b7e5e2ac9c2eeb6fd6" Oct 11 08:55:11 crc kubenswrapper[5016]: I1011 08:55:11.008479 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecf2f0adc78a57a86294b936ed4d9eee34c77bd30b6df6b7e5e2ac9c2eeb6fd6"} err="failed to get container status \"ecf2f0adc78a57a86294b936ed4d9eee34c77bd30b6df6b7e5e2ac9c2eeb6fd6\": rpc error: code = NotFound desc = could not find container \"ecf2f0adc78a57a86294b936ed4d9eee34c77bd30b6df6b7e5e2ac9c2eeb6fd6\": container with ID starting with ecf2f0adc78a57a86294b936ed4d9eee34c77bd30b6df6b7e5e2ac9c2eeb6fd6 not found: ID does not exist" Oct 11 08:55:11 crc kubenswrapper[5016]: I1011 08:55:11.008509 5016 scope.go:117] "RemoveContainer" containerID="454fa71219b1b878647fb6d2b3b397407a76f9c6b7400636fa1d7ebf0f5518b6" Oct 11 08:55:11 crc kubenswrapper[5016]: E1011 08:55:11.008830 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"454fa71219b1b878647fb6d2b3b397407a76f9c6b7400636fa1d7ebf0f5518b6\": container with ID starting with 454fa71219b1b878647fb6d2b3b397407a76f9c6b7400636fa1d7ebf0f5518b6 not found: ID does not exist" containerID="454fa71219b1b878647fb6d2b3b397407a76f9c6b7400636fa1d7ebf0f5518b6" Oct 11 08:55:11 crc kubenswrapper[5016]: I1011 08:55:11.008857 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"454fa71219b1b878647fb6d2b3b397407a76f9c6b7400636fa1d7ebf0f5518b6"} err="failed to get container status \"454fa71219b1b878647fb6d2b3b397407a76f9c6b7400636fa1d7ebf0f5518b6\": rpc error: code = NotFound desc = could not find container \"454fa71219b1b878647fb6d2b3b397407a76f9c6b7400636fa1d7ebf0f5518b6\": container with ID starting with 454fa71219b1b878647fb6d2b3b397407a76f9c6b7400636fa1d7ebf0f5518b6 not found: ID does not exist" Oct 11 08:55:11 crc kubenswrapper[5016]: I1011 08:55:11.011870 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hj6tz"] Oct 11 08:55:11 crc kubenswrapper[5016]: I1011 08:55:11.019718 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hj6tz"] Oct 11 08:55:11 crc kubenswrapper[5016]: I1011 08:55:11.146281 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dc2a4cc-1700-4f22-a452-1614c6bd6ad3" path="/var/lib/kubelet/pods/5dc2a4cc-1700-4f22-a452-1614c6bd6ad3/volumes" Oct 11 08:55:16 crc kubenswrapper[5016]: I1011 08:55:16.133505 5016 scope.go:117] "RemoveContainer" containerID="aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409" Oct 11 08:55:16 crc kubenswrapper[5016]: E1011 08:55:16.134639 5016 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:55:28 crc kubenswrapper[5016]: I1011 08:55:28.134278 5016 scope.go:117] "RemoveContainer" containerID="aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409" Oct 11 08:55:28 crc kubenswrapper[5016]: E1011 08:55:28.135270 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:55:39 crc kubenswrapper[5016]: I1011 08:55:39.134620 5016 scope.go:117] "RemoveContainer" containerID="aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409" Oct 11 08:55:39 crc kubenswrapper[5016]: E1011 08:55:39.135720 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:55:50 crc kubenswrapper[5016]: I1011 08:55:50.133519 5016 scope.go:117] "RemoveContainer" containerID="aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409" Oct 11 08:55:50 crc kubenswrapper[5016]: E1011 08:55:50.134737 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:56:01 crc kubenswrapper[5016]: I1011 08:56:01.134420 5016 scope.go:117] "RemoveContainer" containerID="aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409" Oct 11 08:56:01 crc kubenswrapper[5016]: E1011 08:56:01.135242 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 08:56:12 crc kubenswrapper[5016]: I1011 08:56:12.134462 5016 scope.go:117] "RemoveContainer" containerID="aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409" Oct 11 08:56:13 crc kubenswrapper[5016]: I1011 08:56:13.566352 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" 
event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"8a5d6c46f82632e3a479f2652b8d0209efc03409024ba6cc0d55fd1a401195d3"} Oct 11 08:58:37 crc kubenswrapper[5016]: I1011 08:58:37.122589 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 08:58:37 crc kubenswrapper[5016]: I1011 08:58:37.123632 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 08:59:07 crc kubenswrapper[5016]: I1011 08:59:07.123063 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 08:59:07 crc kubenswrapper[5016]: I1011 08:59:07.124268 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 08:59:37 crc kubenswrapper[5016]: I1011 08:59:37.122066 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 08:59:37 crc kubenswrapper[5016]: I1011 08:59:37.123204 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 08:59:37 crc kubenswrapper[5016]: I1011 08:59:37.123273 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 08:59:37 crc kubenswrapper[5016]: I1011 08:59:37.124500 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8a5d6c46f82632e3a479f2652b8d0209efc03409024ba6cc0d55fd1a401195d3"} pod="openshift-machine-config-operator/machine-config-daemon-49bvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 11 08:59:37 crc kubenswrapper[5016]: I1011 08:59:37.124588 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" containerID="cri-o://8a5d6c46f82632e3a479f2652b8d0209efc03409024ba6cc0d55fd1a401195d3" gracePeriod=600 Oct 11 08:59:37 crc kubenswrapper[5016]: I1011 08:59:37.931275 5016 generic.go:334] "Generic (PLEG): container finished" 
podID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerID="8a5d6c46f82632e3a479f2652b8d0209efc03409024ba6cc0d55fd1a401195d3" exitCode=0 Oct 11 08:59:37 crc kubenswrapper[5016]: I1011 08:59:37.931366 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerDied","Data":"8a5d6c46f82632e3a479f2652b8d0209efc03409024ba6cc0d55fd1a401195d3"} Oct 11 08:59:37 crc kubenswrapper[5016]: I1011 08:59:37.931891 5016 scope.go:117] "RemoveContainer" containerID="aee135b91bffecdb1fefbf8fd96168a74e33fa0f1fed82c4e98678982613e409" Oct 11 08:59:38 crc kubenswrapper[5016]: I1011 08:59:38.946690 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955"} Oct 11 08:59:42 crc kubenswrapper[5016]: I1011 08:59:42.042399 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2dxz8"] Oct 11 08:59:42 crc kubenswrapper[5016]: E1011 08:59:42.045341 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dc2a4cc-1700-4f22-a452-1614c6bd6ad3" containerName="registry-server" Oct 11 08:59:42 crc kubenswrapper[5016]: I1011 08:59:42.045361 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dc2a4cc-1700-4f22-a452-1614c6bd6ad3" containerName="registry-server" Oct 11 08:59:42 crc kubenswrapper[5016]: E1011 08:59:42.045580 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dc2a4cc-1700-4f22-a452-1614c6bd6ad3" containerName="extract-utilities" Oct 11 08:59:42 crc kubenswrapper[5016]: I1011 08:59:42.045591 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dc2a4cc-1700-4f22-a452-1614c6bd6ad3" containerName="extract-utilities" Oct 11 08:59:42 crc kubenswrapper[5016]: E1011 08:59:42.045635 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dc2a4cc-1700-4f22-a452-1614c6bd6ad3" containerName="extract-content" Oct 11 08:59:42 crc kubenswrapper[5016]: I1011 08:59:42.045643 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dc2a4cc-1700-4f22-a452-1614c6bd6ad3" containerName="extract-content" Oct 11 08:59:42 crc kubenswrapper[5016]: I1011 08:59:42.046972 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dc2a4cc-1700-4f22-a452-1614c6bd6ad3" containerName="registry-server" Oct 11 08:59:42 crc kubenswrapper[5016]: I1011 08:59:42.051394 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2dxz8" Oct 11 08:59:42 crc kubenswrapper[5016]: I1011 08:59:42.057410 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2dxz8"] Oct 11 08:59:42 crc kubenswrapper[5016]: I1011 08:59:42.167955 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d26202af-af0a-4dde-94f6-7ef42b256799-utilities\") pod \"certified-operators-2dxz8\" (UID: \"d26202af-af0a-4dde-94f6-7ef42b256799\") " pod="openshift-marketplace/certified-operators-2dxz8" Oct 11 08:59:42 crc kubenswrapper[5016]: I1011 08:59:42.168113 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d26202af-af0a-4dde-94f6-7ef42b256799-catalog-content\") pod \"certified-operators-2dxz8\" (UID: \"d26202af-af0a-4dde-94f6-7ef42b256799\") " pod="openshift-marketplace/certified-operators-2dxz8" Oct 11 08:59:42 crc kubenswrapper[5016]: I1011 08:59:42.168244 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5hg4\" (UniqueName: \"kubernetes.io/projected/d26202af-af0a-4dde-94f6-7ef42b256799-kube-api-access-d5hg4\") pod \"certified-operators-2dxz8\" (UID: \"d26202af-af0a-4dde-94f6-7ef42b256799\") " pod="openshift-marketplace/certified-operators-2dxz8" Oct 11 08:59:42 crc kubenswrapper[5016]: I1011 08:59:42.271908 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d26202af-af0a-4dde-94f6-7ef42b256799-utilities\") pod \"certified-operators-2dxz8\" (UID: \"d26202af-af0a-4dde-94f6-7ef42b256799\") " pod="openshift-marketplace/certified-operators-2dxz8" Oct 11 08:59:42 crc kubenswrapper[5016]: I1011 08:59:42.272058 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d26202af-af0a-4dde-94f6-7ef42b256799-catalog-content\") pod \"certified-operators-2dxz8\" (UID: \"d26202af-af0a-4dde-94f6-7ef42b256799\") " pod="openshift-marketplace/certified-operators-2dxz8" Oct 11 08:59:42 crc kubenswrapper[5016]: I1011 08:59:42.272089 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5hg4\" (UniqueName: \"kubernetes.io/projected/d26202af-af0a-4dde-94f6-7ef42b256799-kube-api-access-d5hg4\") pod \"certified-operators-2dxz8\" (UID: \"d26202af-af0a-4dde-94f6-7ef42b256799\") " pod="openshift-marketplace/certified-operators-2dxz8" Oct 11 08:59:42 crc kubenswrapper[5016]: I1011 08:59:42.272521 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d26202af-af0a-4dde-94f6-7ef42b256799-utilities\") pod \"certified-operators-2dxz8\" (UID: \"d26202af-af0a-4dde-94f6-7ef42b256799\") " pod="openshift-marketplace/certified-operators-2dxz8" Oct 11 08:59:42 crc kubenswrapper[5016]: I1011 08:59:42.273197 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d26202af-af0a-4dde-94f6-7ef42b256799-catalog-content\") pod \"certified-operators-2dxz8\" (UID: \"d26202af-af0a-4dde-94f6-7ef42b256799\") " pod="openshift-marketplace/certified-operators-2dxz8" Oct 11 08:59:42 crc kubenswrapper[5016]: I1011 08:59:42.302044 5016 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-d5hg4\" (UniqueName: \"kubernetes.io/projected/d26202af-af0a-4dde-94f6-7ef42b256799-kube-api-access-d5hg4\") pod \"certified-operators-2dxz8\" (UID: \"d26202af-af0a-4dde-94f6-7ef42b256799\") " pod="openshift-marketplace/certified-operators-2dxz8" Oct 11 08:59:42 crc kubenswrapper[5016]: I1011 08:59:42.395014 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2dxz8" Oct 11 08:59:43 crc kubenswrapper[5016]: I1011 08:59:43.050929 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8q99c"] Oct 11 08:59:43 crc kubenswrapper[5016]: I1011 08:59:43.076666 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8q99c" Oct 11 08:59:43 crc kubenswrapper[5016]: I1011 08:59:43.117045 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2dxz8"] Oct 11 08:59:43 crc kubenswrapper[5016]: I1011 08:59:43.127513 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8q99c"] Oct 11 08:59:43 crc kubenswrapper[5016]: I1011 08:59:43.210234 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6531730b-f7a8-4e31-b142-ec986d745af4-catalog-content\") pod \"community-operators-8q99c\" (UID: \"6531730b-f7a8-4e31-b142-ec986d745af4\") " pod="openshift-marketplace/community-operators-8q99c" Oct 11 08:59:43 crc kubenswrapper[5016]: I1011 08:59:43.210454 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlw9r\" (UniqueName: \"kubernetes.io/projected/6531730b-f7a8-4e31-b142-ec986d745af4-kube-api-access-rlw9r\") pod \"community-operators-8q99c\" (UID: \"6531730b-f7a8-4e31-b142-ec986d745af4\") " pod="openshift-marketplace/community-operators-8q99c" Oct 11 08:59:43 crc kubenswrapper[5016]: I1011 08:59:43.210807 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6531730b-f7a8-4e31-b142-ec986d745af4-utilities\") pod \"community-operators-8q99c\" (UID: \"6531730b-f7a8-4e31-b142-ec986d745af4\") " pod="openshift-marketplace/community-operators-8q99c" Oct 11 08:59:43 crc kubenswrapper[5016]: I1011 08:59:43.313250 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6531730b-f7a8-4e31-b142-ec986d745af4-catalog-content\") pod \"community-operators-8q99c\" (UID: \"6531730b-f7a8-4e31-b142-ec986d745af4\") " pod="openshift-marketplace/community-operators-8q99c" Oct 11 08:59:43 crc kubenswrapper[5016]: I1011 08:59:43.313564 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlw9r\" (UniqueName: \"kubernetes.io/projected/6531730b-f7a8-4e31-b142-ec986d745af4-kube-api-access-rlw9r\") pod \"community-operators-8q99c\" (UID: \"6531730b-f7a8-4e31-b142-ec986d745af4\") " pod="openshift-marketplace/community-operators-8q99c" Oct 11 08:59:43 crc kubenswrapper[5016]: I1011 08:59:43.314273 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6531730b-f7a8-4e31-b142-ec986d745af4-utilities\") pod \"community-operators-8q99c\" (UID: 
\"6531730b-f7a8-4e31-b142-ec986d745af4\") " pod="openshift-marketplace/community-operators-8q99c" Oct 11 08:59:43 crc kubenswrapper[5016]: I1011 08:59:43.315019 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6531730b-f7a8-4e31-b142-ec986d745af4-catalog-content\") pod \"community-operators-8q99c\" (UID: \"6531730b-f7a8-4e31-b142-ec986d745af4\") " pod="openshift-marketplace/community-operators-8q99c" Oct 11 08:59:43 crc kubenswrapper[5016]: I1011 08:59:43.315141 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6531730b-f7a8-4e31-b142-ec986d745af4-utilities\") pod \"community-operators-8q99c\" (UID: \"6531730b-f7a8-4e31-b142-ec986d745af4\") " pod="openshift-marketplace/community-operators-8q99c" Oct 11 08:59:43 crc kubenswrapper[5016]: I1011 08:59:43.335929 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlw9r\" (UniqueName: \"kubernetes.io/projected/6531730b-f7a8-4e31-b142-ec986d745af4-kube-api-access-rlw9r\") pod \"community-operators-8q99c\" (UID: \"6531730b-f7a8-4e31-b142-ec986d745af4\") " pod="openshift-marketplace/community-operators-8q99c" Oct 11 08:59:43 crc kubenswrapper[5016]: I1011 08:59:43.423481 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8q99c" Oct 11 08:59:43 crc kubenswrapper[5016]: I1011 08:59:43.924749 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8q99c"] Oct 11 08:59:43 crc kubenswrapper[5016]: W1011 08:59:43.930258 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6531730b_f7a8_4e31_b142_ec986d745af4.slice/crio-038a92ec8639c89bb52a576471c52cd5ba56e4a04b40ad98218da8961ff90056 WatchSource:0}: Error finding container 038a92ec8639c89bb52a576471c52cd5ba56e4a04b40ad98218da8961ff90056: Status 404 returned error can't find the container with id 038a92ec8639c89bb52a576471c52cd5ba56e4a04b40ad98218da8961ff90056 Oct 11 08:59:44 crc kubenswrapper[5016]: I1011 08:59:44.009286 5016 generic.go:334] "Generic (PLEG): container finished" podID="d26202af-af0a-4dde-94f6-7ef42b256799" containerID="6c8dbb3430aa7121a4e2660b48dd426f5c29459cd697fa1f1528fd93aa283f92" exitCode=0 Oct 11 08:59:44 crc kubenswrapper[5016]: I1011 08:59:44.009404 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dxz8" event={"ID":"d26202af-af0a-4dde-94f6-7ef42b256799","Type":"ContainerDied","Data":"6c8dbb3430aa7121a4e2660b48dd426f5c29459cd697fa1f1528fd93aa283f92"} Oct 11 08:59:44 crc kubenswrapper[5016]: I1011 08:59:44.009451 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dxz8" event={"ID":"d26202af-af0a-4dde-94f6-7ef42b256799","Type":"ContainerStarted","Data":"52511b0143de29dd3e5147d6275bcfd7157024988f872f1d3f853c0c7d11ec1d"} Oct 11 08:59:44 crc kubenswrapper[5016]: I1011 08:59:44.014173 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8q99c" event={"ID":"6531730b-f7a8-4e31-b142-ec986d745af4","Type":"ContainerStarted","Data":"038a92ec8639c89bb52a576471c52cd5ba56e4a04b40ad98218da8961ff90056"} Oct 11 08:59:45 crc kubenswrapper[5016]: I1011 08:59:45.027410 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-2dxz8" event={"ID":"d26202af-af0a-4dde-94f6-7ef42b256799","Type":"ContainerStarted","Data":"0f5c0842ea01609fc0386664d03d4a6299351d2dca374491982882c987e08949"} Oct 11 08:59:45 crc kubenswrapper[5016]: I1011 08:59:45.030131 5016 generic.go:334] "Generic (PLEG): container finished" podID="6531730b-f7a8-4e31-b142-ec986d745af4" containerID="088bc4837a4ab30a9fc5c593d42c9a875861ef6c5e7a5cac407181abfa408365" exitCode=0 Oct 11 08:59:45 crc kubenswrapper[5016]: I1011 08:59:45.030201 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8q99c" event={"ID":"6531730b-f7a8-4e31-b142-ec986d745af4","Type":"ContainerDied","Data":"088bc4837a4ab30a9fc5c593d42c9a875861ef6c5e7a5cac407181abfa408365"} Oct 11 08:59:46 crc kubenswrapper[5016]: I1011 08:59:46.046043 5016 generic.go:334] "Generic (PLEG): container finished" podID="d26202af-af0a-4dde-94f6-7ef42b256799" containerID="0f5c0842ea01609fc0386664d03d4a6299351d2dca374491982882c987e08949" exitCode=0 Oct 11 08:59:46 crc kubenswrapper[5016]: I1011 08:59:46.046202 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dxz8" event={"ID":"d26202af-af0a-4dde-94f6-7ef42b256799","Type":"ContainerDied","Data":"0f5c0842ea01609fc0386664d03d4a6299351d2dca374491982882c987e08949"} Oct 11 08:59:46 crc kubenswrapper[5016]: I1011 08:59:46.053183 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8q99c" event={"ID":"6531730b-f7a8-4e31-b142-ec986d745af4","Type":"ContainerStarted","Data":"4327c51179e4c9e56fa8adae2389cdc2f591e79bc811f8f8850230720387431a"} Oct 11 08:59:47 crc kubenswrapper[5016]: I1011 08:59:47.069588 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dxz8" event={"ID":"d26202af-af0a-4dde-94f6-7ef42b256799","Type":"ContainerStarted","Data":"5d390e134881fd19d7d0371985fff1e59c481f75918287972ecb03f938c38f26"} Oct 11 08:59:47 crc kubenswrapper[5016]: I1011 08:59:47.075267 5016 generic.go:334] "Generic (PLEG): container finished" podID="6531730b-f7a8-4e31-b142-ec986d745af4" containerID="4327c51179e4c9e56fa8adae2389cdc2f591e79bc811f8f8850230720387431a" exitCode=0 Oct 11 08:59:47 crc kubenswrapper[5016]: I1011 08:59:47.075453 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8q99c" event={"ID":"6531730b-f7a8-4e31-b142-ec986d745af4","Type":"ContainerDied","Data":"4327c51179e4c9e56fa8adae2389cdc2f591e79bc811f8f8850230720387431a"} Oct 11 08:59:47 crc kubenswrapper[5016]: I1011 08:59:47.092499 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2dxz8" podStartSLOduration=2.522163662 podStartE2EDuration="5.092477119s" podCreationTimestamp="2025-10-11 08:59:42 +0000 UTC" firstStartedPulling="2025-10-11 08:59:44.011518655 +0000 UTC m=+4771.911974641" lastFinishedPulling="2025-10-11 08:59:46.581832152 +0000 UTC m=+4774.482288098" observedRunningTime="2025-10-11 08:59:47.088882183 +0000 UTC m=+4774.989338169" watchObservedRunningTime="2025-10-11 08:59:47.092477119 +0000 UTC m=+4774.992933075" Oct 11 08:59:48 crc kubenswrapper[5016]: I1011 08:59:48.088701 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8q99c" event={"ID":"6531730b-f7a8-4e31-b142-ec986d745af4","Type":"ContainerStarted","Data":"cc810d935bdee7a9900661884f6f55e77383b7169f2eedd92407e6e06f6d6f07"} Oct 11 
08:59:48 crc kubenswrapper[5016]: I1011 08:59:48.109434 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8q99c" podStartSLOduration=2.692210632 podStartE2EDuration="5.109412917s" podCreationTimestamp="2025-10-11 08:59:43 +0000 UTC" firstStartedPulling="2025-10-11 08:59:45.033906298 +0000 UTC m=+4772.934362244" lastFinishedPulling="2025-10-11 08:59:47.451108573 +0000 UTC m=+4775.351564529" observedRunningTime="2025-10-11 08:59:48.106271834 +0000 UTC m=+4776.006727790" watchObservedRunningTime="2025-10-11 08:59:48.109412917 +0000 UTC m=+4776.009868863" Oct 11 08:59:52 crc kubenswrapper[5016]: I1011 08:59:52.395581 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2dxz8" Oct 11 08:59:52 crc kubenswrapper[5016]: I1011 08:59:52.396790 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2dxz8" Oct 11 08:59:52 crc kubenswrapper[5016]: I1011 08:59:52.452787 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2dxz8" Oct 11 08:59:53 crc kubenswrapper[5016]: I1011 08:59:53.210942 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2dxz8" Oct 11 08:59:53 crc kubenswrapper[5016]: I1011 08:59:53.273071 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2dxz8"] Oct 11 08:59:53 crc kubenswrapper[5016]: I1011 08:59:53.423756 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8q99c" Oct 11 08:59:53 crc kubenswrapper[5016]: I1011 08:59:53.423807 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8q99c" Oct 11 08:59:53 crc kubenswrapper[5016]: I1011 08:59:53.473279 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8q99c" Oct 11 08:59:54 crc kubenswrapper[5016]: I1011 08:59:54.225004 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8q99c" Oct 11 08:59:55 crc kubenswrapper[5016]: I1011 08:59:55.167869 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2dxz8" podUID="d26202af-af0a-4dde-94f6-7ef42b256799" containerName="registry-server" containerID="cri-o://5d390e134881fd19d7d0371985fff1e59c481f75918287972ecb03f938c38f26" gracePeriod=2 Oct 11 08:59:55 crc kubenswrapper[5016]: I1011 08:59:55.623225 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8q99c"] Oct 11 08:59:55 crc kubenswrapper[5016]: I1011 08:59:55.781451 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2dxz8" Oct 11 08:59:55 crc kubenswrapper[5016]: I1011 08:59:55.850518 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d26202af-af0a-4dde-94f6-7ef42b256799-catalog-content\") pod \"d26202af-af0a-4dde-94f6-7ef42b256799\" (UID: \"d26202af-af0a-4dde-94f6-7ef42b256799\") " Oct 11 08:59:55 crc kubenswrapper[5016]: I1011 08:59:55.850719 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5hg4\" (UniqueName: \"kubernetes.io/projected/d26202af-af0a-4dde-94f6-7ef42b256799-kube-api-access-d5hg4\") pod \"d26202af-af0a-4dde-94f6-7ef42b256799\" (UID: \"d26202af-af0a-4dde-94f6-7ef42b256799\") " Oct 11 08:59:55 crc kubenswrapper[5016]: I1011 08:59:55.850756 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d26202af-af0a-4dde-94f6-7ef42b256799-utilities\") pod \"d26202af-af0a-4dde-94f6-7ef42b256799\" (UID: \"d26202af-af0a-4dde-94f6-7ef42b256799\") " Oct 11 08:59:55 crc kubenswrapper[5016]: I1011 08:59:55.851788 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d26202af-af0a-4dde-94f6-7ef42b256799-utilities" (OuterVolumeSpecName: "utilities") pod "d26202af-af0a-4dde-94f6-7ef42b256799" (UID: "d26202af-af0a-4dde-94f6-7ef42b256799"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:59:55 crc kubenswrapper[5016]: I1011 08:59:55.859852 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d26202af-af0a-4dde-94f6-7ef42b256799-kube-api-access-d5hg4" (OuterVolumeSpecName: "kube-api-access-d5hg4") pod "d26202af-af0a-4dde-94f6-7ef42b256799" (UID: "d26202af-af0a-4dde-94f6-7ef42b256799"). InnerVolumeSpecName "kube-api-access-d5hg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:59:55 crc kubenswrapper[5016]: I1011 08:59:55.909080 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d26202af-af0a-4dde-94f6-7ef42b256799-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d26202af-af0a-4dde-94f6-7ef42b256799" (UID: "d26202af-af0a-4dde-94f6-7ef42b256799"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:59:55 crc kubenswrapper[5016]: I1011 08:59:55.953876 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d26202af-af0a-4dde-94f6-7ef42b256799-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 08:59:55 crc kubenswrapper[5016]: I1011 08:59:55.953925 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5hg4\" (UniqueName: \"kubernetes.io/projected/d26202af-af0a-4dde-94f6-7ef42b256799-kube-api-access-d5hg4\") on node \"crc\" DevicePath \"\"" Oct 11 08:59:55 crc kubenswrapper[5016]: I1011 08:59:55.953939 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d26202af-af0a-4dde-94f6-7ef42b256799-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 08:59:56 crc kubenswrapper[5016]: I1011 08:59:56.182478 5016 generic.go:334] "Generic (PLEG): container finished" podID="d26202af-af0a-4dde-94f6-7ef42b256799" containerID="5d390e134881fd19d7d0371985fff1e59c481f75918287972ecb03f938c38f26" exitCode=0 Oct 11 08:59:56 crc kubenswrapper[5016]: I1011 08:59:56.182558 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2dxz8" Oct 11 08:59:56 crc kubenswrapper[5016]: I1011 08:59:56.182599 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dxz8" event={"ID":"d26202af-af0a-4dde-94f6-7ef42b256799","Type":"ContainerDied","Data":"5d390e134881fd19d7d0371985fff1e59c481f75918287972ecb03f938c38f26"} Oct 11 08:59:56 crc kubenswrapper[5016]: I1011 08:59:56.183031 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dxz8" event={"ID":"d26202af-af0a-4dde-94f6-7ef42b256799","Type":"ContainerDied","Data":"52511b0143de29dd3e5147d6275bcfd7157024988f872f1d3f853c0c7d11ec1d"} Oct 11 08:59:56 crc kubenswrapper[5016]: I1011 08:59:56.183056 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8q99c" podUID="6531730b-f7a8-4e31-b142-ec986d745af4" containerName="registry-server" containerID="cri-o://cc810d935bdee7a9900661884f6f55e77383b7169f2eedd92407e6e06f6d6f07" gracePeriod=2 Oct 11 08:59:56 crc kubenswrapper[5016]: I1011 08:59:56.183078 5016 scope.go:117] "RemoveContainer" containerID="5d390e134881fd19d7d0371985fff1e59c481f75918287972ecb03f938c38f26" Oct 11 08:59:56 crc kubenswrapper[5016]: I1011 08:59:56.229713 5016 scope.go:117] "RemoveContainer" containerID="0f5c0842ea01609fc0386664d03d4a6299351d2dca374491982882c987e08949" Oct 11 08:59:56 crc kubenswrapper[5016]: I1011 08:59:56.331710 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2dxz8"] Oct 11 08:59:56 crc kubenswrapper[5016]: I1011 08:59:56.331925 5016 scope.go:117] "RemoveContainer" containerID="6c8dbb3430aa7121a4e2660b48dd426f5c29459cd697fa1f1528fd93aa283f92" Oct 11 08:59:56 crc kubenswrapper[5016]: I1011 08:59:56.341294 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2dxz8"] Oct 11 08:59:56 crc kubenswrapper[5016]: I1011 08:59:56.418291 5016 scope.go:117] "RemoveContainer" containerID="5d390e134881fd19d7d0371985fff1e59c481f75918287972ecb03f938c38f26" Oct 11 08:59:56 crc kubenswrapper[5016]: E1011 08:59:56.419058 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"5d390e134881fd19d7d0371985fff1e59c481f75918287972ecb03f938c38f26\": container with ID starting with 5d390e134881fd19d7d0371985fff1e59c481f75918287972ecb03f938c38f26 not found: ID does not exist" containerID="5d390e134881fd19d7d0371985fff1e59c481f75918287972ecb03f938c38f26" Oct 11 08:59:56 crc kubenswrapper[5016]: I1011 08:59:56.419120 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d390e134881fd19d7d0371985fff1e59c481f75918287972ecb03f938c38f26"} err="failed to get container status \"5d390e134881fd19d7d0371985fff1e59c481f75918287972ecb03f938c38f26\": rpc error: code = NotFound desc = could not find container \"5d390e134881fd19d7d0371985fff1e59c481f75918287972ecb03f938c38f26\": container with ID starting with 5d390e134881fd19d7d0371985fff1e59c481f75918287972ecb03f938c38f26 not found: ID does not exist" Oct 11 08:59:56 crc kubenswrapper[5016]: I1011 08:59:56.419154 5016 scope.go:117] "RemoveContainer" containerID="0f5c0842ea01609fc0386664d03d4a6299351d2dca374491982882c987e08949" Oct 11 08:59:56 crc kubenswrapper[5016]: E1011 08:59:56.419811 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f5c0842ea01609fc0386664d03d4a6299351d2dca374491982882c987e08949\": container with ID starting with 0f5c0842ea01609fc0386664d03d4a6299351d2dca374491982882c987e08949 not found: ID does not exist" containerID="0f5c0842ea01609fc0386664d03d4a6299351d2dca374491982882c987e08949" Oct 11 08:59:56 crc kubenswrapper[5016]: I1011 08:59:56.419857 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f5c0842ea01609fc0386664d03d4a6299351d2dca374491982882c987e08949"} err="failed to get container status \"0f5c0842ea01609fc0386664d03d4a6299351d2dca374491982882c987e08949\": rpc error: code = NotFound desc = could not find container \"0f5c0842ea01609fc0386664d03d4a6299351d2dca374491982882c987e08949\": container with ID starting with 0f5c0842ea01609fc0386664d03d4a6299351d2dca374491982882c987e08949 not found: ID does not exist" Oct 11 08:59:56 crc kubenswrapper[5016]: I1011 08:59:56.419892 5016 scope.go:117] "RemoveContainer" containerID="6c8dbb3430aa7121a4e2660b48dd426f5c29459cd697fa1f1528fd93aa283f92" Oct 11 08:59:56 crc kubenswrapper[5016]: E1011 08:59:56.420308 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c8dbb3430aa7121a4e2660b48dd426f5c29459cd697fa1f1528fd93aa283f92\": container with ID starting with 6c8dbb3430aa7121a4e2660b48dd426f5c29459cd697fa1f1528fd93aa283f92 not found: ID does not exist" containerID="6c8dbb3430aa7121a4e2660b48dd426f5c29459cd697fa1f1528fd93aa283f92" Oct 11 08:59:56 crc kubenswrapper[5016]: I1011 08:59:56.420342 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c8dbb3430aa7121a4e2660b48dd426f5c29459cd697fa1f1528fd93aa283f92"} err="failed to get container status \"6c8dbb3430aa7121a4e2660b48dd426f5c29459cd697fa1f1528fd93aa283f92\": rpc error: code = NotFound desc = could not find container \"6c8dbb3430aa7121a4e2660b48dd426f5c29459cd697fa1f1528fd93aa283f92\": container with ID starting with 6c8dbb3430aa7121a4e2660b48dd426f5c29459cd697fa1f1528fd93aa283f92 not found: ID does not exist" Oct 11 08:59:57 crc kubenswrapper[5016]: I1011 08:59:57.148977 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d26202af-af0a-4dde-94f6-7ef42b256799" 
path="/var/lib/kubelet/pods/d26202af-af0a-4dde-94f6-7ef42b256799/volumes" Oct 11 08:59:57 crc kubenswrapper[5016]: I1011 08:59:57.200454 5016 generic.go:334] "Generic (PLEG): container finished" podID="6531730b-f7a8-4e31-b142-ec986d745af4" containerID="cc810d935bdee7a9900661884f6f55e77383b7169f2eedd92407e6e06f6d6f07" exitCode=0 Oct 11 08:59:57 crc kubenswrapper[5016]: I1011 08:59:57.200591 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8q99c" event={"ID":"6531730b-f7a8-4e31-b142-ec986d745af4","Type":"ContainerDied","Data":"cc810d935bdee7a9900661884f6f55e77383b7169f2eedd92407e6e06f6d6f07"} Oct 11 08:59:57 crc kubenswrapper[5016]: I1011 08:59:57.200723 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8q99c" event={"ID":"6531730b-f7a8-4e31-b142-ec986d745af4","Type":"ContainerDied","Data":"038a92ec8639c89bb52a576471c52cd5ba56e4a04b40ad98218da8961ff90056"} Oct 11 08:59:57 crc kubenswrapper[5016]: I1011 08:59:57.200813 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="038a92ec8639c89bb52a576471c52cd5ba56e4a04b40ad98218da8961ff90056" Oct 11 08:59:57 crc kubenswrapper[5016]: I1011 08:59:57.225010 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8q99c" Oct 11 08:59:57 crc kubenswrapper[5016]: I1011 08:59:57.288916 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6531730b-f7a8-4e31-b142-ec986d745af4-catalog-content\") pod \"6531730b-f7a8-4e31-b142-ec986d745af4\" (UID: \"6531730b-f7a8-4e31-b142-ec986d745af4\") " Oct 11 08:59:57 crc kubenswrapper[5016]: I1011 08:59:57.289065 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlw9r\" (UniqueName: \"kubernetes.io/projected/6531730b-f7a8-4e31-b142-ec986d745af4-kube-api-access-rlw9r\") pod \"6531730b-f7a8-4e31-b142-ec986d745af4\" (UID: \"6531730b-f7a8-4e31-b142-ec986d745af4\") " Oct 11 08:59:57 crc kubenswrapper[5016]: I1011 08:59:57.289338 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6531730b-f7a8-4e31-b142-ec986d745af4-utilities\") pod \"6531730b-f7a8-4e31-b142-ec986d745af4\" (UID: \"6531730b-f7a8-4e31-b142-ec986d745af4\") " Oct 11 08:59:57 crc kubenswrapper[5016]: I1011 08:59:57.291437 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6531730b-f7a8-4e31-b142-ec986d745af4-utilities" (OuterVolumeSpecName: "utilities") pod "6531730b-f7a8-4e31-b142-ec986d745af4" (UID: "6531730b-f7a8-4e31-b142-ec986d745af4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:59:57 crc kubenswrapper[5016]: I1011 08:59:57.315934 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6531730b-f7a8-4e31-b142-ec986d745af4-kube-api-access-rlw9r" (OuterVolumeSpecName: "kube-api-access-rlw9r") pod "6531730b-f7a8-4e31-b142-ec986d745af4" (UID: "6531730b-f7a8-4e31-b142-ec986d745af4"). InnerVolumeSpecName "kube-api-access-rlw9r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 08:59:57 crc kubenswrapper[5016]: I1011 08:59:57.353026 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6531730b-f7a8-4e31-b142-ec986d745af4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6531730b-f7a8-4e31-b142-ec986d745af4" (UID: "6531730b-f7a8-4e31-b142-ec986d745af4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 08:59:57 crc kubenswrapper[5016]: I1011 08:59:57.393404 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6531730b-f7a8-4e31-b142-ec986d745af4-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 08:59:57 crc kubenswrapper[5016]: I1011 08:59:57.393437 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlw9r\" (UniqueName: \"kubernetes.io/projected/6531730b-f7a8-4e31-b142-ec986d745af4-kube-api-access-rlw9r\") on node \"crc\" DevicePath \"\"" Oct 11 08:59:57 crc kubenswrapper[5016]: I1011 08:59:57.393450 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6531730b-f7a8-4e31-b142-ec986d745af4-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 08:59:58 crc kubenswrapper[5016]: I1011 08:59:58.214504 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8q99c" Oct 11 08:59:58 crc kubenswrapper[5016]: I1011 08:59:58.278391 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8q99c"] Oct 11 08:59:58 crc kubenswrapper[5016]: I1011 08:59:58.291220 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8q99c"] Oct 11 08:59:59 crc kubenswrapper[5016]: I1011 08:59:59.145785 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6531730b-f7a8-4e31-b142-ec986d745af4" path="/var/lib/kubelet/pods/6531730b-f7a8-4e31-b142-ec986d745af4/volumes" Oct 11 09:00:00 crc kubenswrapper[5016]: I1011 09:00:00.161289 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336220-l2frk"] Oct 11 09:00:00 crc kubenswrapper[5016]: E1011 09:00:00.161835 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d26202af-af0a-4dde-94f6-7ef42b256799" containerName="extract-utilities" Oct 11 09:00:00 crc kubenswrapper[5016]: I1011 09:00:00.161858 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="d26202af-af0a-4dde-94f6-7ef42b256799" containerName="extract-utilities" Oct 11 09:00:00 crc kubenswrapper[5016]: E1011 09:00:00.161887 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d26202af-af0a-4dde-94f6-7ef42b256799" containerName="extract-content" Oct 11 09:00:00 crc kubenswrapper[5016]: I1011 09:00:00.161897 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="d26202af-af0a-4dde-94f6-7ef42b256799" containerName="extract-content" Oct 11 09:00:00 crc kubenswrapper[5016]: E1011 09:00:00.161912 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6531730b-f7a8-4e31-b142-ec986d745af4" containerName="registry-server" Oct 11 09:00:00 crc kubenswrapper[5016]: I1011 09:00:00.161923 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="6531730b-f7a8-4e31-b142-ec986d745af4" containerName="registry-server" Oct 11 09:00:00 crc kubenswrapper[5016]: E1011 09:00:00.161942 
5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6531730b-f7a8-4e31-b142-ec986d745af4" containerName="extract-utilities" Oct 11 09:00:00 crc kubenswrapper[5016]: I1011 09:00:00.161951 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="6531730b-f7a8-4e31-b142-ec986d745af4" containerName="extract-utilities" Oct 11 09:00:00 crc kubenswrapper[5016]: E1011 09:00:00.161968 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d26202af-af0a-4dde-94f6-7ef42b256799" containerName="registry-server" Oct 11 09:00:00 crc kubenswrapper[5016]: I1011 09:00:00.161976 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="d26202af-af0a-4dde-94f6-7ef42b256799" containerName="registry-server" Oct 11 09:00:00 crc kubenswrapper[5016]: E1011 09:00:00.161990 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6531730b-f7a8-4e31-b142-ec986d745af4" containerName="extract-content" Oct 11 09:00:00 crc kubenswrapper[5016]: I1011 09:00:00.161997 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="6531730b-f7a8-4e31-b142-ec986d745af4" containerName="extract-content" Oct 11 09:00:00 crc kubenswrapper[5016]: I1011 09:00:00.162225 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="d26202af-af0a-4dde-94f6-7ef42b256799" containerName="registry-server" Oct 11 09:00:00 crc kubenswrapper[5016]: I1011 09:00:00.162248 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="6531730b-f7a8-4e31-b142-ec986d745af4" containerName="registry-server" Oct 11 09:00:00 crc kubenswrapper[5016]: I1011 09:00:00.163185 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336220-l2frk" Oct 11 09:00:00 crc kubenswrapper[5016]: I1011 09:00:00.167231 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 11 09:00:00 crc kubenswrapper[5016]: I1011 09:00:00.167984 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 11 09:00:00 crc kubenswrapper[5016]: I1011 09:00:00.179052 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336220-l2frk"] Oct 11 09:00:00 crc kubenswrapper[5016]: I1011 09:00:00.271332 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqshs\" (UniqueName: \"kubernetes.io/projected/6dd8f950-c385-47f7-8162-cea724b383e9-kube-api-access-cqshs\") pod \"collect-profiles-29336220-l2frk\" (UID: \"6dd8f950-c385-47f7-8162-cea724b383e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336220-l2frk" Oct 11 09:00:00 crc kubenswrapper[5016]: I1011 09:00:00.271414 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6dd8f950-c385-47f7-8162-cea724b383e9-config-volume\") pod \"collect-profiles-29336220-l2frk\" (UID: \"6dd8f950-c385-47f7-8162-cea724b383e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336220-l2frk" Oct 11 09:00:00 crc kubenswrapper[5016]: I1011 09:00:00.271441 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6dd8f950-c385-47f7-8162-cea724b383e9-secret-volume\") pod \"collect-profiles-29336220-l2frk\" 
(UID: \"6dd8f950-c385-47f7-8162-cea724b383e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336220-l2frk" Oct 11 09:00:00 crc kubenswrapper[5016]: I1011 09:00:00.374700 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqshs\" (UniqueName: \"kubernetes.io/projected/6dd8f950-c385-47f7-8162-cea724b383e9-kube-api-access-cqshs\") pod \"collect-profiles-29336220-l2frk\" (UID: \"6dd8f950-c385-47f7-8162-cea724b383e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336220-l2frk" Oct 11 09:00:00 crc kubenswrapper[5016]: I1011 09:00:00.374854 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6dd8f950-c385-47f7-8162-cea724b383e9-config-volume\") pod \"collect-profiles-29336220-l2frk\" (UID: \"6dd8f950-c385-47f7-8162-cea724b383e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336220-l2frk" Oct 11 09:00:00 crc kubenswrapper[5016]: I1011 09:00:00.374908 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6dd8f950-c385-47f7-8162-cea724b383e9-secret-volume\") pod \"collect-profiles-29336220-l2frk\" (UID: \"6dd8f950-c385-47f7-8162-cea724b383e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336220-l2frk" Oct 11 09:00:00 crc kubenswrapper[5016]: I1011 09:00:00.376468 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6dd8f950-c385-47f7-8162-cea724b383e9-config-volume\") pod \"collect-profiles-29336220-l2frk\" (UID: \"6dd8f950-c385-47f7-8162-cea724b383e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336220-l2frk" Oct 11 09:00:00 crc kubenswrapper[5016]: I1011 09:00:00.389487 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6dd8f950-c385-47f7-8162-cea724b383e9-secret-volume\") pod \"collect-profiles-29336220-l2frk\" (UID: \"6dd8f950-c385-47f7-8162-cea724b383e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336220-l2frk" Oct 11 09:00:00 crc kubenswrapper[5016]: I1011 09:00:00.393300 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqshs\" (UniqueName: \"kubernetes.io/projected/6dd8f950-c385-47f7-8162-cea724b383e9-kube-api-access-cqshs\") pod \"collect-profiles-29336220-l2frk\" (UID: \"6dd8f950-c385-47f7-8162-cea724b383e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336220-l2frk" Oct 11 09:00:00 crc kubenswrapper[5016]: I1011 09:00:00.513203 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336220-l2frk" Oct 11 09:00:01 crc kubenswrapper[5016]: I1011 09:00:01.040424 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336220-l2frk"] Oct 11 09:00:01 crc kubenswrapper[5016]: I1011 09:00:01.243009 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336220-l2frk" event={"ID":"6dd8f950-c385-47f7-8162-cea724b383e9","Type":"ContainerStarted","Data":"6c9c4d616942010cdaf0a56c6d4dbe14518af8689ddfd18cf6cae54eff1d476b"} Oct 11 09:00:01 crc kubenswrapper[5016]: I1011 09:00:01.243066 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336220-l2frk" event={"ID":"6dd8f950-c385-47f7-8162-cea724b383e9","Type":"ContainerStarted","Data":"108d508879a4ed263220207ce361c20bd0b58b23d1a8a1fd17f21f40ec48f975"} Oct 11 09:00:01 crc kubenswrapper[5016]: I1011 09:00:01.265742 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29336220-l2frk" podStartSLOduration=1.265642183 podStartE2EDuration="1.265642183s" podCreationTimestamp="2025-10-11 09:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 09:00:01.256351096 +0000 UTC m=+4789.156807042" watchObservedRunningTime="2025-10-11 09:00:01.265642183 +0000 UTC m=+4789.166098129" Oct 11 09:00:02 crc kubenswrapper[5016]: I1011 09:00:02.259394 5016 generic.go:334] "Generic (PLEG): container finished" podID="6dd8f950-c385-47f7-8162-cea724b383e9" containerID="6c9c4d616942010cdaf0a56c6d4dbe14518af8689ddfd18cf6cae54eff1d476b" exitCode=0 Oct 11 09:00:02 crc kubenswrapper[5016]: I1011 09:00:02.259591 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336220-l2frk" event={"ID":"6dd8f950-c385-47f7-8162-cea724b383e9","Type":"ContainerDied","Data":"6c9c4d616942010cdaf0a56c6d4dbe14518af8689ddfd18cf6cae54eff1d476b"} Oct 11 09:00:03 crc kubenswrapper[5016]: I1011 09:00:03.909056 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336220-l2frk" Oct 11 09:00:04 crc kubenswrapper[5016]: I1011 09:00:04.081785 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6dd8f950-c385-47f7-8162-cea724b383e9-secret-volume\") pod \"6dd8f950-c385-47f7-8162-cea724b383e9\" (UID: \"6dd8f950-c385-47f7-8162-cea724b383e9\") " Oct 11 09:00:04 crc kubenswrapper[5016]: I1011 09:00:04.081990 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqshs\" (UniqueName: \"kubernetes.io/projected/6dd8f950-c385-47f7-8162-cea724b383e9-kube-api-access-cqshs\") pod \"6dd8f950-c385-47f7-8162-cea724b383e9\" (UID: \"6dd8f950-c385-47f7-8162-cea724b383e9\") " Oct 11 09:00:04 crc kubenswrapper[5016]: I1011 09:00:04.082160 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6dd8f950-c385-47f7-8162-cea724b383e9-config-volume\") pod \"6dd8f950-c385-47f7-8162-cea724b383e9\" (UID: \"6dd8f950-c385-47f7-8162-cea724b383e9\") " Oct 11 09:00:04 crc kubenswrapper[5016]: I1011 09:00:04.085428 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dd8f950-c385-47f7-8162-cea724b383e9-config-volume" (OuterVolumeSpecName: "config-volume") pod "6dd8f950-c385-47f7-8162-cea724b383e9" (UID: "6dd8f950-c385-47f7-8162-cea724b383e9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 09:00:04 crc kubenswrapper[5016]: I1011 09:00:04.093269 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dd8f950-c385-47f7-8162-cea724b383e9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6dd8f950-c385-47f7-8162-cea724b383e9" (UID: "6dd8f950-c385-47f7-8162-cea724b383e9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 09:00:04 crc kubenswrapper[5016]: I1011 09:00:04.107312 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dd8f950-c385-47f7-8162-cea724b383e9-kube-api-access-cqshs" (OuterVolumeSpecName: "kube-api-access-cqshs") pod "6dd8f950-c385-47f7-8162-cea724b383e9" (UID: "6dd8f950-c385-47f7-8162-cea724b383e9"). InnerVolumeSpecName "kube-api-access-cqshs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 09:00:04 crc kubenswrapper[5016]: I1011 09:00:04.185527 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqshs\" (UniqueName: \"kubernetes.io/projected/6dd8f950-c385-47f7-8162-cea724b383e9-kube-api-access-cqshs\") on node \"crc\" DevicePath \"\"" Oct 11 09:00:04 crc kubenswrapper[5016]: I1011 09:00:04.185898 5016 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6dd8f950-c385-47f7-8162-cea724b383e9-config-volume\") on node \"crc\" DevicePath \"\"" Oct 11 09:00:04 crc kubenswrapper[5016]: I1011 09:00:04.185968 5016 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6dd8f950-c385-47f7-8162-cea724b383e9-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 11 09:00:04 crc kubenswrapper[5016]: I1011 09:00:04.282476 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336220-l2frk" event={"ID":"6dd8f950-c385-47f7-8162-cea724b383e9","Type":"ContainerDied","Data":"108d508879a4ed263220207ce361c20bd0b58b23d1a8a1fd17f21f40ec48f975"} Oct 11 09:00:04 crc kubenswrapper[5016]: I1011 09:00:04.282956 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="108d508879a4ed263220207ce361c20bd0b58b23d1a8a1fd17f21f40ec48f975" Oct 11 09:00:04 crc kubenswrapper[5016]: I1011 09:00:04.282588 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336220-l2frk" Oct 11 09:00:04 crc kubenswrapper[5016]: I1011 09:00:04.358502 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336175-7c25j"] Oct 11 09:00:04 crc kubenswrapper[5016]: I1011 09:00:04.368067 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336175-7c25j"] Oct 11 09:00:05 crc kubenswrapper[5016]: I1011 09:00:05.157881 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="284d8b10-4a92-482a-b882-5cb28395f892" path="/var/lib/kubelet/pods/284d8b10-4a92-482a-b882-5cb28395f892/volumes" Oct 11 09:00:30 crc kubenswrapper[5016]: I1011 09:00:30.654059 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="14ae562e-2b57-478f-89cd-8330105eacdf" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.159:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:00:31 crc kubenswrapper[5016]: I1011 09:00:31.640986 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-backup-0" podUID="2b53eb06-1432-4059-9705-ffc917af76f7" containerName="cinder-backup" probeResult="failure" output="Get \"http://10.217.0.241:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:00:31 crc kubenswrapper[5016]: I1011 09:00:31.641024 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-volume-volume1-0" podUID="f928618b-f291-4249-a756-0636b1680e66" containerName="cinder-volume" probeResult="failure" output="Get \"http://10.217.0.240:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:00:35 crc kubenswrapper[5016]: I1011 09:00:35.696055 5016 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack/cinder-scheduler-0" podUID="14ae562e-2b57-478f-89cd-8330105eacdf" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.159:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:00:36 crc kubenswrapper[5016]: I1011 09:00:36.724997 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-backup-0" podUID="2b53eb06-1432-4059-9705-ffc917af76f7" containerName="cinder-backup" probeResult="failure" output="Get \"http://10.217.0.241:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:00:36 crc kubenswrapper[5016]: I1011 09:00:36.725348 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-volume-volume1-0" podUID="f928618b-f291-4249-a756-0636b1680e66" containerName="cinder-volume" probeResult="failure" output="Get \"http://10.217.0.240:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:00:42 crc kubenswrapper[5016]: I1011 09:00:42.808324 5016 scope.go:117] "RemoveContainer" containerID="84bb3de699edd10bc653b220d54bba0083c10cc30c8b5a3ea3cb82e6171473b2" Oct 11 09:01:00 crc kubenswrapper[5016]: I1011 09:01:00.166163 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29336221-cqpgd"] Oct 11 09:01:00 crc kubenswrapper[5016]: E1011 09:01:00.167544 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dd8f950-c385-47f7-8162-cea724b383e9" containerName="collect-profiles" Oct 11 09:01:00 crc kubenswrapper[5016]: I1011 09:01:00.167570 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dd8f950-c385-47f7-8162-cea724b383e9" containerName="collect-profiles" Oct 11 09:01:00 crc kubenswrapper[5016]: I1011 09:01:00.168039 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dd8f950-c385-47f7-8162-cea724b383e9" containerName="collect-profiles" Oct 11 09:01:00 crc kubenswrapper[5016]: I1011 09:01:00.170512 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29336221-cqpgd" Oct 11 09:01:00 crc kubenswrapper[5016]: I1011 09:01:00.199014 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29336221-cqpgd"] Oct 11 09:01:00 crc kubenswrapper[5016]: I1011 09:01:00.305377 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd20c922-e2e3-4d45-a4f7-559030c97500-config-data\") pod \"keystone-cron-29336221-cqpgd\" (UID: \"fd20c922-e2e3-4d45-a4f7-559030c97500\") " pod="openstack/keystone-cron-29336221-cqpgd" Oct 11 09:01:00 crc kubenswrapper[5016]: I1011 09:01:00.305495 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd20c922-e2e3-4d45-a4f7-559030c97500-combined-ca-bundle\") pod \"keystone-cron-29336221-cqpgd\" (UID: \"fd20c922-e2e3-4d45-a4f7-559030c97500\") " pod="openstack/keystone-cron-29336221-cqpgd" Oct 11 09:01:00 crc kubenswrapper[5016]: I1011 09:01:00.305543 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fd20c922-e2e3-4d45-a4f7-559030c97500-fernet-keys\") pod \"keystone-cron-29336221-cqpgd\" (UID: \"fd20c922-e2e3-4d45-a4f7-559030c97500\") " pod="openstack/keystone-cron-29336221-cqpgd" Oct 11 09:01:00 crc kubenswrapper[5016]: I1011 09:01:00.305605 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndzkv\" (UniqueName: \"kubernetes.io/projected/fd20c922-e2e3-4d45-a4f7-559030c97500-kube-api-access-ndzkv\") pod \"keystone-cron-29336221-cqpgd\" (UID: \"fd20c922-e2e3-4d45-a4f7-559030c97500\") " pod="openstack/keystone-cron-29336221-cqpgd" Oct 11 09:01:00 crc kubenswrapper[5016]: I1011 09:01:00.407799 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd20c922-e2e3-4d45-a4f7-559030c97500-config-data\") pod \"keystone-cron-29336221-cqpgd\" (UID: \"fd20c922-e2e3-4d45-a4f7-559030c97500\") " pod="openstack/keystone-cron-29336221-cqpgd" Oct 11 09:01:00 crc kubenswrapper[5016]: I1011 09:01:00.407933 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd20c922-e2e3-4d45-a4f7-559030c97500-combined-ca-bundle\") pod \"keystone-cron-29336221-cqpgd\" (UID: \"fd20c922-e2e3-4d45-a4f7-559030c97500\") " pod="openstack/keystone-cron-29336221-cqpgd" Oct 11 09:01:00 crc kubenswrapper[5016]: I1011 09:01:00.407999 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fd20c922-e2e3-4d45-a4f7-559030c97500-fernet-keys\") pod \"keystone-cron-29336221-cqpgd\" (UID: \"fd20c922-e2e3-4d45-a4f7-559030c97500\") " pod="openstack/keystone-cron-29336221-cqpgd" Oct 11 09:01:00 crc kubenswrapper[5016]: I1011 09:01:00.408076 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndzkv\" (UniqueName: \"kubernetes.io/projected/fd20c922-e2e3-4d45-a4f7-559030c97500-kube-api-access-ndzkv\") pod \"keystone-cron-29336221-cqpgd\" (UID: \"fd20c922-e2e3-4d45-a4f7-559030c97500\") " pod="openstack/keystone-cron-29336221-cqpgd" Oct 11 09:01:00 crc kubenswrapper[5016]: I1011 09:01:00.420607 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd20c922-e2e3-4d45-a4f7-559030c97500-combined-ca-bundle\") pod \"keystone-cron-29336221-cqpgd\" (UID: \"fd20c922-e2e3-4d45-a4f7-559030c97500\") " pod="openstack/keystone-cron-29336221-cqpgd" Oct 11 09:01:00 crc kubenswrapper[5016]: I1011 09:01:00.420974 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fd20c922-e2e3-4d45-a4f7-559030c97500-fernet-keys\") pod \"keystone-cron-29336221-cqpgd\" (UID: \"fd20c922-e2e3-4d45-a4f7-559030c97500\") " pod="openstack/keystone-cron-29336221-cqpgd" Oct 11 09:01:00 crc kubenswrapper[5016]: I1011 09:01:00.428776 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd20c922-e2e3-4d45-a4f7-559030c97500-config-data\") pod \"keystone-cron-29336221-cqpgd\" (UID: \"fd20c922-e2e3-4d45-a4f7-559030c97500\") " pod="openstack/keystone-cron-29336221-cqpgd" Oct 11 09:01:00 crc kubenswrapper[5016]: I1011 09:01:00.438057 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndzkv\" (UniqueName: \"kubernetes.io/projected/fd20c922-e2e3-4d45-a4f7-559030c97500-kube-api-access-ndzkv\") pod \"keystone-cron-29336221-cqpgd\" (UID: \"fd20c922-e2e3-4d45-a4f7-559030c97500\") " pod="openstack/keystone-cron-29336221-cqpgd" Oct 11 09:01:00 crc kubenswrapper[5016]: I1011 09:01:00.549469 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29336221-cqpgd" Oct 11 09:01:01 crc kubenswrapper[5016]: I1011 09:01:01.210133 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29336221-cqpgd"] Oct 11 09:01:01 crc kubenswrapper[5016]: I1011 09:01:01.949318 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29336221-cqpgd" event={"ID":"fd20c922-e2e3-4d45-a4f7-559030c97500","Type":"ContainerStarted","Data":"75f1c1ec7d35cff9fd2a0b6323d5998c843627c91804ccf3fee62d677a5a94ad"} Oct 11 09:01:08 crc kubenswrapper[5016]: I1011 09:01:08.918676 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="ae7b0f07-6360-46c1-8bc1-f89c5ac7a486" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Oct 11 09:01:09 crc kubenswrapper[5016]: I1011 09:01:09.657091 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="14ae562e-2b57-478f-89cd-8330105eacdf" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.159:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:01:10 crc kubenswrapper[5016]: I1011 09:01:10.639982 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-volume-volume1-0" podUID="f928618b-f291-4249-a756-0636b1680e66" containerName="cinder-volume" probeResult="failure" output="Get \"http://10.217.0.240:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:01:10 crc kubenswrapper[5016]: I1011 09:01:10.640024 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-backup-0" podUID="2b53eb06-1432-4059-9705-ffc917af76f7" containerName="cinder-backup" probeResult="failure" output="Get \"http://10.217.0.241:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:01:13 crc kubenswrapper[5016]: I1011 09:01:13.910963 5016 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="ae7b0f07-6360-46c1-8bc1-f89c5ac7a486" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Oct 11 09:01:14 crc kubenswrapper[5016]: I1011 09:01:14.700033 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="14ae562e-2b57-478f-89cd-8330105eacdf" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.159:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:01:15 crc kubenswrapper[5016]: I1011 09:01:15.722971 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-backup-0" podUID="2b53eb06-1432-4059-9705-ffc917af76f7" containerName="cinder-backup" probeResult="failure" output="Get \"http://10.217.0.241:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:01:15 crc kubenswrapper[5016]: I1011 09:01:15.723003 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-volume-volume1-0" podUID="f928618b-f291-4249-a756-0636b1680e66" containerName="cinder-volume" probeResult="failure" output="Get \"http://10.217.0.240:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:01:18 crc kubenswrapper[5016]: I1011 09:01:18.910916 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="ae7b0f07-6360-46c1-8bc1-f89c5ac7a486" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Oct 11 09:01:18 crc kubenswrapper[5016]: I1011 09:01:18.911951 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Oct 11 09:01:18 crc kubenswrapper[5016]: I1011 09:01:18.913522 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"ad0480fdb1e52e675b944bab407e3e4dd0baad19c902d27893361774d4054179"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Oct 11 09:01:18 crc kubenswrapper[5016]: I1011 09:01:18.913729 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ae7b0f07-6360-46c1-8bc1-f89c5ac7a486" containerName="ceilometer-central-agent" containerID="cri-o://ad0480fdb1e52e675b944bab407e3e4dd0baad19c902d27893361774d4054179" gracePeriod=30 Oct 11 09:01:19 crc kubenswrapper[5016]: I1011 09:01:19.741998 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="14ae562e-2b57-478f-89cd-8330105eacdf" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.159:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:01:19 crc kubenswrapper[5016]: I1011 09:01:19.742600 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 11 09:01:19 crc kubenswrapper[5016]: I1011 09:01:19.743855 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"e0835f74268868e54c884e3f7f5f6ec3663ed39fc88e0a94c4f71b6ed100fa6f"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Oct 11 09:01:19 crc kubenswrapper[5016]: I1011 09:01:19.743954 5016 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="14ae562e-2b57-478f-89cd-8330105eacdf" containerName="cinder-scheduler" containerID="cri-o://e0835f74268868e54c884e3f7f5f6ec3663ed39fc88e0a94c4f71b6ed100fa6f" gracePeriod=30 Oct 11 09:01:20 crc kubenswrapper[5016]: I1011 09:01:20.807005 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-backup-0" podUID="2b53eb06-1432-4059-9705-ffc917af76f7" containerName="cinder-backup" probeResult="failure" output="Get \"http://10.217.0.241:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:01:20 crc kubenswrapper[5016]: I1011 09:01:20.807179 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-backup-0" Oct 11 09:01:20 crc kubenswrapper[5016]: I1011 09:01:20.807128 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-volume-volume1-0" podUID="f928618b-f291-4249-a756-0636b1680e66" containerName="cinder-volume" probeResult="failure" output="Get \"http://10.217.0.240:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:01:20 crc kubenswrapper[5016]: I1011 09:01:20.807314 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Oct 11 09:01:20 crc kubenswrapper[5016]: I1011 09:01:20.808581 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-backup" containerStatusID={"Type":"cri-o","ID":"c4659800ec4031246103ddac7b65f784b0f8588c3ff837fef79d267236f56ef7"} pod="openstack/cinder-backup-0" containerMessage="Container cinder-backup failed liveness probe, will be restarted" Oct 11 09:01:20 crc kubenswrapper[5016]: I1011 09:01:20.808707 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-backup-0" podUID="2b53eb06-1432-4059-9705-ffc917af76f7" containerName="cinder-backup" containerID="cri-o://c4659800ec4031246103ddac7b65f784b0f8588c3ff837fef79d267236f56ef7" gracePeriod=30 Oct 11 09:01:20 crc kubenswrapper[5016]: I1011 09:01:20.809932 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-volume" containerStatusID={"Type":"cri-o","ID":"3258572cd236398aefaffaa860219bc466c2f0c8d40852815ec22774bc883d53"} pod="openstack/cinder-volume-volume1-0" containerMessage="Container cinder-volume failed liveness probe, will be restarted" Oct 11 09:01:20 crc kubenswrapper[5016]: I1011 09:01:20.810140 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-volume-volume1-0" podUID="f928618b-f291-4249-a756-0636b1680e66" containerName="cinder-volume" containerID="cri-o://3258572cd236398aefaffaa860219bc466c2f0c8d40852815ec22774bc883d53" gracePeriod=30 Oct 11 09:01:25 crc kubenswrapper[5016]: I1011 09:01:25.929921 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-scheduler-0" podUID="055e76cd-8fd8-437e-a065-6d64398ce2dd" containerName="manila-scheduler" probeResult="failure" output="Get \"http://10.217.0.255:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:01:28 crc kubenswrapper[5016]: I1011 09:01:28.909635 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="ae7b0f07-6360-46c1-8bc1-f89c5ac7a486" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Oct 11 09:01:32 crc 
kubenswrapper[5016]: I1011 09:01:32.123264 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-share-share1-0" podUID="99758dd3-4691-42ed-a3eb-aead6855e030" containerName="manila-share" probeResult="failure" output="Get \"http://10.217.1.0:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:01:39 crc kubenswrapper[5016]: I1011 09:01:39.884869 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="9ace03b9-7f45-49ca-ac24-3401d9820d71" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 09:01:39 crc kubenswrapper[5016]: I1011 09:01:39.884888 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="9ace03b9-7f45-49ca-ac24-3401d9820d71" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 09:01:39 crc kubenswrapper[5016]: I1011 09:01:39.884921 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/nova-api-0" podUID="9ace03b9-7f45-49ca-ac24-3401d9820d71" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 09:01:39 crc kubenswrapper[5016]: I1011 09:01:39.884870 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/nova-api-0" podUID="9ace03b9-7f45-49ca-ac24-3401d9820d71" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 09:01:45 crc kubenswrapper[5016]: I1011 09:01:45.973008 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-scheduler-0" podUID="055e76cd-8fd8-437e-a065-6d64398ce2dd" containerName="manila-scheduler" probeResult="failure" output="Get \"http://10.217.0.255:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:01:52 crc kubenswrapper[5016]: I1011 09:01:52.166930 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-share-share1-0" podUID="99758dd3-4691-42ed-a3eb-aead6855e030" containerName="manila-share" probeResult="failure" output="Get \"http://10.217.1.0:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:01:57 crc kubenswrapper[5016]: I1011 09:01:57.445887 5016 generic.go:334] "Generic (PLEG): container finished" podID="14ae562e-2b57-478f-89cd-8330105eacdf" containerID="e0835f74268868e54c884e3f7f5f6ec3663ed39fc88e0a94c4f71b6ed100fa6f" exitCode=-1 Oct 11 09:01:57 crc kubenswrapper[5016]: I1011 09:01:57.446029 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"14ae562e-2b57-478f-89cd-8330105eacdf","Type":"ContainerDied","Data":"e0835f74268868e54c884e3f7f5f6ec3663ed39fc88e0a94c4f71b6ed100fa6f"} Oct 11 09:01:58 crc kubenswrapper[5016]: I1011 09:01:58.909606 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="ae7b0f07-6360-46c1-8bc1-f89c5ac7a486" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Oct 11 09:02:02 crc kubenswrapper[5016]: I1011 09:02:02.212615 5016 generic.go:334] "Generic (PLEG): container finished" 
podID="f928618b-f291-4249-a756-0636b1680e66" containerID="3258572cd236398aefaffaa860219bc466c2f0c8d40852815ec22774bc883d53" exitCode=-1 Oct 11 09:02:02 crc kubenswrapper[5016]: I1011 09:02:02.212722 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"f928618b-f291-4249-a756-0636b1680e66","Type":"ContainerDied","Data":"3258572cd236398aefaffaa860219bc466c2f0c8d40852815ec22774bc883d53"} Oct 11 09:02:05 crc kubenswrapper[5016]: I1011 09:02:05.963027 5016 generic.go:334] "Generic (PLEG): container finished" podID="2b53eb06-1432-4059-9705-ffc917af76f7" containerID="c4659800ec4031246103ddac7b65f784b0f8588c3ff837fef79d267236f56ef7" exitCode=-1 Oct 11 09:02:05 crc kubenswrapper[5016]: I1011 09:02:05.963340 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"2b53eb06-1432-4059-9705-ffc917af76f7","Type":"ContainerDied","Data":"c4659800ec4031246103ddac7b65f784b0f8588c3ff837fef79d267236f56ef7"} Oct 11 09:02:06 crc kubenswrapper[5016]: I1011 09:02:06.015251 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-scheduler-0" podUID="055e76cd-8fd8-437e-a065-6d64398ce2dd" containerName="manila-scheduler" probeResult="failure" output="Get \"http://10.217.0.255:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:02:06 crc kubenswrapper[5016]: I1011 09:02:06.015402 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-scheduler-0" Oct 11 09:02:06 crc kubenswrapper[5016]: I1011 09:02:06.017305 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manila-scheduler" containerStatusID={"Type":"cri-o","ID":"27f75e67fa6c886339366f8748447321e05d6b40b8efddcbd37ad5e94fc6ed29"} pod="openstack/manila-scheduler-0" containerMessage="Container manila-scheduler failed liveness probe, will be restarted" Oct 11 09:02:06 crc kubenswrapper[5016]: I1011 09:02:06.017431 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="055e76cd-8fd8-437e-a065-6d64398ce2dd" containerName="manila-scheduler" containerID="cri-o://27f75e67fa6c886339366f8748447321e05d6b40b8efddcbd37ad5e94fc6ed29" gracePeriod=30 Oct 11 09:02:07 crc kubenswrapper[5016]: I1011 09:02:07.123763 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 09:02:07 crc kubenswrapper[5016]: I1011 09:02:07.124285 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 09:02:09 crc kubenswrapper[5016]: I1011 09:02:09.916982 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="9ace03b9-7f45-49ca-ac24-3401d9820d71" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 09:02:09 crc kubenswrapper[5016]: I1011 09:02:09.927087 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/nova-api-0" 
podUID="9ace03b9-7f45-49ca-ac24-3401d9820d71" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 09:02:09 crc kubenswrapper[5016]: I1011 09:02:09.927209 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/nova-api-0" podUID="9ace03b9-7f45-49ca-ac24-3401d9820d71" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 09:02:09 crc kubenswrapper[5016]: I1011 09:02:09.927512 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="9ace03b9-7f45-49ca-ac24-3401d9820d71" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:02:12 crc kubenswrapper[5016]: I1011 09:02:12.209974 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-share-share1-0" podUID="99758dd3-4691-42ed-a3eb-aead6855e030" containerName="manila-share" probeResult="failure" output="Get \"http://10.217.1.0:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:02:12 crc kubenswrapper[5016]: I1011 09:02:12.210513 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-share-share1-0" Oct 11 09:02:12 crc kubenswrapper[5016]: I1011 09:02:12.211978 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manila-share" containerStatusID={"Type":"cri-o","ID":"c5cd074a3f47182f8e0f1c7bf1ef63bf6a3b89bcfc78a53127553af2be4f1b42"} pod="openstack/manila-share-share1-0" containerMessage="Container manila-share failed liveness probe, will be restarted" Oct 11 09:02:12 crc kubenswrapper[5016]: I1011 09:02:12.212092 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="99758dd3-4691-42ed-a3eb-aead6855e030" containerName="manila-share" containerID="cri-o://c5cd074a3f47182f8e0f1c7bf1ef63bf6a3b89bcfc78a53127553af2be4f1b42" gracePeriod=30 Oct 11 09:02:23 crc kubenswrapper[5016]: I1011 09:02:23.870683 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="ae7b0f07-6360-46c1-8bc1-f89c5ac7a486" containerName="ceilometer-notification-agent" probeResult="failure" output=< Oct 11 09:02:23 crc kubenswrapper[5016]: Unkown error: Expecting value: line 1 column 1 (char 0) Oct 11 09:02:23 crc kubenswrapper[5016]: > Oct 11 09:02:23 crc kubenswrapper[5016]: I1011 09:02:23.871162 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Oct 11 09:02:27 crc kubenswrapper[5016]: I1011 09:02:27.945207 5016 generic.go:334] "Generic (PLEG): container finished" podID="99758dd3-4691-42ed-a3eb-aead6855e030" containerID="c5cd074a3f47182f8e0f1c7bf1ef63bf6a3b89bcfc78a53127553af2be4f1b42" exitCode=-1 Oct 11 09:02:27 crc kubenswrapper[5016]: I1011 09:02:27.945362 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"99758dd3-4691-42ed-a3eb-aead6855e030","Type":"ContainerDied","Data":"c5cd074a3f47182f8e0f1c7bf1ef63bf6a3b89bcfc78a53127553af2be4f1b42"} Oct 11 09:02:32 crc kubenswrapper[5016]: I1011 09:02:32.699841 5016 generic.go:334] "Generic (PLEG): container finished" podID="ae7b0f07-6360-46c1-8bc1-f89c5ac7a486" 
containerID="ad0480fdb1e52e675b944bab407e3e4dd0baad19c902d27893361774d4054179" exitCode=-1 Oct 11 09:02:32 crc kubenswrapper[5016]: I1011 09:02:32.699996 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486","Type":"ContainerDied","Data":"ad0480fdb1e52e675b944bab407e3e4dd0baad19c902d27893361774d4054179"} Oct 11 09:02:35 crc kubenswrapper[5016]: I1011 09:02:35.741443 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29336221-cqpgd" event={"ID":"fd20c922-e2e3-4d45-a4f7-559030c97500","Type":"ContainerStarted","Data":"f03aeb5b79c0653fd562b7e6b7214a0222f22d4f58ee5e7aab15d0808d3d88a1"} Oct 11 09:02:36 crc kubenswrapper[5016]: I1011 09:02:36.290269 5016 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 09:02:37 crc kubenswrapper[5016]: I1011 09:02:37.123321 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 09:02:37 crc kubenswrapper[5016]: I1011 09:02:37.123411 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 09:02:38 crc kubenswrapper[5016]: I1011 09:02:38.792193 5016 generic.go:334] "Generic (PLEG): container finished" podID="055e76cd-8fd8-437e-a065-6d64398ce2dd" containerID="27f75e67fa6c886339366f8748447321e05d6b40b8efddcbd37ad5e94fc6ed29" exitCode=137 Oct 11 09:02:38 crc kubenswrapper[5016]: I1011 09:02:38.792345 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"055e76cd-8fd8-437e-a065-6d64398ce2dd","Type":"ContainerDied","Data":"27f75e67fa6c886339366f8748447321e05d6b40b8efddcbd37ad5e94fc6ed29"} Oct 11 09:02:38 crc kubenswrapper[5016]: I1011 09:02:38.824019 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29336221-cqpgd" podStartSLOduration=98.823973228 podStartE2EDuration="1m38.823973228s" podCreationTimestamp="2025-10-11 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 09:02:38.809176315 +0000 UTC m=+4946.709632281" watchObservedRunningTime="2025-10-11 09:02:38.823973228 +0000 UTC m=+4946.724429214" Oct 11 09:02:39 crc kubenswrapper[5016]: I1011 09:02:39.925988 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="9ace03b9-7f45-49ca-ac24-3401d9820d71" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 09:02:39 crc kubenswrapper[5016]: I1011 09:02:39.927695 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 11 09:02:39 crc kubenswrapper[5016]: I1011 09:02:39.941878 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/nova-api-0" podUID="9ace03b9-7f45-49ca-ac24-3401d9820d71" containerName="nova-api-log" probeResult="failure" output="Get 
\"https://10.217.0.191:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 09:02:39 crc kubenswrapper[5016]: I1011 09:02:39.941952 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="9ace03b9-7f45-49ca-ac24-3401d9820d71" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 09:02:39 crc kubenswrapper[5016]: I1011 09:02:39.942014 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/nova-api-0" Oct 11 09:02:39 crc kubenswrapper[5016]: I1011 09:02:39.942299 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 11 09:02:39 crc kubenswrapper[5016]: I1011 09:02:39.941878 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/nova-api-0" podUID="9ace03b9-7f45-49ca-ac24-3401d9820d71" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 09:02:39 crc kubenswrapper[5016]: I1011 09:02:39.942443 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/nova-api-0" Oct 11 09:02:40 crc kubenswrapper[5016]: I1011 09:02:40.818330 5016 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 11 09:02:40 crc kubenswrapper[5016]: I1011 09:02:40.819783 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="nova-api-log" containerStatusID={"Type":"cri-o","ID":"e65bd739c8a20c7e34a2b7af5b0cf26836ccf13d4644d3b8b52e3ce2485521b5"} pod="openstack/nova-api-0" containerMessage="Container nova-api-log failed liveness probe, will be restarted" Oct 11 09:02:40 crc kubenswrapper[5016]: I1011 09:02:40.819842 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="nova-api-api" containerStatusID={"Type":"cri-o","ID":"e43052fe9a1ee892ca974c05057118c50daee392f5ba6342d2c9d467269349a1"} pod="openstack/nova-api-0" containerMessage="Container nova-api-api failed liveness probe, will be restarted" Oct 11 09:02:40 crc kubenswrapper[5016]: I1011 09:02:40.819891 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9ace03b9-7f45-49ca-ac24-3401d9820d71" containerName="nova-api-log" containerID="cri-o://e65bd739c8a20c7e34a2b7af5b0cf26836ccf13d4644d3b8b52e3ce2485521b5" gracePeriod=30 Oct 11 09:02:41 crc kubenswrapper[5016]: I1011 09:02:41.830160 5016 generic.go:334] "Generic (PLEG): container finished" podID="9ace03b9-7f45-49ca-ac24-3401d9820d71" containerID="e65bd739c8a20c7e34a2b7af5b0cf26836ccf13d4644d3b8b52e3ce2485521b5" exitCode=143 Oct 11 09:02:41 crc kubenswrapper[5016]: I1011 09:02:41.830296 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ace03b9-7f45-49ca-ac24-3401d9820d71","Type":"ContainerDied","Data":"e65bd739c8a20c7e34a2b7af5b0cf26836ccf13d4644d3b8b52e3ce2485521b5"} Oct 11 09:02:42 crc kubenswrapper[5016]: I1011 09:02:42.049461 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9ace03b9-7f45-49ca-ac24-3401d9820d71" containerName="nova-api-api" containerID="cri-o://e43052fe9a1ee892ca974c05057118c50daee392f5ba6342d2c9d467269349a1" gracePeriod=30 Oct 11 09:02:42 crc kubenswrapper[5016]: I1011 09:02:42.055403 5016 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/nova-api-0" podUID="9ace03b9-7f45-49ca-ac24-3401d9820d71" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": EOF" Oct 11 09:02:42 crc kubenswrapper[5016]: I1011 09:02:42.055465 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="9ace03b9-7f45-49ca-ac24-3401d9820d71" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": EOF" Oct 11 09:02:42 crc kubenswrapper[5016]: I1011 09:02:42.847809 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"f928618b-f291-4249-a756-0636b1680e66","Type":"ContainerStarted","Data":"2a2fc459d64ae0e8c1f06b107c100b90d0eb49d1878c69157ededef8d1ff3a30"} Oct 11 09:02:42 crc kubenswrapper[5016]: I1011 09:02:42.851600 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"99758dd3-4691-42ed-a3eb-aead6855e030","Type":"ContainerStarted","Data":"d0b5a92a471b985e37ccf614ab820aba9bb8e33bf8f0b1bed0f1ba49758a09ec"} Oct 11 09:02:43 crc kubenswrapper[5016]: I1011 09:02:43.865476 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486","Type":"ContainerStarted","Data":"b06629c367034db3ad86ea4ba8165a8c668819752c1d2056efd2d67a90e5f722"} Oct 11 09:02:43 crc kubenswrapper[5016]: I1011 09:02:43.866265 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-notification-agent" containerStatusID={"Type":"cri-o","ID":"289094e484ab6e5ae816cf5b033624d61d668bdc74be62743f168916712ab17f"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-notification-agent failed liveness probe, will be restarted" Oct 11 09:02:43 crc kubenswrapper[5016]: I1011 09:02:43.866399 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ae7b0f07-6360-46c1-8bc1-f89c5ac7a486" containerName="ceilometer-notification-agent" containerID="cri-o://289094e484ab6e5ae816cf5b033624d61d668bdc74be62743f168916712ab17f" gracePeriod=30 Oct 11 09:02:43 crc kubenswrapper[5016]: I1011 09:02:43.874374 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"055e76cd-8fd8-437e-a065-6d64398ce2dd","Type":"ContainerStarted","Data":"110de56974ad447a99533cb5f77bd536d8265eb916e7b9b32b526c55ef885080"} Oct 11 09:02:43 crc kubenswrapper[5016]: I1011 09:02:43.879586 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"2b53eb06-1432-4059-9705-ffc917af76f7","Type":"ContainerStarted","Data":"b6288bc4e11423cae9a1cb73ac4e8fb35f3174d26b9ce05340df886e2fc09afa"} Oct 11 09:02:43 crc kubenswrapper[5016]: I1011 09:02:43.886097 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"14ae562e-2b57-478f-89cd-8330105eacdf","Type":"ContainerStarted","Data":"289fd3d31ca28ec6ffde3a0e4f27166545251722d4b8ef133c2b96ce3b1f801d"} Oct 11 09:02:45 crc kubenswrapper[5016]: I1011 09:02:45.612588 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 11 09:02:45 crc kubenswrapper[5016]: I1011 09:02:45.888636 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Oct 11 09:02:47 crc kubenswrapper[5016]: I1011 09:02:47.556805 5016 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Oct 11 09:02:47 crc kubenswrapper[5016]: I1011 09:02:47.565474 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Oct 11 09:02:47 crc kubenswrapper[5016]: I1011 09:02:47.567262 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Oct 11 09:02:47 crc kubenswrapper[5016]: I1011 09:02:47.586539 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Oct 11 09:02:48 crc kubenswrapper[5016]: I1011 09:02:48.944488 5016 generic.go:334] "Generic (PLEG): container finished" podID="ae7b0f07-6360-46c1-8bc1-f89c5ac7a486" containerID="289094e484ab6e5ae816cf5b033624d61d668bdc74be62743f168916712ab17f" exitCode=0 Oct 11 09:02:48 crc kubenswrapper[5016]: I1011 09:02:48.944597 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486","Type":"ContainerDied","Data":"289094e484ab6e5ae816cf5b033624d61d668bdc74be62743f168916712ab17f"} Oct 11 09:02:49 crc kubenswrapper[5016]: I1011 09:02:49.960115 5016 generic.go:334] "Generic (PLEG): container finished" podID="fd20c922-e2e3-4d45-a4f7-559030c97500" containerID="f03aeb5b79c0653fd562b7e6b7214a0222f22d4f58ee5e7aab15d0808d3d88a1" exitCode=0 Oct 11 09:02:49 crc kubenswrapper[5016]: I1011 09:02:49.960192 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29336221-cqpgd" event={"ID":"fd20c922-e2e3-4d45-a4f7-559030c97500","Type":"ContainerDied","Data":"f03aeb5b79c0653fd562b7e6b7214a0222f22d4f58ee5e7aab15d0808d3d88a1"} Oct 11 09:02:50 crc kubenswrapper[5016]: I1011 09:02:50.621381 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 11 09:02:51 crc kubenswrapper[5016]: I1011 09:02:51.474258 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29336221-cqpgd" Oct 11 09:02:51 crc kubenswrapper[5016]: I1011 09:02:51.513375 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="9ace03b9-7f45-49ca-ac24-3401d9820d71" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": read tcp 10.217.0.2:46330->10.217.0.191:8774: read: connection reset by peer" Oct 11 09:02:51 crc kubenswrapper[5016]: I1011 09:02:51.513893 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="9ace03b9-7f45-49ca-ac24-3401d9820d71" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": read tcp 10.217.0.2:46322->10.217.0.191:8774: read: connection reset by peer" Oct 11 09:02:51 crc kubenswrapper[5016]: I1011 09:02:51.597153 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fd20c922-e2e3-4d45-a4f7-559030c97500-fernet-keys\") pod \"fd20c922-e2e3-4d45-a4f7-559030c97500\" (UID: \"fd20c922-e2e3-4d45-a4f7-559030c97500\") " Oct 11 09:02:51 crc kubenswrapper[5016]: I1011 09:02:51.597288 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd20c922-e2e3-4d45-a4f7-559030c97500-config-data\") pod \"fd20c922-e2e3-4d45-a4f7-559030c97500\" (UID: \"fd20c922-e2e3-4d45-a4f7-559030c97500\") " Oct 11 09:02:51 crc kubenswrapper[5016]: I1011 09:02:51.597354 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd20c922-e2e3-4d45-a4f7-559030c97500-combined-ca-bundle\") pod \"fd20c922-e2e3-4d45-a4f7-559030c97500\" (UID: \"fd20c922-e2e3-4d45-a4f7-559030c97500\") " Oct 11 09:02:51 crc kubenswrapper[5016]: I1011 09:02:51.597407 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndzkv\" (UniqueName: \"kubernetes.io/projected/fd20c922-e2e3-4d45-a4f7-559030c97500-kube-api-access-ndzkv\") pod \"fd20c922-e2e3-4d45-a4f7-559030c97500\" (UID: \"fd20c922-e2e3-4d45-a4f7-559030c97500\") " Oct 11 09:02:51 crc kubenswrapper[5016]: I1011 09:02:51.608994 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd20c922-e2e3-4d45-a4f7-559030c97500-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fd20c922-e2e3-4d45-a4f7-559030c97500" (UID: "fd20c922-e2e3-4d45-a4f7-559030c97500"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 09:02:51 crc kubenswrapper[5016]: I1011 09:02:51.609182 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd20c922-e2e3-4d45-a4f7-559030c97500-kube-api-access-ndzkv" (OuterVolumeSpecName: "kube-api-access-ndzkv") pod "fd20c922-e2e3-4d45-a4f7-559030c97500" (UID: "fd20c922-e2e3-4d45-a4f7-559030c97500"). InnerVolumeSpecName "kube-api-access-ndzkv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 09:02:51 crc kubenswrapper[5016]: I1011 09:02:51.702762 5016 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fd20c922-e2e3-4d45-a4f7-559030c97500-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 11 09:02:51 crc kubenswrapper[5016]: I1011 09:02:51.707787 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndzkv\" (UniqueName: \"kubernetes.io/projected/fd20c922-e2e3-4d45-a4f7-559030c97500-kube-api-access-ndzkv\") on node \"crc\" DevicePath \"\"" Oct 11 09:02:51 crc kubenswrapper[5016]: I1011 09:02:51.988530 5016 generic.go:334] "Generic (PLEG): container finished" podID="9ace03b9-7f45-49ca-ac24-3401d9820d71" containerID="e43052fe9a1ee892ca974c05057118c50daee392f5ba6342d2c9d467269349a1" exitCode=0 Oct 11 09:02:51 crc kubenswrapper[5016]: I1011 09:02:51.988649 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ace03b9-7f45-49ca-ac24-3401d9820d71","Type":"ContainerDied","Data":"e43052fe9a1ee892ca974c05057118c50daee392f5ba6342d2c9d467269349a1"} Oct 11 09:02:51 crc kubenswrapper[5016]: I1011 09:02:51.991135 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29336221-cqpgd" event={"ID":"fd20c922-e2e3-4d45-a4f7-559030c97500","Type":"ContainerDied","Data":"75f1c1ec7d35cff9fd2a0b6323d5998c843627c91804ccf3fee62d677a5a94ad"} Oct 11 09:02:51 crc kubenswrapper[5016]: I1011 09:02:51.991196 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75f1c1ec7d35cff9fd2a0b6323d5998c843627c91804ccf3fee62d677a5a94ad" Oct 11 09:02:51 crc kubenswrapper[5016]: I1011 09:02:51.991225 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29336221-cqpgd" Oct 11 09:02:52 crc kubenswrapper[5016]: I1011 09:02:52.089261 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Oct 11 09:02:52 crc kubenswrapper[5016]: I1011 09:02:52.333144 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd20c922-e2e3-4d45-a4f7-559030c97500-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd20c922-e2e3-4d45-a4f7-559030c97500" (UID: "fd20c922-e2e3-4d45-a4f7-559030c97500"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 09:02:52 crc kubenswrapper[5016]: I1011 09:02:52.337680 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd20c922-e2e3-4d45-a4f7-559030c97500-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 09:02:52 crc kubenswrapper[5016]: I1011 09:02:52.381140 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd20c922-e2e3-4d45-a4f7-559030c97500-config-data" (OuterVolumeSpecName: "config-data") pod "fd20c922-e2e3-4d45-a4f7-559030c97500" (UID: "fd20c922-e2e3-4d45-a4f7-559030c97500"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 09:02:52 crc kubenswrapper[5016]: I1011 09:02:52.440478 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd20c922-e2e3-4d45-a4f7-559030c97500-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 09:02:53 crc kubenswrapper[5016]: I1011 09:02:53.688030 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Oct 11 09:02:57 crc kubenswrapper[5016]: I1011 09:02:57.080485 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ace03b9-7f45-49ca-ac24-3401d9820d71","Type":"ContainerStarted","Data":"133c19c229f4e97e8849ad983768f7fb390c55725e010a9b67b160d53167e725"} Oct 11 09:02:57 crc kubenswrapper[5016]: I1011 09:02:57.542149 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Oct 11 09:03:00 crc kubenswrapper[5016]: I1011 09:03:00.119767 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9ace03b9-7f45-49ca-ac24-3401d9820d71","Type":"ContainerStarted","Data":"ab64ccd400aee220d742c63524ad8fb6e7365a2c619c35ce38a0c619a21fe8ad"} Oct 11 09:03:00 crc kubenswrapper[5016]: I1011 09:03:00.124691 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae7b0f07-6360-46c1-8bc1-f89c5ac7a486","Type":"ContainerStarted","Data":"9d6fdac911db26f9d57148e94cc72dca04a05f7e474603c510ccb7aeb78e9aa9"} Oct 11 09:03:02 crc kubenswrapper[5016]: I1011 09:03:02.994696 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z5svn"] Oct 11 09:03:02 crc kubenswrapper[5016]: E1011 09:03:02.996050 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd20c922-e2e3-4d45-a4f7-559030c97500" containerName="keystone-cron" Oct 11 09:03:02 crc kubenswrapper[5016]: I1011 09:03:02.996455 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd20c922-e2e3-4d45-a4f7-559030c97500" containerName="keystone-cron" Oct 11 09:03:02 crc kubenswrapper[5016]: I1011 09:03:02.996968 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd20c922-e2e3-4d45-a4f7-559030c97500" containerName="keystone-cron" Oct 11 09:03:02 crc kubenswrapper[5016]: I1011 09:03:02.999820 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z5svn" Oct 11 09:03:03 crc kubenswrapper[5016]: I1011 09:03:03.018075 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5svn"] Oct 11 09:03:03 crc kubenswrapper[5016]: I1011 09:03:03.049797 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frdtb\" (UniqueName: \"kubernetes.io/projected/80cdf98b-df44-4cf9-ba78-1daf0e304527-kube-api-access-frdtb\") pod \"redhat-marketplace-z5svn\" (UID: \"80cdf98b-df44-4cf9-ba78-1daf0e304527\") " pod="openshift-marketplace/redhat-marketplace-z5svn" Oct 11 09:03:03 crc kubenswrapper[5016]: I1011 09:03:03.049921 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cdf98b-df44-4cf9-ba78-1daf0e304527-catalog-content\") pod \"redhat-marketplace-z5svn\" (UID: \"80cdf98b-df44-4cf9-ba78-1daf0e304527\") " pod="openshift-marketplace/redhat-marketplace-z5svn" Oct 11 09:03:03 crc kubenswrapper[5016]: I1011 09:03:03.049979 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80cdf98b-df44-4cf9-ba78-1daf0e304527-utilities\") pod \"redhat-marketplace-z5svn\" (UID: \"80cdf98b-df44-4cf9-ba78-1daf0e304527\") " pod="openshift-marketplace/redhat-marketplace-z5svn" Oct 11 09:03:03 crc kubenswrapper[5016]: I1011 09:03:03.151827 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cdf98b-df44-4cf9-ba78-1daf0e304527-catalog-content\") pod \"redhat-marketplace-z5svn\" (UID: \"80cdf98b-df44-4cf9-ba78-1daf0e304527\") " pod="openshift-marketplace/redhat-marketplace-z5svn" Oct 11 09:03:03 crc kubenswrapper[5016]: I1011 09:03:03.151911 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80cdf98b-df44-4cf9-ba78-1daf0e304527-utilities\") pod \"redhat-marketplace-z5svn\" (UID: \"80cdf98b-df44-4cf9-ba78-1daf0e304527\") " pod="openshift-marketplace/redhat-marketplace-z5svn" Oct 11 09:03:03 crc kubenswrapper[5016]: I1011 09:03:03.152142 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frdtb\" (UniqueName: \"kubernetes.io/projected/80cdf98b-df44-4cf9-ba78-1daf0e304527-kube-api-access-frdtb\") pod \"redhat-marketplace-z5svn\" (UID: \"80cdf98b-df44-4cf9-ba78-1daf0e304527\") " pod="openshift-marketplace/redhat-marketplace-z5svn" Oct 11 09:03:03 crc kubenswrapper[5016]: I1011 09:03:03.152579 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cdf98b-df44-4cf9-ba78-1daf0e304527-catalog-content\") pod \"redhat-marketplace-z5svn\" (UID: \"80cdf98b-df44-4cf9-ba78-1daf0e304527\") " pod="openshift-marketplace/redhat-marketplace-z5svn" Oct 11 09:03:03 crc kubenswrapper[5016]: I1011 09:03:03.153746 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80cdf98b-df44-4cf9-ba78-1daf0e304527-utilities\") pod \"redhat-marketplace-z5svn\" (UID: \"80cdf98b-df44-4cf9-ba78-1daf0e304527\") " pod="openshift-marketplace/redhat-marketplace-z5svn" Oct 11 09:03:03 crc kubenswrapper[5016]: I1011 09:03:03.192109 5016 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-frdtb\" (UniqueName: \"kubernetes.io/projected/80cdf98b-df44-4cf9-ba78-1daf0e304527-kube-api-access-frdtb\") pod \"redhat-marketplace-z5svn\" (UID: \"80cdf98b-df44-4cf9-ba78-1daf0e304527\") " pod="openshift-marketplace/redhat-marketplace-z5svn" Oct 11 09:03:03 crc kubenswrapper[5016]: I1011 09:03:03.348461 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z5svn" Oct 11 09:03:03 crc kubenswrapper[5016]: W1011 09:03:03.883711 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80cdf98b_df44_4cf9_ba78_1daf0e304527.slice/crio-ca68e9a0e456251e86d973a176d3ea7ed868cd001b8dee73f141f1f8f6b8d607 WatchSource:0}: Error finding container ca68e9a0e456251e86d973a176d3ea7ed868cd001b8dee73f141f1f8f6b8d607: Status 404 returned error can't find the container with id ca68e9a0e456251e86d973a176d3ea7ed868cd001b8dee73f141f1f8f6b8d607 Oct 11 09:03:03 crc kubenswrapper[5016]: I1011 09:03:03.887049 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5svn"] Oct 11 09:03:04 crc kubenswrapper[5016]: I1011 09:03:04.190347 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5svn" event={"ID":"80cdf98b-df44-4cf9-ba78-1daf0e304527","Type":"ContainerStarted","Data":"ca68e9a0e456251e86d973a176d3ea7ed868cd001b8dee73f141f1f8f6b8d607"} Oct 11 09:03:05 crc kubenswrapper[5016]: I1011 09:03:05.207452 5016 generic.go:334] "Generic (PLEG): container finished" podID="80cdf98b-df44-4cf9-ba78-1daf0e304527" containerID="3c014782cda7d1cbc8c077515a59df914b022e92a1001a629a3c1bc43e45384e" exitCode=0 Oct 11 09:03:05 crc kubenswrapper[5016]: I1011 09:03:05.207594 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5svn" event={"ID":"80cdf98b-df44-4cf9-ba78-1daf0e304527","Type":"ContainerDied","Data":"3c014782cda7d1cbc8c077515a59df914b022e92a1001a629a3c1bc43e45384e"} Oct 11 09:03:07 crc kubenswrapper[5016]: I1011 09:03:07.122677 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 09:03:07 crc kubenswrapper[5016]: I1011 09:03:07.123196 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 09:03:07 crc kubenswrapper[5016]: I1011 09:03:07.123263 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 09:03:07 crc kubenswrapper[5016]: I1011 09:03:07.124184 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955"} pod="openshift-machine-config-operator/machine-config-daemon-49bvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 11 09:03:07 crc kubenswrapper[5016]: I1011 09:03:07.124287 5016 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" containerID="cri-o://dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955" gracePeriod=600 Oct 11 09:03:08 crc kubenswrapper[5016]: E1011 09:03:08.168989 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:03:08 crc kubenswrapper[5016]: I1011 09:03:08.263463 5016 generic.go:334] "Generic (PLEG): container finished" podID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerID="dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955" exitCode=0 Oct 11 09:03:08 crc kubenswrapper[5016]: I1011 09:03:08.263566 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerDied","Data":"dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955"} Oct 11 09:03:08 crc kubenswrapper[5016]: I1011 09:03:08.263697 5016 scope.go:117] "RemoveContainer" containerID="8a5d6c46f82632e3a479f2652b8d0209efc03409024ba6cc0d55fd1a401195d3" Oct 11 09:03:08 crc kubenswrapper[5016]: I1011 09:03:08.264916 5016 scope.go:117] "RemoveContainer" containerID="dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955" Oct 11 09:03:08 crc kubenswrapper[5016]: E1011 09:03:08.265485 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:03:09 crc kubenswrapper[5016]: I1011 09:03:09.863111 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 11 09:03:09 crc kubenswrapper[5016]: I1011 09:03:09.863901 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 11 09:03:09 crc kubenswrapper[5016]: I1011 09:03:09.863940 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 11 09:03:09 crc kubenswrapper[5016]: I1011 09:03:09.863965 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 11 09:03:10 crc kubenswrapper[5016]: I1011 09:03:10.883943 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9ace03b9-7f45-49ca-ac24-3401d9820d71" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 09:03:10 crc kubenswrapper[5016]: I1011 09:03:10.884013 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9ace03b9-7f45-49ca-ac24-3401d9820d71" containerName="nova-api-api" probeResult="failure" output="Get 
\"https://10.217.0.191:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 09:03:13 crc kubenswrapper[5016]: I1011 09:03:13.346996 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5svn" event={"ID":"80cdf98b-df44-4cf9-ba78-1daf0e304527","Type":"ContainerStarted","Data":"6c64f86b8e0bdc749fabd3d18c0f3e472206b24529d33e13b5fffd6fa66e4660"} Oct 11 09:03:14 crc kubenswrapper[5016]: I1011 09:03:14.364207 5016 generic.go:334] "Generic (PLEG): container finished" podID="80cdf98b-df44-4cf9-ba78-1daf0e304527" containerID="6c64f86b8e0bdc749fabd3d18c0f3e472206b24529d33e13b5fffd6fa66e4660" exitCode=0 Oct 11 09:03:14 crc kubenswrapper[5016]: I1011 09:03:14.364353 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5svn" event={"ID":"80cdf98b-df44-4cf9-ba78-1daf0e304527","Type":"ContainerDied","Data":"6c64f86b8e0bdc749fabd3d18c0f3e472206b24529d33e13b5fffd6fa66e4660"} Oct 11 09:03:19 crc kubenswrapper[5016]: I1011 09:03:19.868768 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 11 09:03:19 crc kubenswrapper[5016]: I1011 09:03:19.871813 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 11 09:03:19 crc kubenswrapper[5016]: I1011 09:03:19.911225 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 11 09:03:19 crc kubenswrapper[5016]: I1011 09:03:19.918178 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 11 09:03:21 crc kubenswrapper[5016]: I1011 09:03:21.133919 5016 scope.go:117] "RemoveContainer" containerID="dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955" Oct 11 09:03:21 crc kubenswrapper[5016]: E1011 09:03:21.134871 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:03:30 crc kubenswrapper[5016]: I1011 09:03:30.578827 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5svn" event={"ID":"80cdf98b-df44-4cf9-ba78-1daf0e304527","Type":"ContainerStarted","Data":"65b3c8a961a2683c601fa043bd4972f34842257ba228f929be9061ad869ffaf0"} Oct 11 09:03:31 crc kubenswrapper[5016]: I1011 09:03:31.621780 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z5svn" podStartSLOduration=7.886178783 podStartE2EDuration="29.621755727s" podCreationTimestamp="2025-10-11 09:03:02 +0000 UTC" firstStartedPulling="2025-10-11 09:03:05.211187383 +0000 UTC m=+4973.111643359" lastFinishedPulling="2025-10-11 09:03:26.946764357 +0000 UTC m=+4994.847220303" observedRunningTime="2025-10-11 09:03:31.612688297 +0000 UTC m=+4999.513144263" watchObservedRunningTime="2025-10-11 09:03:31.621755727 +0000 UTC m=+4999.522211683" Oct 11 09:03:33 crc kubenswrapper[5016]: I1011 09:03:33.348627 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z5svn" Oct 11 09:03:33 crc kubenswrapper[5016]: I1011 09:03:33.349608 5016 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z5svn" Oct 11 09:03:33 crc kubenswrapper[5016]: I1011 09:03:33.425152 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z5svn" Oct 11 09:03:35 crc kubenswrapper[5016]: I1011 09:03:35.134718 5016 scope.go:117] "RemoveContainer" containerID="dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955" Oct 11 09:03:35 crc kubenswrapper[5016]: E1011 09:03:35.136237 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:03:39 crc kubenswrapper[5016]: I1011 09:03:39.694081 5016 patch_prober.go:28] interesting pod/controller-manager-694d48b6db-7db5n container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 11 09:03:39 crc kubenswrapper[5016]: I1011 09:03:39.695133 5016 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-694d48b6db-7db5n" podUID="154a3a8e-2384-4300-88ff-7f04ed9d2f25" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 11 09:03:44 crc kubenswrapper[5016]: I1011 09:03:44.283635 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z5svn" Oct 11 09:03:44 crc kubenswrapper[5016]: I1011 09:03:44.350853 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5svn"] Oct 11 09:03:44 crc kubenswrapper[5016]: I1011 09:03:44.793527 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z5svn" podUID="80cdf98b-df44-4cf9-ba78-1daf0e304527" containerName="registry-server" containerID="cri-o://65b3c8a961a2683c601fa043bd4972f34842257ba228f929be9061ad869ffaf0" gracePeriod=2 Oct 11 09:03:48 crc kubenswrapper[5016]: I1011 09:03:48.135146 5016 scope.go:117] "RemoveContainer" containerID="dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955" Oct 11 09:03:48 crc kubenswrapper[5016]: E1011 09:03:48.136559 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:03:49 crc kubenswrapper[5016]: I1011 09:03:49.640070 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-volume-volume1-0" podUID="f928618b-f291-4249-a756-0636b1680e66" containerName="cinder-volume" probeResult="failure" output="Get \"http://10.217.0.240:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Oct 11 09:03:49 crc kubenswrapper[5016]: I1011 09:03:49.640110 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-backup-0" podUID="2b53eb06-1432-4059-9705-ffc917af76f7" containerName="cinder-backup" probeResult="failure" output="Get \"http://10.217.0.241:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:03:51 crc kubenswrapper[5016]: I1011 09:03:51.655145 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="14ae562e-2b57-478f-89cd-8330105eacdf" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.159:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:03:53 crc kubenswrapper[5016]: E1011 09:03:53.350584 5016 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 65b3c8a961a2683c601fa043bd4972f34842257ba228f929be9061ad869ffaf0 is running failed: container process not found" containerID="65b3c8a961a2683c601fa043bd4972f34842257ba228f929be9061ad869ffaf0" cmd=["grpc_health_probe","-addr=:50051"] Oct 11 09:03:53 crc kubenswrapper[5016]: E1011 09:03:53.353226 5016 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 65b3c8a961a2683c601fa043bd4972f34842257ba228f929be9061ad869ffaf0 is running failed: container process not found" containerID="65b3c8a961a2683c601fa043bd4972f34842257ba228f929be9061ad869ffaf0" cmd=["grpc_health_probe","-addr=:50051"] Oct 11 09:03:53 crc kubenswrapper[5016]: E1011 09:03:53.354097 5016 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 65b3c8a961a2683c601fa043bd4972f34842257ba228f929be9061ad869ffaf0 is running failed: container process not found" containerID="65b3c8a961a2683c601fa043bd4972f34842257ba228f929be9061ad869ffaf0" cmd=["grpc_health_probe","-addr=:50051"] Oct 11 09:03:53 crc kubenswrapper[5016]: E1011 09:03:53.354285 5016 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 65b3c8a961a2683c601fa043bd4972f34842257ba228f929be9061ad869ffaf0 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-z5svn" podUID="80cdf98b-df44-4cf9-ba78-1daf0e304527" containerName="registry-server" Oct 11 09:03:54 crc kubenswrapper[5016]: I1011 09:03:54.724024 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-backup-0" podUID="2b53eb06-1432-4059-9705-ffc917af76f7" containerName="cinder-backup" probeResult="failure" output="Get \"http://10.217.0.241:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:03:54 crc kubenswrapper[5016]: I1011 09:03:54.724060 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-volume-volume1-0" podUID="f928618b-f291-4249-a756-0636b1680e66" containerName="cinder-volume" probeResult="failure" output="Get \"http://10.217.0.240:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:03:55 crc kubenswrapper[5016]: I1011 09:03:55.650011 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-z5svn_80cdf98b-df44-4cf9-ba78-1daf0e304527/registry-server/0.log" Oct 11 
09:03:55 crc kubenswrapper[5016]: I1011 09:03:55.651910 5016 generic.go:334] "Generic (PLEG): container finished" podID="80cdf98b-df44-4cf9-ba78-1daf0e304527" containerID="65b3c8a961a2683c601fa043bd4972f34842257ba228f929be9061ad869ffaf0" exitCode=-1 Oct 11 09:03:55 crc kubenswrapper[5016]: I1011 09:03:55.651976 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5svn" event={"ID":"80cdf98b-df44-4cf9-ba78-1daf0e304527","Type":"ContainerDied","Data":"65b3c8a961a2683c601fa043bd4972f34842257ba228f929be9061ad869ffaf0"} Oct 11 09:03:56 crc kubenswrapper[5016]: I1011 09:03:56.698032 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="14ae562e-2b57-478f-89cd-8330105eacdf" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.159:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:03:59 crc kubenswrapper[5016]: I1011 09:03:59.806936 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-volume-volume1-0" podUID="f928618b-f291-4249-a756-0636b1680e66" containerName="cinder-volume" probeResult="failure" output="Get \"http://10.217.0.240:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:03:59 crc kubenswrapper[5016]: I1011 09:03:59.807429 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Oct 11 09:03:59 crc kubenswrapper[5016]: I1011 09:03:59.806939 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-backup-0" podUID="2b53eb06-1432-4059-9705-ffc917af76f7" containerName="cinder-backup" probeResult="failure" output="Get \"http://10.217.0.241:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:03:59 crc kubenswrapper[5016]: I1011 09:03:59.808830 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-backup-0" Oct 11 09:03:59 crc kubenswrapper[5016]: I1011 09:03:59.809017 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-volume" containerStatusID={"Type":"cri-o","ID":"2a2fc459d64ae0e8c1f06b107c100b90d0eb49d1878c69157ededef8d1ff3a30"} pod="openstack/cinder-volume-volume1-0" containerMessage="Container cinder-volume failed liveness probe, will be restarted" Oct 11 09:03:59 crc kubenswrapper[5016]: I1011 09:03:59.809089 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-volume-volume1-0" podUID="f928618b-f291-4249-a756-0636b1680e66" containerName="cinder-volume" containerID="cri-o://2a2fc459d64ae0e8c1f06b107c100b90d0eb49d1878c69157ededef8d1ff3a30" gracePeriod=30 Oct 11 09:03:59 crc kubenswrapper[5016]: I1011 09:03:59.811002 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-backup" containerStatusID={"Type":"cri-o","ID":"b6288bc4e11423cae9a1cb73ac4e8fb35f3174d26b9ce05340df886e2fc09afa"} pod="openstack/cinder-backup-0" containerMessage="Container cinder-backup failed liveness probe, will be restarted" Oct 11 09:03:59 crc kubenswrapper[5016]: I1011 09:03:59.811106 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-backup-0" podUID="2b53eb06-1432-4059-9705-ffc917af76f7" containerName="cinder-backup" containerID="cri-o://b6288bc4e11423cae9a1cb73ac4e8fb35f3174d26b9ce05340df886e2fc09afa" gracePeriod=30 Oct 11 09:04:01 crc kubenswrapper[5016]: 
I1011 09:04:01.740010 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="14ae562e-2b57-478f-89cd-8330105eacdf" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.159:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:04:01 crc kubenswrapper[5016]: I1011 09:04:01.740183 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 11 09:04:01 crc kubenswrapper[5016]: I1011 09:04:01.742009 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"289fd3d31ca28ec6ffde3a0e4f27166545251722d4b8ef133c2b96ce3b1f801d"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Oct 11 09:04:01 crc kubenswrapper[5016]: I1011 09:04:01.742155 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="14ae562e-2b57-478f-89cd-8330105eacdf" containerName="cinder-scheduler" containerID="cri-o://289fd3d31ca28ec6ffde3a0e4f27166545251722d4b8ef133c2b96ce3b1f801d" gracePeriod=30 Oct 11 09:04:02 crc kubenswrapper[5016]: I1011 09:04:02.134988 5016 scope.go:117] "RemoveContainer" containerID="dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955" Oct 11 09:04:02 crc kubenswrapper[5016]: E1011 09:04:02.136545 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:04:03 crc kubenswrapper[5016]: E1011 09:04:03.351143 5016 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 65b3c8a961a2683c601fa043bd4972f34842257ba228f929be9061ad869ffaf0 is running failed: container process not found" containerID="65b3c8a961a2683c601fa043bd4972f34842257ba228f929be9061ad869ffaf0" cmd=["grpc_health_probe","-addr=:50051"] Oct 11 09:04:03 crc kubenswrapper[5016]: E1011 09:04:03.354109 5016 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 65b3c8a961a2683c601fa043bd4972f34842257ba228f929be9061ad869ffaf0 is running failed: container process not found" containerID="65b3c8a961a2683c601fa043bd4972f34842257ba228f929be9061ad869ffaf0" cmd=["grpc_health_probe","-addr=:50051"] Oct 11 09:04:03 crc kubenswrapper[5016]: E1011 09:04:03.355001 5016 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 65b3c8a961a2683c601fa043bd4972f34842257ba228f929be9061ad869ffaf0 is running failed: container process not found" containerID="65b3c8a961a2683c601fa043bd4972f34842257ba228f929be9061ad869ffaf0" cmd=["grpc_health_probe","-addr=:50051"] Oct 11 09:04:03 crc kubenswrapper[5016]: E1011 09:04:03.355213 5016 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 65b3c8a961a2683c601fa043bd4972f34842257ba228f929be9061ad869ffaf0 is running failed: 
container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-z5svn" podUID="80cdf98b-df44-4cf9-ba78-1daf0e304527" containerName="registry-server" Oct 11 09:04:05 crc kubenswrapper[5016]: I1011 09:04:05.929951 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-scheduler-0" podUID="055e76cd-8fd8-437e-a065-6d64398ce2dd" containerName="manila-scheduler" probeResult="failure" output="Get \"http://10.217.0.255:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:04:12 crc kubenswrapper[5016]: I1011 09:04:12.120951 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-share-share1-0" podUID="99758dd3-4691-42ed-a3eb-aead6855e030" containerName="manila-share" probeResult="failure" output="Get \"http://10.217.1.0:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:04:13 crc kubenswrapper[5016]: E1011 09:04:13.350721 5016 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 65b3c8a961a2683c601fa043bd4972f34842257ba228f929be9061ad869ffaf0 is running failed: container process not found" containerID="65b3c8a961a2683c601fa043bd4972f34842257ba228f929be9061ad869ffaf0" cmd=["grpc_health_probe","-addr=:50051"] Oct 11 09:04:13 crc kubenswrapper[5016]: E1011 09:04:13.352619 5016 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 65b3c8a961a2683c601fa043bd4972f34842257ba228f929be9061ad869ffaf0 is running failed: container process not found" containerID="65b3c8a961a2683c601fa043bd4972f34842257ba228f929be9061ad869ffaf0" cmd=["grpc_health_probe","-addr=:50051"] Oct 11 09:04:13 crc kubenswrapper[5016]: E1011 09:04:13.353446 5016 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 65b3c8a961a2683c601fa043bd4972f34842257ba228f929be9061ad869ffaf0 is running failed: container process not found" containerID="65b3c8a961a2683c601fa043bd4972f34842257ba228f929be9061ad869ffaf0" cmd=["grpc_health_probe","-addr=:50051"] Oct 11 09:04:13 crc kubenswrapper[5016]: E1011 09:04:13.355849 5016 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 65b3c8a961a2683c601fa043bd4972f34842257ba228f929be9061ad869ffaf0 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-z5svn" podUID="80cdf98b-df44-4cf9-ba78-1daf0e304527" containerName="registry-server" Oct 11 09:04:16 crc kubenswrapper[5016]: I1011 09:04:16.133999 5016 scope.go:117] "RemoveContainer" containerID="dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955" Oct 11 09:04:16 crc kubenswrapper[5016]: E1011 09:04:16.134863 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:04:17 crc kubenswrapper[5016]: I1011 09:04:17.558836 5016 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-z5svn_80cdf98b-df44-4cf9-ba78-1daf0e304527/registry-server/0.log" Oct 11 09:04:17 crc kubenswrapper[5016]: I1011 09:04:17.560842 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z5svn" Oct 11 09:04:17 crc kubenswrapper[5016]: I1011 09:04:17.578176 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cdf98b-df44-4cf9-ba78-1daf0e304527-catalog-content\") pod \"80cdf98b-df44-4cf9-ba78-1daf0e304527\" (UID: \"80cdf98b-df44-4cf9-ba78-1daf0e304527\") " Oct 11 09:04:17 crc kubenswrapper[5016]: I1011 09:04:17.578278 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frdtb\" (UniqueName: \"kubernetes.io/projected/80cdf98b-df44-4cf9-ba78-1daf0e304527-kube-api-access-frdtb\") pod \"80cdf98b-df44-4cf9-ba78-1daf0e304527\" (UID: \"80cdf98b-df44-4cf9-ba78-1daf0e304527\") " Oct 11 09:04:17 crc kubenswrapper[5016]: I1011 09:04:17.578521 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80cdf98b-df44-4cf9-ba78-1daf0e304527-utilities\") pod \"80cdf98b-df44-4cf9-ba78-1daf0e304527\" (UID: \"80cdf98b-df44-4cf9-ba78-1daf0e304527\") " Oct 11 09:04:17 crc kubenswrapper[5016]: I1011 09:04:17.579761 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80cdf98b-df44-4cf9-ba78-1daf0e304527-utilities" (OuterVolumeSpecName: "utilities") pod "80cdf98b-df44-4cf9-ba78-1daf0e304527" (UID: "80cdf98b-df44-4cf9-ba78-1daf0e304527"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:04:17 crc kubenswrapper[5016]: I1011 09:04:17.580692 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80cdf98b-df44-4cf9-ba78-1daf0e304527-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 09:04:17 crc kubenswrapper[5016]: I1011 09:04:17.643998 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80cdf98b-df44-4cf9-ba78-1daf0e304527-kube-api-access-frdtb" (OuterVolumeSpecName: "kube-api-access-frdtb") pod "80cdf98b-df44-4cf9-ba78-1daf0e304527" (UID: "80cdf98b-df44-4cf9-ba78-1daf0e304527"). InnerVolumeSpecName "kube-api-access-frdtb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 09:04:17 crc kubenswrapper[5016]: I1011 09:04:17.683405 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frdtb\" (UniqueName: \"kubernetes.io/projected/80cdf98b-df44-4cf9-ba78-1daf0e304527-kube-api-access-frdtb\") on node \"crc\" DevicePath \"\"" Oct 11 09:04:17 crc kubenswrapper[5016]: I1011 09:04:17.692021 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80cdf98b-df44-4cf9-ba78-1daf0e304527-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80cdf98b-df44-4cf9-ba78-1daf0e304527" (UID: "80cdf98b-df44-4cf9-ba78-1daf0e304527"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:04:17 crc kubenswrapper[5016]: I1011 09:04:17.786440 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cdf98b-df44-4cf9-ba78-1daf0e304527-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 09:04:17 crc kubenswrapper[5016]: I1011 09:04:17.998577 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-z5svn_80cdf98b-df44-4cf9-ba78-1daf0e304527/registry-server/0.log" Oct 11 09:04:17 crc kubenswrapper[5016]: I1011 09:04:17.999926 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5svn" event={"ID":"80cdf98b-df44-4cf9-ba78-1daf0e304527","Type":"ContainerDied","Data":"ca68e9a0e456251e86d973a176d3ea7ed868cd001b8dee73f141f1f8f6b8d607"} Oct 11 09:04:18 crc kubenswrapper[5016]: I1011 09:04:17.999998 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z5svn" Oct 11 09:04:18 crc kubenswrapper[5016]: I1011 09:04:18.000167 5016 scope.go:117] "RemoveContainer" containerID="65b3c8a961a2683c601fa043bd4972f34842257ba228f929be9061ad869ffaf0" Oct 11 09:04:18 crc kubenswrapper[5016]: I1011 09:04:18.043631 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5svn"] Oct 11 09:04:18 crc kubenswrapper[5016]: I1011 09:04:18.048487 5016 scope.go:117] "RemoveContainer" containerID="6c64f86b8e0bdc749fabd3d18c0f3e472206b24529d33e13b5fffd6fa66e4660" Oct 11 09:04:18 crc kubenswrapper[5016]: I1011 09:04:18.054555 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5svn"] Oct 11 09:04:18 crc kubenswrapper[5016]: I1011 09:04:18.077199 5016 scope.go:117] "RemoveContainer" containerID="3c014782cda7d1cbc8c077515a59df914b022e92a1001a629a3c1bc43e45384e" Oct 11 09:04:19 crc kubenswrapper[5016]: I1011 09:04:19.014457 5016 generic.go:334] "Generic (PLEG): container finished" podID="14ae562e-2b57-478f-89cd-8330105eacdf" containerID="289fd3d31ca28ec6ffde3a0e4f27166545251722d4b8ef133c2b96ce3b1f801d" exitCode=0 Oct 11 09:04:19 crc kubenswrapper[5016]: I1011 09:04:19.017369 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"14ae562e-2b57-478f-89cd-8330105eacdf","Type":"ContainerDied","Data":"289fd3d31ca28ec6ffde3a0e4f27166545251722d4b8ef133c2b96ce3b1f801d"} Oct 11 09:04:19 crc kubenswrapper[5016]: I1011 09:04:19.017569 5016 scope.go:117] "RemoveContainer" containerID="e0835f74268868e54c884e3f7f5f6ec3663ed39fc88e0a94c4f71b6ed100fa6f" Oct 11 09:04:19 crc kubenswrapper[5016]: I1011 09:04:19.154037 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80cdf98b-df44-4cf9-ba78-1daf0e304527" path="/var/lib/kubelet/pods/80cdf98b-df44-4cf9-ba78-1daf0e304527/volumes" Oct 11 09:04:20 crc kubenswrapper[5016]: I1011 09:04:20.031473 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"14ae562e-2b57-478f-89cd-8330105eacdf","Type":"ContainerStarted","Data":"45e0b94f65ede2ae59c5119a3a2fa0aabd207057d0f2191fa47136fef6a2d5ae"} Oct 11 09:04:20 crc kubenswrapper[5016]: I1011 09:04:20.034920 5016 generic.go:334] "Generic (PLEG): container finished" podID="f928618b-f291-4249-a756-0636b1680e66" containerID="2a2fc459d64ae0e8c1f06b107c100b90d0eb49d1878c69157ededef8d1ff3a30" exitCode=0 Oct 11 09:04:20 crc kubenswrapper[5016]: I1011 
09:04:20.034960 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"f928618b-f291-4249-a756-0636b1680e66","Type":"ContainerDied","Data":"2a2fc459d64ae0e8c1f06b107c100b90d0eb49d1878c69157ededef8d1ff3a30"} Oct 11 09:04:20 crc kubenswrapper[5016]: I1011 09:04:20.034996 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"f928618b-f291-4249-a756-0636b1680e66","Type":"ContainerStarted","Data":"8355b2c611d89ce883214bfb97198eb491d693861415e36128a067c0e723e9dd"} Oct 11 09:04:20 crc kubenswrapper[5016]: I1011 09:04:20.035022 5016 scope.go:117] "RemoveContainer" containerID="3258572cd236398aefaffaa860219bc466c2f0c8d40852815ec22774bc883d53" Oct 11 09:04:21 crc kubenswrapper[5016]: I1011 09:04:21.052037 5016 generic.go:334] "Generic (PLEG): container finished" podID="2b53eb06-1432-4059-9705-ffc917af76f7" containerID="b6288bc4e11423cae9a1cb73ac4e8fb35f3174d26b9ce05340df886e2fc09afa" exitCode=0 Oct 11 09:04:21 crc kubenswrapper[5016]: I1011 09:04:21.052094 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"2b53eb06-1432-4059-9705-ffc917af76f7","Type":"ContainerDied","Data":"b6288bc4e11423cae9a1cb73ac4e8fb35f3174d26b9ce05340df886e2fc09afa"} Oct 11 09:04:21 crc kubenswrapper[5016]: I1011 09:04:21.054402 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"2b53eb06-1432-4059-9705-ffc917af76f7","Type":"ContainerStarted","Data":"81f9b4c59f9ff1a6abf375f08c457f1d5e161eada78f3834800b1b5458911c7f"} Oct 11 09:04:21 crc kubenswrapper[5016]: I1011 09:04:21.054456 5016 scope.go:117] "RemoveContainer" containerID="c4659800ec4031246103ddac7b65f784b0f8588c3ff837fef79d267236f56ef7" Oct 11 09:04:22 crc kubenswrapper[5016]: I1011 09:04:22.556474 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Oct 11 09:04:22 crc kubenswrapper[5016]: I1011 09:04:22.566221 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Oct 11 09:04:25 crc kubenswrapper[5016]: I1011 09:04:25.612519 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 11 09:04:25 crc kubenswrapper[5016]: I1011 09:04:25.623032 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 11 09:04:27 crc kubenswrapper[5016]: I1011 09:04:27.567826 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Oct 11 09:04:27 crc kubenswrapper[5016]: I1011 09:04:27.591464 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Oct 11 09:04:31 crc kubenswrapper[5016]: I1011 09:04:31.134761 5016 scope.go:117] "RemoveContainer" containerID="dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955" Oct 11 09:04:31 crc kubenswrapper[5016]: E1011 09:04:31.135121 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:04:45 crc kubenswrapper[5016]: I1011 
09:04:45.133265 5016 scope.go:117] "RemoveContainer" containerID="dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955" Oct 11 09:04:45 crc kubenswrapper[5016]: E1011 09:04:45.134279 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:05:00 crc kubenswrapper[5016]: I1011 09:05:00.134019 5016 scope.go:117] "RemoveContainer" containerID="dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955" Oct 11 09:05:00 crc kubenswrapper[5016]: E1011 09:05:00.136980 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:05:13 crc kubenswrapper[5016]: I1011 09:05:13.147007 5016 scope.go:117] "RemoveContainer" containerID="dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955" Oct 11 09:05:13 crc kubenswrapper[5016]: E1011 09:05:13.148103 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:05:24 crc kubenswrapper[5016]: I1011 09:05:24.134423 5016 scope.go:117] "RemoveContainer" containerID="dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955" Oct 11 09:05:24 crc kubenswrapper[5016]: E1011 09:05:24.135847 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:05:36 crc kubenswrapper[5016]: I1011 09:05:36.134011 5016 scope.go:117] "RemoveContainer" containerID="dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955" Oct 11 09:05:36 crc kubenswrapper[5016]: E1011 09:05:36.134873 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:05:48 crc kubenswrapper[5016]: I1011 09:05:48.133748 5016 scope.go:117] "RemoveContainer" containerID="dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955" Oct 11 09:05:48 crc kubenswrapper[5016]: E1011 09:05:48.134920 
5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:06:00 crc kubenswrapper[5016]: I1011 09:06:00.134450 5016 scope.go:117] "RemoveContainer" containerID="dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955" Oct 11 09:06:00 crc kubenswrapper[5016]: E1011 09:06:00.135899 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:06:13 crc kubenswrapper[5016]: I1011 09:06:13.134061 5016 scope.go:117] "RemoveContainer" containerID="dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955" Oct 11 09:06:13 crc kubenswrapper[5016]: E1011 09:06:13.136280 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:06:28 crc kubenswrapper[5016]: I1011 09:06:28.135175 5016 scope.go:117] "RemoveContainer" containerID="dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955" Oct 11 09:06:28 crc kubenswrapper[5016]: E1011 09:06:28.136549 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:06:43 crc kubenswrapper[5016]: I1011 09:06:43.139486 5016 scope.go:117] "RemoveContainer" containerID="cc810d935bdee7a9900661884f6f55e77383b7169f2eedd92407e6e06f6d6f07" Oct 11 09:06:43 crc kubenswrapper[5016]: I1011 09:06:43.144508 5016 scope.go:117] "RemoveContainer" containerID="dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955" Oct 11 09:06:43 crc kubenswrapper[5016]: E1011 09:06:43.144834 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:06:43 crc kubenswrapper[5016]: I1011 09:06:43.174978 5016 scope.go:117] "RemoveContainer" containerID="4327c51179e4c9e56fa8adae2389cdc2f591e79bc811f8f8850230720387431a" Oct 11 09:06:43 crc kubenswrapper[5016]: I1011 09:06:43.224472 5016 
scope.go:117] "RemoveContainer" containerID="088bc4837a4ab30a9fc5c593d42c9a875861ef6c5e7a5cac407181abfa408365" Oct 11 09:06:57 crc kubenswrapper[5016]: I1011 09:06:57.138872 5016 scope.go:117] "RemoveContainer" containerID="dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955" Oct 11 09:06:57 crc kubenswrapper[5016]: E1011 09:06:57.139752 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:07:08 crc kubenswrapper[5016]: I1011 09:07:08.133896 5016 scope.go:117] "RemoveContainer" containerID="dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955" Oct 11 09:07:08 crc kubenswrapper[5016]: E1011 09:07:08.135165 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:07:20 crc kubenswrapper[5016]: I1011 09:07:20.134044 5016 scope.go:117] "RemoveContainer" containerID="dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955" Oct 11 09:07:20 crc kubenswrapper[5016]: E1011 09:07:20.135529 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:07:35 crc kubenswrapper[5016]: I1011 09:07:35.138125 5016 scope.go:117] "RemoveContainer" containerID="dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955" Oct 11 09:07:35 crc kubenswrapper[5016]: E1011 09:07:35.141630 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:07:49 crc kubenswrapper[5016]: I1011 09:07:49.134455 5016 scope.go:117] "RemoveContainer" containerID="dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955" Oct 11 09:07:49 crc kubenswrapper[5016]: E1011 09:07:49.135634 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:08:01 crc kubenswrapper[5016]: I1011 09:08:01.134316 5016 scope.go:117] 
"RemoveContainer" containerID="dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955" Oct 11 09:08:01 crc kubenswrapper[5016]: E1011 09:08:01.136344 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:08:12 crc kubenswrapper[5016]: I1011 09:08:12.133217 5016 scope.go:117] "RemoveContainer" containerID="dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955" Oct 11 09:08:12 crc kubenswrapper[5016]: I1011 09:08:12.875780 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"fc778842f389fa8f3b854900cc0858ca0e3a8880cb30651243c8816ce3908738"} Oct 11 09:10:32 crc kubenswrapper[5016]: I1011 09:10:32.269926 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pxm4z"] Oct 11 09:10:32 crc kubenswrapper[5016]: E1011 09:10:32.271051 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cdf98b-df44-4cf9-ba78-1daf0e304527" containerName="registry-server" Oct 11 09:10:32 crc kubenswrapper[5016]: I1011 09:10:32.271064 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cdf98b-df44-4cf9-ba78-1daf0e304527" containerName="registry-server" Oct 11 09:10:32 crc kubenswrapper[5016]: E1011 09:10:32.271090 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cdf98b-df44-4cf9-ba78-1daf0e304527" containerName="extract-content" Oct 11 09:10:32 crc kubenswrapper[5016]: I1011 09:10:32.271097 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cdf98b-df44-4cf9-ba78-1daf0e304527" containerName="extract-content" Oct 11 09:10:32 crc kubenswrapper[5016]: E1011 09:10:32.271123 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cdf98b-df44-4cf9-ba78-1daf0e304527" containerName="extract-utilities" Oct 11 09:10:32 crc kubenswrapper[5016]: I1011 09:10:32.271130 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cdf98b-df44-4cf9-ba78-1daf0e304527" containerName="extract-utilities" Oct 11 09:10:32 crc kubenswrapper[5016]: I1011 09:10:32.271338 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="80cdf98b-df44-4cf9-ba78-1daf0e304527" containerName="registry-server" Oct 11 09:10:32 crc kubenswrapper[5016]: I1011 09:10:32.272849 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pxm4z" Oct 11 09:10:32 crc kubenswrapper[5016]: I1011 09:10:32.300472 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pxm4z"] Oct 11 09:10:32 crc kubenswrapper[5016]: I1011 09:10:32.499162 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7abae00d-21df-40ad-a465-559228f123e2-catalog-content\") pod \"community-operators-pxm4z\" (UID: \"7abae00d-21df-40ad-a465-559228f123e2\") " pod="openshift-marketplace/community-operators-pxm4z" Oct 11 09:10:32 crc kubenswrapper[5016]: I1011 09:10:32.499469 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8hzz\" (UniqueName: \"kubernetes.io/projected/7abae00d-21df-40ad-a465-559228f123e2-kube-api-access-g8hzz\") pod \"community-operators-pxm4z\" (UID: \"7abae00d-21df-40ad-a465-559228f123e2\") " pod="openshift-marketplace/community-operators-pxm4z" Oct 11 09:10:32 crc kubenswrapper[5016]: I1011 09:10:32.499628 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7abae00d-21df-40ad-a465-559228f123e2-utilities\") pod \"community-operators-pxm4z\" (UID: \"7abae00d-21df-40ad-a465-559228f123e2\") " pod="openshift-marketplace/community-operators-pxm4z" Oct 11 09:10:32 crc kubenswrapper[5016]: I1011 09:10:32.605587 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7abae00d-21df-40ad-a465-559228f123e2-utilities\") pod \"community-operators-pxm4z\" (UID: \"7abae00d-21df-40ad-a465-559228f123e2\") " pod="openshift-marketplace/community-operators-pxm4z" Oct 11 09:10:32 crc kubenswrapper[5016]: I1011 09:10:32.605744 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7abae00d-21df-40ad-a465-559228f123e2-catalog-content\") pod \"community-operators-pxm4z\" (UID: \"7abae00d-21df-40ad-a465-559228f123e2\") " pod="openshift-marketplace/community-operators-pxm4z" Oct 11 09:10:32 crc kubenswrapper[5016]: I1011 09:10:32.605846 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8hzz\" (UniqueName: \"kubernetes.io/projected/7abae00d-21df-40ad-a465-559228f123e2-kube-api-access-g8hzz\") pod \"community-operators-pxm4z\" (UID: \"7abae00d-21df-40ad-a465-559228f123e2\") " pod="openshift-marketplace/community-operators-pxm4z" Oct 11 09:10:32 crc kubenswrapper[5016]: I1011 09:10:32.606391 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7abae00d-21df-40ad-a465-559228f123e2-utilities\") pod \"community-operators-pxm4z\" (UID: \"7abae00d-21df-40ad-a465-559228f123e2\") " pod="openshift-marketplace/community-operators-pxm4z" Oct 11 09:10:32 crc kubenswrapper[5016]: I1011 09:10:32.606668 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7abae00d-21df-40ad-a465-559228f123e2-catalog-content\") pod \"community-operators-pxm4z\" (UID: \"7abae00d-21df-40ad-a465-559228f123e2\") " pod="openshift-marketplace/community-operators-pxm4z" Oct 11 09:10:32 crc kubenswrapper[5016]: I1011 09:10:32.629717 5016 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-g8hzz\" (UniqueName: \"kubernetes.io/projected/7abae00d-21df-40ad-a465-559228f123e2-kube-api-access-g8hzz\") pod \"community-operators-pxm4z\" (UID: \"7abae00d-21df-40ad-a465-559228f123e2\") " pod="openshift-marketplace/community-operators-pxm4z" Oct 11 09:10:32 crc kubenswrapper[5016]: I1011 09:10:32.902505 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pxm4z" Oct 11 09:10:33 crc kubenswrapper[5016]: I1011 09:10:33.539371 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pxm4z"] Oct 11 09:10:34 crc kubenswrapper[5016]: I1011 09:10:34.538539 5016 generic.go:334] "Generic (PLEG): container finished" podID="7abae00d-21df-40ad-a465-559228f123e2" containerID="29b3f336cd6c536ce874ab77fdb01e52934b23a8ed470bdf8d48f3b28184f084" exitCode=0 Oct 11 09:10:34 crc kubenswrapper[5016]: I1011 09:10:34.538611 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxm4z" event={"ID":"7abae00d-21df-40ad-a465-559228f123e2","Type":"ContainerDied","Data":"29b3f336cd6c536ce874ab77fdb01e52934b23a8ed470bdf8d48f3b28184f084"} Oct 11 09:10:34 crc kubenswrapper[5016]: I1011 09:10:34.539450 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxm4z" event={"ID":"7abae00d-21df-40ad-a465-559228f123e2","Type":"ContainerStarted","Data":"90133551aea9d213ac3bb107996365234fa79144ee52efe73c97cb51de478ae0"} Oct 11 09:10:34 crc kubenswrapper[5016]: I1011 09:10:34.543455 5016 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 09:10:35 crc kubenswrapper[5016]: I1011 09:10:35.552556 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxm4z" event={"ID":"7abae00d-21df-40ad-a465-559228f123e2","Type":"ContainerStarted","Data":"97d6faf93b605391a10325039a804a8e30204f9c0a672c5d0f8a1620c184d4f8"} Oct 11 09:10:37 crc kubenswrapper[5016]: I1011 09:10:37.123339 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 09:10:37 crc kubenswrapper[5016]: I1011 09:10:37.125174 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 09:10:38 crc kubenswrapper[5016]: I1011 09:10:38.581341 5016 generic.go:334] "Generic (PLEG): container finished" podID="7abae00d-21df-40ad-a465-559228f123e2" containerID="97d6faf93b605391a10325039a804a8e30204f9c0a672c5d0f8a1620c184d4f8" exitCode=0 Oct 11 09:10:38 crc kubenswrapper[5016]: I1011 09:10:38.581421 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxm4z" event={"ID":"7abae00d-21df-40ad-a465-559228f123e2","Type":"ContainerDied","Data":"97d6faf93b605391a10325039a804a8e30204f9c0a672c5d0f8a1620c184d4f8"} Oct 11 09:10:39 crc kubenswrapper[5016]: I1011 09:10:39.596004 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-pxm4z" event={"ID":"7abae00d-21df-40ad-a465-559228f123e2","Type":"ContainerStarted","Data":"98bf06e4696b6c1bedf535cd866662cb615d815b6a97a12216a889481b60b6b0"} Oct 11 09:10:39 crc kubenswrapper[5016]: I1011 09:10:39.623001 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pxm4z" podStartSLOduration=3.196021315 podStartE2EDuration="7.622982896s" podCreationTimestamp="2025-10-11 09:10:32 +0000 UTC" firstStartedPulling="2025-10-11 09:10:34.543182906 +0000 UTC m=+5422.443638852" lastFinishedPulling="2025-10-11 09:10:38.970144487 +0000 UTC m=+5426.870600433" observedRunningTime="2025-10-11 09:10:39.617151951 +0000 UTC m=+5427.517607907" watchObservedRunningTime="2025-10-11 09:10:39.622982896 +0000 UTC m=+5427.523438842" Oct 11 09:10:42 crc kubenswrapper[5016]: I1011 09:10:42.903705 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pxm4z" Oct 11 09:10:42 crc kubenswrapper[5016]: I1011 09:10:42.904085 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pxm4z" Oct 11 09:10:42 crc kubenswrapper[5016]: I1011 09:10:42.980131 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pxm4z" Oct 11 09:10:52 crc kubenswrapper[5016]: I1011 09:10:52.989264 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pxm4z" Oct 11 09:10:53 crc kubenswrapper[5016]: I1011 09:10:53.073940 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pxm4z"] Oct 11 09:10:53 crc kubenswrapper[5016]: I1011 09:10:53.762302 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pxm4z" podUID="7abae00d-21df-40ad-a465-559228f123e2" containerName="registry-server" containerID="cri-o://98bf06e4696b6c1bedf535cd866662cb615d815b6a97a12216a889481b60b6b0" gracePeriod=2 Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.442709 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pxm4z" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.513288 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7abae00d-21df-40ad-a465-559228f123e2-utilities\") pod \"7abae00d-21df-40ad-a465-559228f123e2\" (UID: \"7abae00d-21df-40ad-a465-559228f123e2\") " Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.513371 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8hzz\" (UniqueName: \"kubernetes.io/projected/7abae00d-21df-40ad-a465-559228f123e2-kube-api-access-g8hzz\") pod \"7abae00d-21df-40ad-a465-559228f123e2\" (UID: \"7abae00d-21df-40ad-a465-559228f123e2\") " Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.515126 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7abae00d-21df-40ad-a465-559228f123e2-catalog-content\") pod \"7abae00d-21df-40ad-a465-559228f123e2\" (UID: \"7abae00d-21df-40ad-a465-559228f123e2\") " Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.515750 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7abae00d-21df-40ad-a465-559228f123e2-utilities" (OuterVolumeSpecName: "utilities") pod "7abae00d-21df-40ad-a465-559228f123e2" (UID: "7abae00d-21df-40ad-a465-559228f123e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.516528 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7abae00d-21df-40ad-a465-559228f123e2-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.525053 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7abae00d-21df-40ad-a465-559228f123e2-kube-api-access-g8hzz" (OuterVolumeSpecName: "kube-api-access-g8hzz") pod "7abae00d-21df-40ad-a465-559228f123e2" (UID: "7abae00d-21df-40ad-a465-559228f123e2"). InnerVolumeSpecName "kube-api-access-g8hzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.574051 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7abae00d-21df-40ad-a465-559228f123e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7abae00d-21df-40ad-a465-559228f123e2" (UID: "7abae00d-21df-40ad-a465-559228f123e2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.619789 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8hzz\" (UniqueName: \"kubernetes.io/projected/7abae00d-21df-40ad-a465-559228f123e2-kube-api-access-g8hzz\") on node \"crc\" DevicePath \"\"" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.619857 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7abae00d-21df-40ad-a465-559228f123e2-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.654863 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qxv9s"] Oct 11 09:10:54 crc kubenswrapper[5016]: E1011 09:10:54.655372 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7abae00d-21df-40ad-a465-559228f123e2" containerName="extract-utilities" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.655395 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="7abae00d-21df-40ad-a465-559228f123e2" containerName="extract-utilities" Oct 11 09:10:54 crc kubenswrapper[5016]: E1011 09:10:54.655432 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7abae00d-21df-40ad-a465-559228f123e2" containerName="extract-content" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.655442 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="7abae00d-21df-40ad-a465-559228f123e2" containerName="extract-content" Oct 11 09:10:54 crc kubenswrapper[5016]: E1011 09:10:54.655461 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7abae00d-21df-40ad-a465-559228f123e2" containerName="registry-server" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.655472 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="7abae00d-21df-40ad-a465-559228f123e2" containerName="registry-server" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.655847 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="7abae00d-21df-40ad-a465-559228f123e2" containerName="registry-server" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.657722 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qxv9s" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.671627 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qxv9s"] Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.721788 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b-utilities\") pod \"certified-operators-qxv9s\" (UID: \"4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b\") " pod="openshift-marketplace/certified-operators-qxv9s" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.721838 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vws5\" (UniqueName: \"kubernetes.io/projected/4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b-kube-api-access-9vws5\") pod \"certified-operators-qxv9s\" (UID: \"4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b\") " pod="openshift-marketplace/certified-operators-qxv9s" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.721973 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b-catalog-content\") pod \"certified-operators-qxv9s\" (UID: \"4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b\") " pod="openshift-marketplace/certified-operators-qxv9s" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.775769 5016 generic.go:334] "Generic (PLEG): container finished" podID="7abae00d-21df-40ad-a465-559228f123e2" containerID="98bf06e4696b6c1bedf535cd866662cb615d815b6a97a12216a889481b60b6b0" exitCode=0 Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.775842 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxm4z" event={"ID":"7abae00d-21df-40ad-a465-559228f123e2","Type":"ContainerDied","Data":"98bf06e4696b6c1bedf535cd866662cb615d815b6a97a12216a889481b60b6b0"} Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.775939 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pxm4z" event={"ID":"7abae00d-21df-40ad-a465-559228f123e2","Type":"ContainerDied","Data":"90133551aea9d213ac3bb107996365234fa79144ee52efe73c97cb51de478ae0"} Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.775874 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pxm4z" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.775975 5016 scope.go:117] "RemoveContainer" containerID="98bf06e4696b6c1bedf535cd866662cb615d815b6a97a12216a889481b60b6b0" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.809744 5016 scope.go:117] "RemoveContainer" containerID="97d6faf93b605391a10325039a804a8e30204f9c0a672c5d0f8a1620c184d4f8" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.818778 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pxm4z"] Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.826255 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b-catalog-content\") pod \"certified-operators-qxv9s\" (UID: \"4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b\") " pod="openshift-marketplace/certified-operators-qxv9s" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.826363 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b-utilities\") pod \"certified-operators-qxv9s\" (UID: \"4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b\") " pod="openshift-marketplace/certified-operators-qxv9s" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.826393 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vws5\" (UniqueName: \"kubernetes.io/projected/4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b-kube-api-access-9vws5\") pod \"certified-operators-qxv9s\" (UID: \"4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b\") " pod="openshift-marketplace/certified-operators-qxv9s" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.826676 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pxm4z"] Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.827245 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b-catalog-content\") pod \"certified-operators-qxv9s\" (UID: \"4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b\") " pod="openshift-marketplace/certified-operators-qxv9s" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.827314 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b-utilities\") pod \"certified-operators-qxv9s\" (UID: \"4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b\") " pod="openshift-marketplace/certified-operators-qxv9s" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.836737 5016 scope.go:117] "RemoveContainer" containerID="29b3f336cd6c536ce874ab77fdb01e52934b23a8ed470bdf8d48f3b28184f084" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.849215 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vws5\" (UniqueName: \"kubernetes.io/projected/4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b-kube-api-access-9vws5\") pod \"certified-operators-qxv9s\" (UID: \"4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b\") " pod="openshift-marketplace/certified-operators-qxv9s" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.907048 5016 scope.go:117] "RemoveContainer" containerID="98bf06e4696b6c1bedf535cd866662cb615d815b6a97a12216a889481b60b6b0" Oct 11 09:10:54 crc kubenswrapper[5016]: E1011 
09:10:54.907757 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98bf06e4696b6c1bedf535cd866662cb615d815b6a97a12216a889481b60b6b0\": container with ID starting with 98bf06e4696b6c1bedf535cd866662cb615d815b6a97a12216a889481b60b6b0 not found: ID does not exist" containerID="98bf06e4696b6c1bedf535cd866662cb615d815b6a97a12216a889481b60b6b0" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.907794 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98bf06e4696b6c1bedf535cd866662cb615d815b6a97a12216a889481b60b6b0"} err="failed to get container status \"98bf06e4696b6c1bedf535cd866662cb615d815b6a97a12216a889481b60b6b0\": rpc error: code = NotFound desc = could not find container \"98bf06e4696b6c1bedf535cd866662cb615d815b6a97a12216a889481b60b6b0\": container with ID starting with 98bf06e4696b6c1bedf535cd866662cb615d815b6a97a12216a889481b60b6b0 not found: ID does not exist" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.907820 5016 scope.go:117] "RemoveContainer" containerID="97d6faf93b605391a10325039a804a8e30204f9c0a672c5d0f8a1620c184d4f8" Oct 11 09:10:54 crc kubenswrapper[5016]: E1011 09:10:54.909292 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97d6faf93b605391a10325039a804a8e30204f9c0a672c5d0f8a1620c184d4f8\": container with ID starting with 97d6faf93b605391a10325039a804a8e30204f9c0a672c5d0f8a1620c184d4f8 not found: ID does not exist" containerID="97d6faf93b605391a10325039a804a8e30204f9c0a672c5d0f8a1620c184d4f8" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.909358 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97d6faf93b605391a10325039a804a8e30204f9c0a672c5d0f8a1620c184d4f8"} err="failed to get container status \"97d6faf93b605391a10325039a804a8e30204f9c0a672c5d0f8a1620c184d4f8\": rpc error: code = NotFound desc = could not find container \"97d6faf93b605391a10325039a804a8e30204f9c0a672c5d0f8a1620c184d4f8\": container with ID starting with 97d6faf93b605391a10325039a804a8e30204f9c0a672c5d0f8a1620c184d4f8 not found: ID does not exist" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.909396 5016 scope.go:117] "RemoveContainer" containerID="29b3f336cd6c536ce874ab77fdb01e52934b23a8ed470bdf8d48f3b28184f084" Oct 11 09:10:54 crc kubenswrapper[5016]: E1011 09:10:54.912570 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29b3f336cd6c536ce874ab77fdb01e52934b23a8ed470bdf8d48f3b28184f084\": container with ID starting with 29b3f336cd6c536ce874ab77fdb01e52934b23a8ed470bdf8d48f3b28184f084 not found: ID does not exist" containerID="29b3f336cd6c536ce874ab77fdb01e52934b23a8ed470bdf8d48f3b28184f084" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.912634 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29b3f336cd6c536ce874ab77fdb01e52934b23a8ed470bdf8d48f3b28184f084"} err="failed to get container status \"29b3f336cd6c536ce874ab77fdb01e52934b23a8ed470bdf8d48f3b28184f084\": rpc error: code = NotFound desc = could not find container \"29b3f336cd6c536ce874ab77fdb01e52934b23a8ed470bdf8d48f3b28184f084\": container with ID starting with 29b3f336cd6c536ce874ab77fdb01e52934b23a8ed470bdf8d48f3b28184f084 not found: ID does not exist" Oct 11 09:10:54 crc kubenswrapper[5016]: I1011 09:10:54.978123 5016 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qxv9s" Oct 11 09:10:55 crc kubenswrapper[5016]: I1011 09:10:55.160135 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7abae00d-21df-40ad-a465-559228f123e2" path="/var/lib/kubelet/pods/7abae00d-21df-40ad-a465-559228f123e2/volumes" Oct 11 09:10:55 crc kubenswrapper[5016]: I1011 09:10:55.500478 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qxv9s"] Oct 11 09:10:55 crc kubenswrapper[5016]: I1011 09:10:55.786613 5016 generic.go:334] "Generic (PLEG): container finished" podID="4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b" containerID="aca0ddf7d21a466a75430001f7a990b21257a45f21225f7a077fcc5dfe8160ed" exitCode=0 Oct 11 09:10:55 crc kubenswrapper[5016]: I1011 09:10:55.786708 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxv9s" event={"ID":"4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b","Type":"ContainerDied","Data":"aca0ddf7d21a466a75430001f7a990b21257a45f21225f7a077fcc5dfe8160ed"} Oct 11 09:10:55 crc kubenswrapper[5016]: I1011 09:10:55.786774 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxv9s" event={"ID":"4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b","Type":"ContainerStarted","Data":"719ded9fa0ee8ac53eeca97d577c7a060073c236473d130bbd626f5cd6294877"} Oct 11 09:10:57 crc kubenswrapper[5016]: I1011 09:10:57.819104 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxv9s" event={"ID":"4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b","Type":"ContainerStarted","Data":"2fc852ce8031fb257dd3f24b1ede28f55f57a026727722c0c068155ef3aa6e57"} Oct 11 09:10:58 crc kubenswrapper[5016]: I1011 09:10:58.834222 5016 generic.go:334] "Generic (PLEG): container finished" podID="4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b" containerID="2fc852ce8031fb257dd3f24b1ede28f55f57a026727722c0c068155ef3aa6e57" exitCode=0 Oct 11 09:10:58 crc kubenswrapper[5016]: I1011 09:10:58.834287 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxv9s" event={"ID":"4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b","Type":"ContainerDied","Data":"2fc852ce8031fb257dd3f24b1ede28f55f57a026727722c0c068155ef3aa6e57"} Oct 11 09:11:00 crc kubenswrapper[5016]: I1011 09:11:00.858685 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxv9s" event={"ID":"4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b","Type":"ContainerStarted","Data":"3d43cec3b1e4b5261a411593bb52662465d566c9fed7d054fd147231b9eee34c"} Oct 11 09:11:00 crc kubenswrapper[5016]: I1011 09:11:00.900786 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qxv9s" podStartSLOduration=3.416270177 podStartE2EDuration="6.900747485s" podCreationTimestamp="2025-10-11 09:10:54 +0000 UTC" firstStartedPulling="2025-10-11 09:10:55.788903695 +0000 UTC m=+5443.689359641" lastFinishedPulling="2025-10-11 09:10:59.273381003 +0000 UTC m=+5447.173836949" observedRunningTime="2025-10-11 09:11:00.882456359 +0000 UTC m=+5448.782912355" watchObservedRunningTime="2025-10-11 09:11:00.900747485 +0000 UTC m=+5448.801203471" Oct 11 09:11:04 crc kubenswrapper[5016]: I1011 09:11:04.978501 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qxv9s" Oct 11 09:11:04 crc kubenswrapper[5016]: I1011 
09:11:04.979431 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qxv9s" Oct 11 09:11:05 crc kubenswrapper[5016]: I1011 09:11:05.042618 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qxv9s" Oct 11 09:11:05 crc kubenswrapper[5016]: I1011 09:11:05.974871 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qxv9s" Oct 11 09:11:06 crc kubenswrapper[5016]: I1011 09:11:06.031981 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qxv9s"] Oct 11 09:11:07 crc kubenswrapper[5016]: I1011 09:11:07.122405 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 09:11:07 crc kubenswrapper[5016]: I1011 09:11:07.124418 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 09:11:07 crc kubenswrapper[5016]: I1011 09:11:07.931613 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qxv9s" podUID="4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b" containerName="registry-server" containerID="cri-o://3d43cec3b1e4b5261a411593bb52662465d566c9fed7d054fd147231b9eee34c" gracePeriod=2 Oct 11 09:11:08 crc kubenswrapper[5016]: I1011 09:11:08.569436 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qxv9s" Oct 11 09:11:08 crc kubenswrapper[5016]: I1011 09:11:08.629053 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b-utilities\") pod \"4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b\" (UID: \"4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b\") " Oct 11 09:11:08 crc kubenswrapper[5016]: I1011 09:11:08.629533 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b-catalog-content\") pod \"4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b\" (UID: \"4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b\") " Oct 11 09:11:08 crc kubenswrapper[5016]: I1011 09:11:08.630006 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vws5\" (UniqueName: \"kubernetes.io/projected/4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b-kube-api-access-9vws5\") pod \"4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b\" (UID: \"4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b\") " Oct 11 09:11:08 crc kubenswrapper[5016]: I1011 09:11:08.630754 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b-utilities" (OuterVolumeSpecName: "utilities") pod "4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b" (UID: "4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:11:08 crc kubenswrapper[5016]: I1011 09:11:08.643040 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b-kube-api-access-9vws5" (OuterVolumeSpecName: "kube-api-access-9vws5") pod "4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b" (UID: "4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b"). InnerVolumeSpecName "kube-api-access-9vws5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 09:11:08 crc kubenswrapper[5016]: I1011 09:11:08.698042 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b" (UID: "4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:11:08 crc kubenswrapper[5016]: I1011 09:11:08.743141 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 09:11:08 crc kubenswrapper[5016]: I1011 09:11:08.743173 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vws5\" (UniqueName: \"kubernetes.io/projected/4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b-kube-api-access-9vws5\") on node \"crc\" DevicePath \"\"" Oct 11 09:11:08 crc kubenswrapper[5016]: I1011 09:11:08.743184 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 09:11:08 crc kubenswrapper[5016]: I1011 09:11:08.970083 5016 generic.go:334] "Generic (PLEG): container finished" podID="4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b" containerID="3d43cec3b1e4b5261a411593bb52662465d566c9fed7d054fd147231b9eee34c" exitCode=0 Oct 11 09:11:08 crc kubenswrapper[5016]: I1011 09:11:08.970124 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxv9s" event={"ID":"4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b","Type":"ContainerDied","Data":"3d43cec3b1e4b5261a411593bb52662465d566c9fed7d054fd147231b9eee34c"} Oct 11 09:11:08 crc kubenswrapper[5016]: I1011 09:11:08.970155 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxv9s" event={"ID":"4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b","Type":"ContainerDied","Data":"719ded9fa0ee8ac53eeca97d577c7a060073c236473d130bbd626f5cd6294877"} Oct 11 09:11:08 crc kubenswrapper[5016]: I1011 09:11:08.970174 5016 scope.go:117] "RemoveContainer" containerID="3d43cec3b1e4b5261a411593bb52662465d566c9fed7d054fd147231b9eee34c" Oct 11 09:11:08 crc kubenswrapper[5016]: I1011 09:11:08.970314 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qxv9s" Oct 11 09:11:09 crc kubenswrapper[5016]: I1011 09:11:09.009809 5016 scope.go:117] "RemoveContainer" containerID="2fc852ce8031fb257dd3f24b1ede28f55f57a026727722c0c068155ef3aa6e57" Oct 11 09:11:09 crc kubenswrapper[5016]: I1011 09:11:09.024420 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qxv9s"] Oct 11 09:11:09 crc kubenswrapper[5016]: I1011 09:11:09.034990 5016 scope.go:117] "RemoveContainer" containerID="aca0ddf7d21a466a75430001f7a990b21257a45f21225f7a077fcc5dfe8160ed" Oct 11 09:11:09 crc kubenswrapper[5016]: I1011 09:11:09.035334 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qxv9s"] Oct 11 09:11:09 crc kubenswrapper[5016]: I1011 09:11:09.078073 5016 scope.go:117] "RemoveContainer" containerID="3d43cec3b1e4b5261a411593bb52662465d566c9fed7d054fd147231b9eee34c" Oct 11 09:11:09 crc kubenswrapper[5016]: E1011 09:11:09.078904 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d43cec3b1e4b5261a411593bb52662465d566c9fed7d054fd147231b9eee34c\": container with ID starting with 3d43cec3b1e4b5261a411593bb52662465d566c9fed7d054fd147231b9eee34c not found: ID does not exist" containerID="3d43cec3b1e4b5261a411593bb52662465d566c9fed7d054fd147231b9eee34c" Oct 11 09:11:09 crc kubenswrapper[5016]: I1011 09:11:09.078954 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d43cec3b1e4b5261a411593bb52662465d566c9fed7d054fd147231b9eee34c"} err="failed to get container status \"3d43cec3b1e4b5261a411593bb52662465d566c9fed7d054fd147231b9eee34c\": rpc error: code = NotFound desc = could not find container \"3d43cec3b1e4b5261a411593bb52662465d566c9fed7d054fd147231b9eee34c\": container with ID starting with 3d43cec3b1e4b5261a411593bb52662465d566c9fed7d054fd147231b9eee34c not found: ID does not exist" Oct 11 09:11:09 crc kubenswrapper[5016]: I1011 09:11:09.078988 5016 scope.go:117] "RemoveContainer" containerID="2fc852ce8031fb257dd3f24b1ede28f55f57a026727722c0c068155ef3aa6e57" Oct 11 09:11:09 crc kubenswrapper[5016]: E1011 09:11:09.079500 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fc852ce8031fb257dd3f24b1ede28f55f57a026727722c0c068155ef3aa6e57\": container with ID starting with 2fc852ce8031fb257dd3f24b1ede28f55f57a026727722c0c068155ef3aa6e57 not found: ID does not exist" containerID="2fc852ce8031fb257dd3f24b1ede28f55f57a026727722c0c068155ef3aa6e57" Oct 11 09:11:09 crc kubenswrapper[5016]: I1011 09:11:09.079530 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fc852ce8031fb257dd3f24b1ede28f55f57a026727722c0c068155ef3aa6e57"} err="failed to get container status \"2fc852ce8031fb257dd3f24b1ede28f55f57a026727722c0c068155ef3aa6e57\": rpc error: code = NotFound desc = could not find container \"2fc852ce8031fb257dd3f24b1ede28f55f57a026727722c0c068155ef3aa6e57\": container with ID starting with 2fc852ce8031fb257dd3f24b1ede28f55f57a026727722c0c068155ef3aa6e57 not found: ID does not exist" Oct 11 09:11:09 crc kubenswrapper[5016]: I1011 09:11:09.079546 5016 scope.go:117] "RemoveContainer" containerID="aca0ddf7d21a466a75430001f7a990b21257a45f21225f7a077fcc5dfe8160ed" Oct 11 09:11:09 crc kubenswrapper[5016]: E1011 09:11:09.079763 5016 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"aca0ddf7d21a466a75430001f7a990b21257a45f21225f7a077fcc5dfe8160ed\": container with ID starting with aca0ddf7d21a466a75430001f7a990b21257a45f21225f7a077fcc5dfe8160ed not found: ID does not exist" containerID="aca0ddf7d21a466a75430001f7a990b21257a45f21225f7a077fcc5dfe8160ed" Oct 11 09:11:09 crc kubenswrapper[5016]: I1011 09:11:09.079783 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aca0ddf7d21a466a75430001f7a990b21257a45f21225f7a077fcc5dfe8160ed"} err="failed to get container status \"aca0ddf7d21a466a75430001f7a990b21257a45f21225f7a077fcc5dfe8160ed\": rpc error: code = NotFound desc = could not find container \"aca0ddf7d21a466a75430001f7a990b21257a45f21225f7a077fcc5dfe8160ed\": container with ID starting with aca0ddf7d21a466a75430001f7a990b21257a45f21225f7a077fcc5dfe8160ed not found: ID does not exist" Oct 11 09:11:09 crc kubenswrapper[5016]: I1011 09:11:09.144709 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b" path="/var/lib/kubelet/pods/4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b/volumes" Oct 11 09:11:37 crc kubenswrapper[5016]: I1011 09:11:37.121746 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 09:11:37 crc kubenswrapper[5016]: I1011 09:11:37.123681 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 09:11:37 crc kubenswrapper[5016]: I1011 09:11:37.123888 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 09:11:37 crc kubenswrapper[5016]: I1011 09:11:37.127148 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fc778842f389fa8f3b854900cc0858ca0e3a8880cb30651243c8816ce3908738"} pod="openshift-machine-config-operator/machine-config-daemon-49bvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 11 09:11:37 crc kubenswrapper[5016]: I1011 09:11:37.127322 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" containerID="cri-o://fc778842f389fa8f3b854900cc0858ca0e3a8880cb30651243c8816ce3908738" gracePeriod=600 Oct 11 09:11:37 crc kubenswrapper[5016]: E1011 09:11:37.333503 5016 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0633ed26_7b6a_4a20_92ba_569891d9faff.slice/crio-conmon-fc778842f389fa8f3b854900cc0858ca0e3a8880cb30651243c8816ce3908738.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0633ed26_7b6a_4a20_92ba_569891d9faff.slice/crio-fc778842f389fa8f3b854900cc0858ca0e3a8880cb30651243c8816ce3908738.scope\": RecentStats: unable to find data in memory cache]" Oct 11 09:11:37 crc kubenswrapper[5016]: I1011 09:11:37.334037 5016 generic.go:334] "Generic (PLEG): container finished" podID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerID="fc778842f389fa8f3b854900cc0858ca0e3a8880cb30651243c8816ce3908738" exitCode=0 Oct 11 09:11:37 crc kubenswrapper[5016]: I1011 09:11:37.334126 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerDied","Data":"fc778842f389fa8f3b854900cc0858ca0e3a8880cb30651243c8816ce3908738"} Oct 11 09:11:37 crc kubenswrapper[5016]: I1011 09:11:37.334202 5016 scope.go:117] "RemoveContainer" containerID="dd25512f539b5e85101909fc5ec681bad8cf36649ce0aa2db91df7f66ade5955" Oct 11 09:11:38 crc kubenswrapper[5016]: I1011 09:11:38.351202 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc"} Oct 11 09:12:32 crc kubenswrapper[5016]: I1011 09:12:32.276705 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qbvwr"] Oct 11 09:12:32 crc kubenswrapper[5016]: E1011 09:12:32.277686 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b" containerName="registry-server" Oct 11 09:12:32 crc kubenswrapper[5016]: I1011 09:12:32.277699 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b" containerName="registry-server" Oct 11 09:12:32 crc kubenswrapper[5016]: E1011 09:12:32.277717 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b" containerName="extract-utilities" Oct 11 09:12:32 crc kubenswrapper[5016]: I1011 09:12:32.277723 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b" containerName="extract-utilities" Oct 11 09:12:32 crc kubenswrapper[5016]: E1011 09:12:32.277744 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b" containerName="extract-content" Oct 11 09:12:32 crc kubenswrapper[5016]: I1011 09:12:32.277750 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b" containerName="extract-content" Oct 11 09:12:32 crc kubenswrapper[5016]: I1011 09:12:32.277927 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ac1bf3d-b2b1-4c75-bfa4-346efa818f1b" containerName="registry-server" Oct 11 09:12:32 crc kubenswrapper[5016]: I1011 09:12:32.279303 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qbvwr" Oct 11 09:12:32 crc kubenswrapper[5016]: I1011 09:12:32.303031 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3c13787-ba08-40c1-8bb9-f80fbce0a4f4-utilities\") pod \"redhat-operators-qbvwr\" (UID: \"f3c13787-ba08-40c1-8bb9-f80fbce0a4f4\") " pod="openshift-marketplace/redhat-operators-qbvwr" Oct 11 09:12:32 crc kubenswrapper[5016]: I1011 09:12:32.303084 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3c13787-ba08-40c1-8bb9-f80fbce0a4f4-catalog-content\") pod \"redhat-operators-qbvwr\" (UID: \"f3c13787-ba08-40c1-8bb9-f80fbce0a4f4\") " pod="openshift-marketplace/redhat-operators-qbvwr" Oct 11 09:12:32 crc kubenswrapper[5016]: I1011 09:12:32.303718 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5qhr\" (UniqueName: \"kubernetes.io/projected/f3c13787-ba08-40c1-8bb9-f80fbce0a4f4-kube-api-access-c5qhr\") pod \"redhat-operators-qbvwr\" (UID: \"f3c13787-ba08-40c1-8bb9-f80fbce0a4f4\") " pod="openshift-marketplace/redhat-operators-qbvwr" Oct 11 09:12:32 crc kubenswrapper[5016]: I1011 09:12:32.311784 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qbvwr"] Oct 11 09:12:32 crc kubenswrapper[5016]: I1011 09:12:32.406112 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3c13787-ba08-40c1-8bb9-f80fbce0a4f4-utilities\") pod \"redhat-operators-qbvwr\" (UID: \"f3c13787-ba08-40c1-8bb9-f80fbce0a4f4\") " pod="openshift-marketplace/redhat-operators-qbvwr" Oct 11 09:12:32 crc kubenswrapper[5016]: I1011 09:12:32.406180 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3c13787-ba08-40c1-8bb9-f80fbce0a4f4-catalog-content\") pod \"redhat-operators-qbvwr\" (UID: \"f3c13787-ba08-40c1-8bb9-f80fbce0a4f4\") " pod="openshift-marketplace/redhat-operators-qbvwr" Oct 11 09:12:32 crc kubenswrapper[5016]: I1011 09:12:32.406987 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5qhr\" (UniqueName: \"kubernetes.io/projected/f3c13787-ba08-40c1-8bb9-f80fbce0a4f4-kube-api-access-c5qhr\") pod \"redhat-operators-qbvwr\" (UID: \"f3c13787-ba08-40c1-8bb9-f80fbce0a4f4\") " pod="openshift-marketplace/redhat-operators-qbvwr" Oct 11 09:12:32 crc kubenswrapper[5016]: I1011 09:12:32.407213 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3c13787-ba08-40c1-8bb9-f80fbce0a4f4-catalog-content\") pod \"redhat-operators-qbvwr\" (UID: \"f3c13787-ba08-40c1-8bb9-f80fbce0a4f4\") " pod="openshift-marketplace/redhat-operators-qbvwr" Oct 11 09:12:32 crc kubenswrapper[5016]: I1011 09:12:32.407226 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3c13787-ba08-40c1-8bb9-f80fbce0a4f4-utilities\") pod \"redhat-operators-qbvwr\" (UID: \"f3c13787-ba08-40c1-8bb9-f80fbce0a4f4\") " pod="openshift-marketplace/redhat-operators-qbvwr" Oct 11 09:12:32 crc kubenswrapper[5016]: I1011 09:12:32.453601 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-c5qhr\" (UniqueName: \"kubernetes.io/projected/f3c13787-ba08-40c1-8bb9-f80fbce0a4f4-kube-api-access-c5qhr\") pod \"redhat-operators-qbvwr\" (UID: \"f3c13787-ba08-40c1-8bb9-f80fbce0a4f4\") " pod="openshift-marketplace/redhat-operators-qbvwr" Oct 11 09:12:32 crc kubenswrapper[5016]: I1011 09:12:32.623523 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qbvwr" Oct 11 09:12:33 crc kubenswrapper[5016]: I1011 09:12:33.153996 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qbvwr"] Oct 11 09:12:34 crc kubenswrapper[5016]: I1011 09:12:34.022966 5016 generic.go:334] "Generic (PLEG): container finished" podID="f3c13787-ba08-40c1-8bb9-f80fbce0a4f4" containerID="a9f9323f67e558fe39cd957150b45779d820def99c411c70daf28dab512723ff" exitCode=0 Oct 11 09:12:34 crc kubenswrapper[5016]: I1011 09:12:34.023326 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qbvwr" event={"ID":"f3c13787-ba08-40c1-8bb9-f80fbce0a4f4","Type":"ContainerDied","Data":"a9f9323f67e558fe39cd957150b45779d820def99c411c70daf28dab512723ff"} Oct 11 09:12:34 crc kubenswrapper[5016]: I1011 09:12:34.023371 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qbvwr" event={"ID":"f3c13787-ba08-40c1-8bb9-f80fbce0a4f4","Type":"ContainerStarted","Data":"33c646de1d3782e7d912e0dd66419e6871a95124eb509feb7bbb6c838a8db88e"} Oct 11 09:12:35 crc kubenswrapper[5016]: I1011 09:12:35.038373 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qbvwr" event={"ID":"f3c13787-ba08-40c1-8bb9-f80fbce0a4f4","Type":"ContainerStarted","Data":"1fb4329df59a5e04aed159b23244e494ffc029e72fb207bfdcc0acf2405b6a79"} Oct 11 09:12:40 crc kubenswrapper[5016]: I1011 09:12:40.101636 5016 generic.go:334] "Generic (PLEG): container finished" podID="f3c13787-ba08-40c1-8bb9-f80fbce0a4f4" containerID="1fb4329df59a5e04aed159b23244e494ffc029e72fb207bfdcc0acf2405b6a79" exitCode=0 Oct 11 09:12:40 crc kubenswrapper[5016]: I1011 09:12:40.101740 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qbvwr" event={"ID":"f3c13787-ba08-40c1-8bb9-f80fbce0a4f4","Type":"ContainerDied","Data":"1fb4329df59a5e04aed159b23244e494ffc029e72fb207bfdcc0acf2405b6a79"} Oct 11 09:12:41 crc kubenswrapper[5016]: I1011 09:12:41.116030 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qbvwr" event={"ID":"f3c13787-ba08-40c1-8bb9-f80fbce0a4f4","Type":"ContainerStarted","Data":"52af6866ccd2e9e4ffb8eefdbec3025bea69f4c01c5f8fcb967c8eaf3b3e96a2"} Oct 11 09:12:41 crc kubenswrapper[5016]: I1011 09:12:41.148122 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qbvwr" podStartSLOduration=2.635232345 podStartE2EDuration="9.148098283s" podCreationTimestamp="2025-10-11 09:12:32 +0000 UTC" firstStartedPulling="2025-10-11 09:12:34.028732316 +0000 UTC m=+5541.929188262" lastFinishedPulling="2025-10-11 09:12:40.541598224 +0000 UTC m=+5548.442054200" observedRunningTime="2025-10-11 09:12:41.138416146 +0000 UTC m=+5549.038872102" watchObservedRunningTime="2025-10-11 09:12:41.148098283 +0000 UTC m=+5549.048554249" Oct 11 09:12:42 crc kubenswrapper[5016]: I1011 09:12:42.625095 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qbvwr" Oct 11 
09:12:42 crc kubenswrapper[5016]: I1011 09:12:42.625540 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qbvwr" Oct 11 09:12:43 crc kubenswrapper[5016]: I1011 09:12:43.701401 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qbvwr" podUID="f3c13787-ba08-40c1-8bb9-f80fbce0a4f4" containerName="registry-server" probeResult="failure" output=< Oct 11 09:12:43 crc kubenswrapper[5016]: timeout: failed to connect service ":50051" within 1s Oct 11 09:12:43 crc kubenswrapper[5016]: > Oct 11 09:12:53 crc kubenswrapper[5016]: I1011 09:12:53.725553 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qbvwr" podUID="f3c13787-ba08-40c1-8bb9-f80fbce0a4f4" containerName="registry-server" probeResult="failure" output=< Oct 11 09:12:53 crc kubenswrapper[5016]: timeout: failed to connect service ":50051" within 1s Oct 11 09:12:53 crc kubenswrapper[5016]: > Oct 11 09:13:02 crc kubenswrapper[5016]: I1011 09:13:02.709528 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qbvwr" Oct 11 09:13:02 crc kubenswrapper[5016]: I1011 09:13:02.764157 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qbvwr" Oct 11 09:13:03 crc kubenswrapper[5016]: I1011 09:13:03.476764 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qbvwr"] Oct 11 09:13:04 crc kubenswrapper[5016]: I1011 09:13:04.353918 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qbvwr" podUID="f3c13787-ba08-40c1-8bb9-f80fbce0a4f4" containerName="registry-server" containerID="cri-o://52af6866ccd2e9e4ffb8eefdbec3025bea69f4c01c5f8fcb967c8eaf3b3e96a2" gracePeriod=2 Oct 11 09:13:05 crc kubenswrapper[5016]: I1011 09:13:05.055087 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qbvwr" Oct 11 09:13:05 crc kubenswrapper[5016]: I1011 09:13:05.232068 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3c13787-ba08-40c1-8bb9-f80fbce0a4f4-utilities\") pod \"f3c13787-ba08-40c1-8bb9-f80fbce0a4f4\" (UID: \"f3c13787-ba08-40c1-8bb9-f80fbce0a4f4\") " Oct 11 09:13:05 crc kubenswrapper[5016]: I1011 09:13:05.233358 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3c13787-ba08-40c1-8bb9-f80fbce0a4f4-catalog-content\") pod \"f3c13787-ba08-40c1-8bb9-f80fbce0a4f4\" (UID: \"f3c13787-ba08-40c1-8bb9-f80fbce0a4f4\") " Oct 11 09:13:05 crc kubenswrapper[5016]: I1011 09:13:05.233678 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5qhr\" (UniqueName: \"kubernetes.io/projected/f3c13787-ba08-40c1-8bb9-f80fbce0a4f4-kube-api-access-c5qhr\") pod \"f3c13787-ba08-40c1-8bb9-f80fbce0a4f4\" (UID: \"f3c13787-ba08-40c1-8bb9-f80fbce0a4f4\") " Oct 11 09:13:05 crc kubenswrapper[5016]: I1011 09:13:05.234162 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3c13787-ba08-40c1-8bb9-f80fbce0a4f4-utilities" (OuterVolumeSpecName: "utilities") pod "f3c13787-ba08-40c1-8bb9-f80fbce0a4f4" (UID: "f3c13787-ba08-40c1-8bb9-f80fbce0a4f4"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:13:05 crc kubenswrapper[5016]: I1011 09:13:05.234985 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3c13787-ba08-40c1-8bb9-f80fbce0a4f4-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 09:13:05 crc kubenswrapper[5016]: I1011 09:13:05.244134 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3c13787-ba08-40c1-8bb9-f80fbce0a4f4-kube-api-access-c5qhr" (OuterVolumeSpecName: "kube-api-access-c5qhr") pod "f3c13787-ba08-40c1-8bb9-f80fbce0a4f4" (UID: "f3c13787-ba08-40c1-8bb9-f80fbce0a4f4"). InnerVolumeSpecName "kube-api-access-c5qhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 09:13:05 crc kubenswrapper[5016]: I1011 09:13:05.338994 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5qhr\" (UniqueName: \"kubernetes.io/projected/f3c13787-ba08-40c1-8bb9-f80fbce0a4f4-kube-api-access-c5qhr\") on node \"crc\" DevicePath \"\"" Oct 11 09:13:05 crc kubenswrapper[5016]: I1011 09:13:05.345667 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3c13787-ba08-40c1-8bb9-f80fbce0a4f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3c13787-ba08-40c1-8bb9-f80fbce0a4f4" (UID: "f3c13787-ba08-40c1-8bb9-f80fbce0a4f4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:13:05 crc kubenswrapper[5016]: I1011 09:13:05.369356 5016 generic.go:334] "Generic (PLEG): container finished" podID="f3c13787-ba08-40c1-8bb9-f80fbce0a4f4" containerID="52af6866ccd2e9e4ffb8eefdbec3025bea69f4c01c5f8fcb967c8eaf3b3e96a2" exitCode=0 Oct 11 09:13:05 crc kubenswrapper[5016]: I1011 09:13:05.369421 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qbvwr" event={"ID":"f3c13787-ba08-40c1-8bb9-f80fbce0a4f4","Type":"ContainerDied","Data":"52af6866ccd2e9e4ffb8eefdbec3025bea69f4c01c5f8fcb967c8eaf3b3e96a2"} Oct 11 09:13:05 crc kubenswrapper[5016]: I1011 09:13:05.369458 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qbvwr" event={"ID":"f3c13787-ba08-40c1-8bb9-f80fbce0a4f4","Type":"ContainerDied","Data":"33c646de1d3782e7d912e0dd66419e6871a95124eb509feb7bbb6c838a8db88e"} Oct 11 09:13:05 crc kubenswrapper[5016]: I1011 09:13:05.369505 5016 scope.go:117] "RemoveContainer" containerID="52af6866ccd2e9e4ffb8eefdbec3025bea69f4c01c5f8fcb967c8eaf3b3e96a2" Oct 11 09:13:05 crc kubenswrapper[5016]: I1011 09:13:05.369519 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qbvwr" Oct 11 09:13:05 crc kubenswrapper[5016]: I1011 09:13:05.430617 5016 scope.go:117] "RemoveContainer" containerID="1fb4329df59a5e04aed159b23244e494ffc029e72fb207bfdcc0acf2405b6a79" Oct 11 09:13:05 crc kubenswrapper[5016]: I1011 09:13:05.433560 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qbvwr"] Oct 11 09:13:05 crc kubenswrapper[5016]: I1011 09:13:05.440414 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qbvwr"] Oct 11 09:13:05 crc kubenswrapper[5016]: I1011 09:13:05.442969 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3c13787-ba08-40c1-8bb9-f80fbce0a4f4-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 09:13:05 crc kubenswrapper[5016]: I1011 09:13:05.467292 5016 scope.go:117] "RemoveContainer" containerID="a9f9323f67e558fe39cd957150b45779d820def99c411c70daf28dab512723ff" Oct 11 09:13:05 crc kubenswrapper[5016]: I1011 09:13:05.521289 5016 scope.go:117] "RemoveContainer" containerID="52af6866ccd2e9e4ffb8eefdbec3025bea69f4c01c5f8fcb967c8eaf3b3e96a2" Oct 11 09:13:05 crc kubenswrapper[5016]: E1011 09:13:05.522011 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52af6866ccd2e9e4ffb8eefdbec3025bea69f4c01c5f8fcb967c8eaf3b3e96a2\": container with ID starting with 52af6866ccd2e9e4ffb8eefdbec3025bea69f4c01c5f8fcb967c8eaf3b3e96a2 not found: ID does not exist" containerID="52af6866ccd2e9e4ffb8eefdbec3025bea69f4c01c5f8fcb967c8eaf3b3e96a2" Oct 11 09:13:05 crc kubenswrapper[5016]: I1011 09:13:05.522074 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52af6866ccd2e9e4ffb8eefdbec3025bea69f4c01c5f8fcb967c8eaf3b3e96a2"} err="failed to get container status \"52af6866ccd2e9e4ffb8eefdbec3025bea69f4c01c5f8fcb967c8eaf3b3e96a2\": rpc error: code = NotFound desc = could not find container \"52af6866ccd2e9e4ffb8eefdbec3025bea69f4c01c5f8fcb967c8eaf3b3e96a2\": container with ID starting with 52af6866ccd2e9e4ffb8eefdbec3025bea69f4c01c5f8fcb967c8eaf3b3e96a2 not found: ID does not exist" Oct 11 09:13:05 crc kubenswrapper[5016]: I1011 09:13:05.522114 5016 scope.go:117] "RemoveContainer" containerID="1fb4329df59a5e04aed159b23244e494ffc029e72fb207bfdcc0acf2405b6a79" Oct 11 09:13:05 crc kubenswrapper[5016]: E1011 09:13:05.522728 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fb4329df59a5e04aed159b23244e494ffc029e72fb207bfdcc0acf2405b6a79\": container with ID starting with 1fb4329df59a5e04aed159b23244e494ffc029e72fb207bfdcc0acf2405b6a79 not found: ID does not exist" containerID="1fb4329df59a5e04aed159b23244e494ffc029e72fb207bfdcc0acf2405b6a79" Oct 11 09:13:05 crc kubenswrapper[5016]: I1011 09:13:05.522794 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fb4329df59a5e04aed159b23244e494ffc029e72fb207bfdcc0acf2405b6a79"} err="failed to get container status \"1fb4329df59a5e04aed159b23244e494ffc029e72fb207bfdcc0acf2405b6a79\": rpc error: code = NotFound desc = could not find container \"1fb4329df59a5e04aed159b23244e494ffc029e72fb207bfdcc0acf2405b6a79\": container with ID starting with 1fb4329df59a5e04aed159b23244e494ffc029e72fb207bfdcc0acf2405b6a79 not found: ID does not exist" Oct 11 09:13:05 crc 
kubenswrapper[5016]: I1011 09:13:05.522836 5016 scope.go:117] "RemoveContainer" containerID="a9f9323f67e558fe39cd957150b45779d820def99c411c70daf28dab512723ff" Oct 11 09:13:05 crc kubenswrapper[5016]: E1011 09:13:05.523193 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9f9323f67e558fe39cd957150b45779d820def99c411c70daf28dab512723ff\": container with ID starting with a9f9323f67e558fe39cd957150b45779d820def99c411c70daf28dab512723ff not found: ID does not exist" containerID="a9f9323f67e558fe39cd957150b45779d820def99c411c70daf28dab512723ff" Oct 11 09:13:05 crc kubenswrapper[5016]: I1011 09:13:05.523238 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9f9323f67e558fe39cd957150b45779d820def99c411c70daf28dab512723ff"} err="failed to get container status \"a9f9323f67e558fe39cd957150b45779d820def99c411c70daf28dab512723ff\": rpc error: code = NotFound desc = could not find container \"a9f9323f67e558fe39cd957150b45779d820def99c411c70daf28dab512723ff\": container with ID starting with a9f9323f67e558fe39cd957150b45779d820def99c411c70daf28dab512723ff not found: ID does not exist" Oct 11 09:13:07 crc kubenswrapper[5016]: I1011 09:13:07.149808 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3c13787-ba08-40c1-8bb9-f80fbce0a4f4" path="/var/lib/kubelet/pods/f3c13787-ba08-40c1-8bb9-f80fbce0a4f4/volumes" Oct 11 09:13:35 crc kubenswrapper[5016]: I1011 09:13:35.299741 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-czvx6"] Oct 11 09:13:35 crc kubenswrapper[5016]: E1011 09:13:35.301257 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3c13787-ba08-40c1-8bb9-f80fbce0a4f4" containerName="extract-content" Oct 11 09:13:35 crc kubenswrapper[5016]: I1011 09:13:35.301276 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3c13787-ba08-40c1-8bb9-f80fbce0a4f4" containerName="extract-content" Oct 11 09:13:35 crc kubenswrapper[5016]: E1011 09:13:35.301324 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3c13787-ba08-40c1-8bb9-f80fbce0a4f4" containerName="extract-utilities" Oct 11 09:13:35 crc kubenswrapper[5016]: I1011 09:13:35.301333 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3c13787-ba08-40c1-8bb9-f80fbce0a4f4" containerName="extract-utilities" Oct 11 09:13:35 crc kubenswrapper[5016]: E1011 09:13:35.301357 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3c13787-ba08-40c1-8bb9-f80fbce0a4f4" containerName="registry-server" Oct 11 09:13:35 crc kubenswrapper[5016]: I1011 09:13:35.301364 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3c13787-ba08-40c1-8bb9-f80fbce0a4f4" containerName="registry-server" Oct 11 09:13:35 crc kubenswrapper[5016]: I1011 09:13:35.301686 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3c13787-ba08-40c1-8bb9-f80fbce0a4f4" containerName="registry-server" Oct 11 09:13:35 crc kubenswrapper[5016]: I1011 09:13:35.304827 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-czvx6" Oct 11 09:13:35 crc kubenswrapper[5016]: I1011 09:13:35.310874 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-czvx6"] Oct 11 09:13:35 crc kubenswrapper[5016]: I1011 09:13:35.353710 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68b51ee2-70f9-4f27-ae1e-4848bdceeef6-catalog-content\") pod \"redhat-marketplace-czvx6\" (UID: \"68b51ee2-70f9-4f27-ae1e-4848bdceeef6\") " pod="openshift-marketplace/redhat-marketplace-czvx6" Oct 11 09:13:35 crc kubenswrapper[5016]: I1011 09:13:35.353840 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5sn4\" (UniqueName: \"kubernetes.io/projected/68b51ee2-70f9-4f27-ae1e-4848bdceeef6-kube-api-access-r5sn4\") pod \"redhat-marketplace-czvx6\" (UID: \"68b51ee2-70f9-4f27-ae1e-4848bdceeef6\") " pod="openshift-marketplace/redhat-marketplace-czvx6" Oct 11 09:13:35 crc kubenswrapper[5016]: I1011 09:13:35.353865 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68b51ee2-70f9-4f27-ae1e-4848bdceeef6-utilities\") pod \"redhat-marketplace-czvx6\" (UID: \"68b51ee2-70f9-4f27-ae1e-4848bdceeef6\") " pod="openshift-marketplace/redhat-marketplace-czvx6" Oct 11 09:13:35 crc kubenswrapper[5016]: I1011 09:13:35.456807 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68b51ee2-70f9-4f27-ae1e-4848bdceeef6-utilities\") pod \"redhat-marketplace-czvx6\" (UID: \"68b51ee2-70f9-4f27-ae1e-4848bdceeef6\") " pod="openshift-marketplace/redhat-marketplace-czvx6" Oct 11 09:13:35 crc kubenswrapper[5016]: I1011 09:13:35.457098 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68b51ee2-70f9-4f27-ae1e-4848bdceeef6-catalog-content\") pod \"redhat-marketplace-czvx6\" (UID: \"68b51ee2-70f9-4f27-ae1e-4848bdceeef6\") " pod="openshift-marketplace/redhat-marketplace-czvx6" Oct 11 09:13:35 crc kubenswrapper[5016]: I1011 09:13:35.457285 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5sn4\" (UniqueName: \"kubernetes.io/projected/68b51ee2-70f9-4f27-ae1e-4848bdceeef6-kube-api-access-r5sn4\") pod \"redhat-marketplace-czvx6\" (UID: \"68b51ee2-70f9-4f27-ae1e-4848bdceeef6\") " pod="openshift-marketplace/redhat-marketplace-czvx6" Oct 11 09:13:35 crc kubenswrapper[5016]: I1011 09:13:35.457643 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68b51ee2-70f9-4f27-ae1e-4848bdceeef6-utilities\") pod \"redhat-marketplace-czvx6\" (UID: \"68b51ee2-70f9-4f27-ae1e-4848bdceeef6\") " pod="openshift-marketplace/redhat-marketplace-czvx6" Oct 11 09:13:35 crc kubenswrapper[5016]: I1011 09:13:35.458043 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68b51ee2-70f9-4f27-ae1e-4848bdceeef6-catalog-content\") pod \"redhat-marketplace-czvx6\" (UID: \"68b51ee2-70f9-4f27-ae1e-4848bdceeef6\") " pod="openshift-marketplace/redhat-marketplace-czvx6" Oct 11 09:13:35 crc kubenswrapper[5016]: I1011 09:13:35.481510 5016 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-r5sn4\" (UniqueName: \"kubernetes.io/projected/68b51ee2-70f9-4f27-ae1e-4848bdceeef6-kube-api-access-r5sn4\") pod \"redhat-marketplace-czvx6\" (UID: \"68b51ee2-70f9-4f27-ae1e-4848bdceeef6\") " pod="openshift-marketplace/redhat-marketplace-czvx6" Oct 11 09:13:35 crc kubenswrapper[5016]: I1011 09:13:35.634439 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-czvx6" Oct 11 09:13:36 crc kubenswrapper[5016]: I1011 09:13:36.170627 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-czvx6"] Oct 11 09:13:36 crc kubenswrapper[5016]: I1011 09:13:36.763366 5016 generic.go:334] "Generic (PLEG): container finished" podID="68b51ee2-70f9-4f27-ae1e-4848bdceeef6" containerID="a61d884683faa4fe21138e6b7a0422976e9521c5d0bbed1a0c581b0ec98e4efb" exitCode=0 Oct 11 09:13:36 crc kubenswrapper[5016]: I1011 09:13:36.763446 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-czvx6" event={"ID":"68b51ee2-70f9-4f27-ae1e-4848bdceeef6","Type":"ContainerDied","Data":"a61d884683faa4fe21138e6b7a0422976e9521c5d0bbed1a0c581b0ec98e4efb"} Oct 11 09:13:36 crc kubenswrapper[5016]: I1011 09:13:36.763849 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-czvx6" event={"ID":"68b51ee2-70f9-4f27-ae1e-4848bdceeef6","Type":"ContainerStarted","Data":"b3477f4bfe737d3a097871d6fe32de4e9d0001ab46ba2cbec958ff1b5ec46bc2"} Oct 11 09:13:37 crc kubenswrapper[5016]: I1011 09:13:37.121974 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 09:13:37 crc kubenswrapper[5016]: I1011 09:13:37.122439 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 09:13:37 crc kubenswrapper[5016]: I1011 09:13:37.786305 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-czvx6" event={"ID":"68b51ee2-70f9-4f27-ae1e-4848bdceeef6","Type":"ContainerStarted","Data":"3b0ed7ac5bc57a373d2d845435fb79d4425bd2e5a4de049aac08d42a6c55ce99"} Oct 11 09:13:38 crc kubenswrapper[5016]: I1011 09:13:38.804718 5016 generic.go:334] "Generic (PLEG): container finished" podID="68b51ee2-70f9-4f27-ae1e-4848bdceeef6" containerID="3b0ed7ac5bc57a373d2d845435fb79d4425bd2e5a4de049aac08d42a6c55ce99" exitCode=0 Oct 11 09:13:38 crc kubenswrapper[5016]: I1011 09:13:38.804785 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-czvx6" event={"ID":"68b51ee2-70f9-4f27-ae1e-4848bdceeef6","Type":"ContainerDied","Data":"3b0ed7ac5bc57a373d2d845435fb79d4425bd2e5a4de049aac08d42a6c55ce99"} Oct 11 09:13:40 crc kubenswrapper[5016]: I1011 09:13:40.833001 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-czvx6" event={"ID":"68b51ee2-70f9-4f27-ae1e-4848bdceeef6","Type":"ContainerStarted","Data":"6c55a972ca04336ba205839e6ea7dc00e38f9f3356a994bb166171756598723f"} Oct 11 09:13:40 crc 
kubenswrapper[5016]: I1011 09:13:40.867181 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-czvx6" podStartSLOduration=3.360137047 podStartE2EDuration="5.867151965s" podCreationTimestamp="2025-10-11 09:13:35 +0000 UTC" firstStartedPulling="2025-10-11 09:13:36.766050658 +0000 UTC m=+5604.666506614" lastFinishedPulling="2025-10-11 09:13:39.273065546 +0000 UTC m=+5607.173521532" observedRunningTime="2025-10-11 09:13:40.85795296 +0000 UTC m=+5608.758408916" watchObservedRunningTime="2025-10-11 09:13:40.867151965 +0000 UTC m=+5608.767607921" Oct 11 09:13:45 crc kubenswrapper[5016]: I1011 09:13:45.635521 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-czvx6" Oct 11 09:13:45 crc kubenswrapper[5016]: I1011 09:13:45.636453 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-czvx6" Oct 11 09:13:45 crc kubenswrapper[5016]: I1011 09:13:45.741251 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-czvx6" Oct 11 09:13:45 crc kubenswrapper[5016]: I1011 09:13:45.983758 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-czvx6" Oct 11 09:13:46 crc kubenswrapper[5016]: I1011 09:13:46.056528 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-czvx6"] Oct 11 09:13:47 crc kubenswrapper[5016]: I1011 09:13:47.923599 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-czvx6" podUID="68b51ee2-70f9-4f27-ae1e-4848bdceeef6" containerName="registry-server" containerID="cri-o://6c55a972ca04336ba205839e6ea7dc00e38f9f3356a994bb166171756598723f" gracePeriod=2 Oct 11 09:13:48 crc kubenswrapper[5016]: I1011 09:13:48.589978 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-czvx6" Oct 11 09:13:48 crc kubenswrapper[5016]: I1011 09:13:48.729720 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68b51ee2-70f9-4f27-ae1e-4848bdceeef6-catalog-content\") pod \"68b51ee2-70f9-4f27-ae1e-4848bdceeef6\" (UID: \"68b51ee2-70f9-4f27-ae1e-4848bdceeef6\") " Oct 11 09:13:48 crc kubenswrapper[5016]: I1011 09:13:48.730385 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68b51ee2-70f9-4f27-ae1e-4848bdceeef6-utilities\") pod \"68b51ee2-70f9-4f27-ae1e-4848bdceeef6\" (UID: \"68b51ee2-70f9-4f27-ae1e-4848bdceeef6\") " Oct 11 09:13:48 crc kubenswrapper[5016]: I1011 09:13:48.730542 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5sn4\" (UniqueName: \"kubernetes.io/projected/68b51ee2-70f9-4f27-ae1e-4848bdceeef6-kube-api-access-r5sn4\") pod \"68b51ee2-70f9-4f27-ae1e-4848bdceeef6\" (UID: \"68b51ee2-70f9-4f27-ae1e-4848bdceeef6\") " Oct 11 09:13:48 crc kubenswrapper[5016]: I1011 09:13:48.731425 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68b51ee2-70f9-4f27-ae1e-4848bdceeef6-utilities" (OuterVolumeSpecName: "utilities") pod "68b51ee2-70f9-4f27-ae1e-4848bdceeef6" (UID: "68b51ee2-70f9-4f27-ae1e-4848bdceeef6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:13:48 crc kubenswrapper[5016]: I1011 09:13:48.738300 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68b51ee2-70f9-4f27-ae1e-4848bdceeef6-kube-api-access-r5sn4" (OuterVolumeSpecName: "kube-api-access-r5sn4") pod "68b51ee2-70f9-4f27-ae1e-4848bdceeef6" (UID: "68b51ee2-70f9-4f27-ae1e-4848bdceeef6"). InnerVolumeSpecName "kube-api-access-r5sn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 09:13:48 crc kubenswrapper[5016]: I1011 09:13:48.744817 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68b51ee2-70f9-4f27-ae1e-4848bdceeef6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68b51ee2-70f9-4f27-ae1e-4848bdceeef6" (UID: "68b51ee2-70f9-4f27-ae1e-4848bdceeef6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:13:48 crc kubenswrapper[5016]: I1011 09:13:48.833156 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68b51ee2-70f9-4f27-ae1e-4848bdceeef6-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 09:13:48 crc kubenswrapper[5016]: I1011 09:13:48.833195 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5sn4\" (UniqueName: \"kubernetes.io/projected/68b51ee2-70f9-4f27-ae1e-4848bdceeef6-kube-api-access-r5sn4\") on node \"crc\" DevicePath \"\"" Oct 11 09:13:48 crc kubenswrapper[5016]: I1011 09:13:48.833207 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68b51ee2-70f9-4f27-ae1e-4848bdceeef6-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 09:13:48 crc kubenswrapper[5016]: I1011 09:13:48.943158 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-czvx6" Oct 11 09:13:48 crc kubenswrapper[5016]: I1011 09:13:48.946057 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-czvx6" event={"ID":"68b51ee2-70f9-4f27-ae1e-4848bdceeef6","Type":"ContainerDied","Data":"6c55a972ca04336ba205839e6ea7dc00e38f9f3356a994bb166171756598723f"} Oct 11 09:13:48 crc kubenswrapper[5016]: I1011 09:13:48.946391 5016 scope.go:117] "RemoveContainer" containerID="6c55a972ca04336ba205839e6ea7dc00e38f9f3356a994bb166171756598723f" Oct 11 09:13:48 crc kubenswrapper[5016]: I1011 09:13:48.943123 5016 generic.go:334] "Generic (PLEG): container finished" podID="68b51ee2-70f9-4f27-ae1e-4848bdceeef6" containerID="6c55a972ca04336ba205839e6ea7dc00e38f9f3356a994bb166171756598723f" exitCode=0 Oct 11 09:13:48 crc kubenswrapper[5016]: I1011 09:13:48.950788 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-czvx6" event={"ID":"68b51ee2-70f9-4f27-ae1e-4848bdceeef6","Type":"ContainerDied","Data":"b3477f4bfe737d3a097871d6fe32de4e9d0001ab46ba2cbec958ff1b5ec46bc2"} Oct 11 09:13:48 crc kubenswrapper[5016]: I1011 09:13:48.983284 5016 scope.go:117] "RemoveContainer" containerID="3b0ed7ac5bc57a373d2d845435fb79d4425bd2e5a4de049aac08d42a6c55ce99" Oct 11 09:13:48 crc kubenswrapper[5016]: I1011 09:13:48.991499 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-czvx6"] Oct 11 09:13:49 crc kubenswrapper[5016]: I1011 09:13:48.999684 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-czvx6"] Oct 11 09:13:49 crc kubenswrapper[5016]: I1011 09:13:49.029055 5016 scope.go:117] "RemoveContainer" containerID="a61d884683faa4fe21138e6b7a0422976e9521c5d0bbed1a0c581b0ec98e4efb" Oct 11 09:13:49 crc kubenswrapper[5016]: I1011 09:13:49.087072 5016 scope.go:117] "RemoveContainer" containerID="6c55a972ca04336ba205839e6ea7dc00e38f9f3356a994bb166171756598723f" Oct 11 09:13:49 crc kubenswrapper[5016]: E1011 09:13:49.087682 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c55a972ca04336ba205839e6ea7dc00e38f9f3356a994bb166171756598723f\": container with ID starting with 6c55a972ca04336ba205839e6ea7dc00e38f9f3356a994bb166171756598723f not found: ID does not exist" containerID="6c55a972ca04336ba205839e6ea7dc00e38f9f3356a994bb166171756598723f" Oct 11 09:13:49 crc kubenswrapper[5016]: I1011 09:13:49.087768 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c55a972ca04336ba205839e6ea7dc00e38f9f3356a994bb166171756598723f"} err="failed to get container status \"6c55a972ca04336ba205839e6ea7dc00e38f9f3356a994bb166171756598723f\": rpc error: code = NotFound desc = could not find container \"6c55a972ca04336ba205839e6ea7dc00e38f9f3356a994bb166171756598723f\": container with ID starting with 6c55a972ca04336ba205839e6ea7dc00e38f9f3356a994bb166171756598723f not found: ID does not exist" Oct 11 09:13:49 crc kubenswrapper[5016]: I1011 09:13:49.087810 5016 scope.go:117] "RemoveContainer" containerID="3b0ed7ac5bc57a373d2d845435fb79d4425bd2e5a4de049aac08d42a6c55ce99" Oct 11 09:13:49 crc kubenswrapper[5016]: E1011 09:13:49.088209 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b0ed7ac5bc57a373d2d845435fb79d4425bd2e5a4de049aac08d42a6c55ce99\": container with ID 
starting with 3b0ed7ac5bc57a373d2d845435fb79d4425bd2e5a4de049aac08d42a6c55ce99 not found: ID does not exist" containerID="3b0ed7ac5bc57a373d2d845435fb79d4425bd2e5a4de049aac08d42a6c55ce99" Oct 11 09:13:49 crc kubenswrapper[5016]: I1011 09:13:49.088252 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b0ed7ac5bc57a373d2d845435fb79d4425bd2e5a4de049aac08d42a6c55ce99"} err="failed to get container status \"3b0ed7ac5bc57a373d2d845435fb79d4425bd2e5a4de049aac08d42a6c55ce99\": rpc error: code = NotFound desc = could not find container \"3b0ed7ac5bc57a373d2d845435fb79d4425bd2e5a4de049aac08d42a6c55ce99\": container with ID starting with 3b0ed7ac5bc57a373d2d845435fb79d4425bd2e5a4de049aac08d42a6c55ce99 not found: ID does not exist" Oct 11 09:13:49 crc kubenswrapper[5016]: I1011 09:13:49.088285 5016 scope.go:117] "RemoveContainer" containerID="a61d884683faa4fe21138e6b7a0422976e9521c5d0bbed1a0c581b0ec98e4efb" Oct 11 09:13:49 crc kubenswrapper[5016]: E1011 09:13:49.088630 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a61d884683faa4fe21138e6b7a0422976e9521c5d0bbed1a0c581b0ec98e4efb\": container with ID starting with a61d884683faa4fe21138e6b7a0422976e9521c5d0bbed1a0c581b0ec98e4efb not found: ID does not exist" containerID="a61d884683faa4fe21138e6b7a0422976e9521c5d0bbed1a0c581b0ec98e4efb" Oct 11 09:13:49 crc kubenswrapper[5016]: I1011 09:13:49.088688 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a61d884683faa4fe21138e6b7a0422976e9521c5d0bbed1a0c581b0ec98e4efb"} err="failed to get container status \"a61d884683faa4fe21138e6b7a0422976e9521c5d0bbed1a0c581b0ec98e4efb\": rpc error: code = NotFound desc = could not find container \"a61d884683faa4fe21138e6b7a0422976e9521c5d0bbed1a0c581b0ec98e4efb\": container with ID starting with a61d884683faa4fe21138e6b7a0422976e9521c5d0bbed1a0c581b0ec98e4efb not found: ID does not exist" Oct 11 09:13:49 crc kubenswrapper[5016]: I1011 09:13:49.155150 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68b51ee2-70f9-4f27-ae1e-4848bdceeef6" path="/var/lib/kubelet/pods/68b51ee2-70f9-4f27-ae1e-4848bdceeef6/volumes" Oct 11 09:14:07 crc kubenswrapper[5016]: I1011 09:14:07.122614 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 09:14:07 crc kubenswrapper[5016]: I1011 09:14:07.124003 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 09:14:37 crc kubenswrapper[5016]: I1011 09:14:37.122588 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 09:14:37 crc kubenswrapper[5016]: I1011 09:14:37.123490 5016 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 09:14:37 crc kubenswrapper[5016]: I1011 09:14:37.123550 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 09:14:37 crc kubenswrapper[5016]: I1011 09:14:37.124571 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc"} pod="openshift-machine-config-operator/machine-config-daemon-49bvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 11 09:14:37 crc kubenswrapper[5016]: I1011 09:14:37.124632 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" containerID="cri-o://18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc" gracePeriod=600 Oct 11 09:14:37 crc kubenswrapper[5016]: E1011 09:14:37.252192 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:14:37 crc kubenswrapper[5016]: I1011 09:14:37.577490 5016 generic.go:334] "Generic (PLEG): container finished" podID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerID="18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc" exitCode=0 Oct 11 09:14:37 crc kubenswrapper[5016]: I1011 09:14:37.577547 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerDied","Data":"18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc"} Oct 11 09:14:37 crc kubenswrapper[5016]: I1011 09:14:37.577685 5016 scope.go:117] "RemoveContainer" containerID="fc778842f389fa8f3b854900cc0858ca0e3a8880cb30651243c8816ce3908738" Oct 11 09:14:37 crc kubenswrapper[5016]: I1011 09:14:37.578745 5016 scope.go:117] "RemoveContainer" containerID="18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc" Oct 11 09:14:37 crc kubenswrapper[5016]: E1011 09:14:37.579142 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:14:53 crc kubenswrapper[5016]: I1011 09:14:53.144884 5016 scope.go:117] "RemoveContainer" containerID="18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc" Oct 11 09:14:53 crc kubenswrapper[5016]: E1011 09:14:53.146261 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:15:00 crc kubenswrapper[5016]: I1011 09:15:00.160114 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336235-jdwf8"] Oct 11 09:15:00 crc kubenswrapper[5016]: E1011 09:15:00.161727 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b51ee2-70f9-4f27-ae1e-4848bdceeef6" containerName="registry-server" Oct 11 09:15:00 crc kubenswrapper[5016]: I1011 09:15:00.161743 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b51ee2-70f9-4f27-ae1e-4848bdceeef6" containerName="registry-server" Oct 11 09:15:00 crc kubenswrapper[5016]: E1011 09:15:00.161779 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b51ee2-70f9-4f27-ae1e-4848bdceeef6" containerName="extract-utilities" Oct 11 09:15:00 crc kubenswrapper[5016]: I1011 09:15:00.161786 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b51ee2-70f9-4f27-ae1e-4848bdceeef6" containerName="extract-utilities" Oct 11 09:15:00 crc kubenswrapper[5016]: E1011 09:15:00.161803 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b51ee2-70f9-4f27-ae1e-4848bdceeef6" containerName="extract-content" Oct 11 09:15:00 crc kubenswrapper[5016]: I1011 09:15:00.161810 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b51ee2-70f9-4f27-ae1e-4848bdceeef6" containerName="extract-content" Oct 11 09:15:00 crc kubenswrapper[5016]: I1011 09:15:00.162024 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="68b51ee2-70f9-4f27-ae1e-4848bdceeef6" containerName="registry-server" Oct 11 09:15:00 crc kubenswrapper[5016]: I1011 09:15:00.162834 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336235-jdwf8" Oct 11 09:15:00 crc kubenswrapper[5016]: I1011 09:15:00.166170 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 11 09:15:00 crc kubenswrapper[5016]: I1011 09:15:00.175282 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 11 09:15:00 crc kubenswrapper[5016]: I1011 09:15:00.180791 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336235-jdwf8"] Oct 11 09:15:00 crc kubenswrapper[5016]: I1011 09:15:00.254801 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf704980-48d5-4212-8732-4c83679347d4-secret-volume\") pod \"collect-profiles-29336235-jdwf8\" (UID: \"bf704980-48d5-4212-8732-4c83679347d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336235-jdwf8" Oct 11 09:15:00 crc kubenswrapper[5016]: I1011 09:15:00.255150 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwsgv\" (UniqueName: \"kubernetes.io/projected/bf704980-48d5-4212-8732-4c83679347d4-kube-api-access-bwsgv\") pod \"collect-profiles-29336235-jdwf8\" (UID: \"bf704980-48d5-4212-8732-4c83679347d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336235-jdwf8" Oct 11 09:15:00 crc kubenswrapper[5016]: I1011 09:15:00.255492 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf704980-48d5-4212-8732-4c83679347d4-config-volume\") pod \"collect-profiles-29336235-jdwf8\" (UID: \"bf704980-48d5-4212-8732-4c83679347d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336235-jdwf8" Oct 11 09:15:00 crc kubenswrapper[5016]: I1011 09:15:00.357594 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf704980-48d5-4212-8732-4c83679347d4-secret-volume\") pod \"collect-profiles-29336235-jdwf8\" (UID: \"bf704980-48d5-4212-8732-4c83679347d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336235-jdwf8" Oct 11 09:15:00 crc kubenswrapper[5016]: I1011 09:15:00.357712 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwsgv\" (UniqueName: \"kubernetes.io/projected/bf704980-48d5-4212-8732-4c83679347d4-kube-api-access-bwsgv\") pod \"collect-profiles-29336235-jdwf8\" (UID: \"bf704980-48d5-4212-8732-4c83679347d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336235-jdwf8" Oct 11 09:15:00 crc kubenswrapper[5016]: I1011 09:15:00.357811 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf704980-48d5-4212-8732-4c83679347d4-config-volume\") pod \"collect-profiles-29336235-jdwf8\" (UID: \"bf704980-48d5-4212-8732-4c83679347d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336235-jdwf8" Oct 11 09:15:00 crc kubenswrapper[5016]: I1011 09:15:00.362585 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf704980-48d5-4212-8732-4c83679347d4-config-volume\") pod 
\"collect-profiles-29336235-jdwf8\" (UID: \"bf704980-48d5-4212-8732-4c83679347d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336235-jdwf8" Oct 11 09:15:00 crc kubenswrapper[5016]: I1011 09:15:00.366594 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf704980-48d5-4212-8732-4c83679347d4-secret-volume\") pod \"collect-profiles-29336235-jdwf8\" (UID: \"bf704980-48d5-4212-8732-4c83679347d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336235-jdwf8" Oct 11 09:15:00 crc kubenswrapper[5016]: I1011 09:15:00.376816 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwsgv\" (UniqueName: \"kubernetes.io/projected/bf704980-48d5-4212-8732-4c83679347d4-kube-api-access-bwsgv\") pod \"collect-profiles-29336235-jdwf8\" (UID: \"bf704980-48d5-4212-8732-4c83679347d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336235-jdwf8" Oct 11 09:15:00 crc kubenswrapper[5016]: I1011 09:15:00.504582 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336235-jdwf8" Oct 11 09:15:00 crc kubenswrapper[5016]: I1011 09:15:00.816613 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336235-jdwf8"] Oct 11 09:15:00 crc kubenswrapper[5016]: I1011 09:15:00.847752 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336235-jdwf8" event={"ID":"bf704980-48d5-4212-8732-4c83679347d4","Type":"ContainerStarted","Data":"705967ad1cca1a1d9d2a8365b458944f98d50fd6614e1f8e1a91a28cd6fd5579"} Oct 11 09:15:01 crc kubenswrapper[5016]: I1011 09:15:01.859683 5016 generic.go:334] "Generic (PLEG): container finished" podID="bf704980-48d5-4212-8732-4c83679347d4" containerID="d403e123d85fdf45cd685d46ef6d03e9eaac90268e3f92e278231d2525359692" exitCode=0 Oct 11 09:15:01 crc kubenswrapper[5016]: I1011 09:15:01.859782 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336235-jdwf8" event={"ID":"bf704980-48d5-4212-8732-4c83679347d4","Type":"ContainerDied","Data":"d403e123d85fdf45cd685d46ef6d03e9eaac90268e3f92e278231d2525359692"} Oct 11 09:15:03 crc kubenswrapper[5016]: I1011 09:15:03.499720 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336235-jdwf8" Oct 11 09:15:03 crc kubenswrapper[5016]: I1011 09:15:03.530402 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwsgv\" (UniqueName: \"kubernetes.io/projected/bf704980-48d5-4212-8732-4c83679347d4-kube-api-access-bwsgv\") pod \"bf704980-48d5-4212-8732-4c83679347d4\" (UID: \"bf704980-48d5-4212-8732-4c83679347d4\") " Oct 11 09:15:03 crc kubenswrapper[5016]: I1011 09:15:03.530500 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf704980-48d5-4212-8732-4c83679347d4-secret-volume\") pod \"bf704980-48d5-4212-8732-4c83679347d4\" (UID: \"bf704980-48d5-4212-8732-4c83679347d4\") " Oct 11 09:15:03 crc kubenswrapper[5016]: I1011 09:15:03.530559 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf704980-48d5-4212-8732-4c83679347d4-config-volume\") pod \"bf704980-48d5-4212-8732-4c83679347d4\" (UID: \"bf704980-48d5-4212-8732-4c83679347d4\") " Oct 11 09:15:03 crc kubenswrapper[5016]: I1011 09:15:03.531906 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf704980-48d5-4212-8732-4c83679347d4-config-volume" (OuterVolumeSpecName: "config-volume") pod "bf704980-48d5-4212-8732-4c83679347d4" (UID: "bf704980-48d5-4212-8732-4c83679347d4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 09:15:03 crc kubenswrapper[5016]: I1011 09:15:03.538854 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf704980-48d5-4212-8732-4c83679347d4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bf704980-48d5-4212-8732-4c83679347d4" (UID: "bf704980-48d5-4212-8732-4c83679347d4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 09:15:03 crc kubenswrapper[5016]: I1011 09:15:03.538996 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf704980-48d5-4212-8732-4c83679347d4-kube-api-access-bwsgv" (OuterVolumeSpecName: "kube-api-access-bwsgv") pod "bf704980-48d5-4212-8732-4c83679347d4" (UID: "bf704980-48d5-4212-8732-4c83679347d4"). InnerVolumeSpecName "kube-api-access-bwsgv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 09:15:03 crc kubenswrapper[5016]: I1011 09:15:03.634806 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwsgv\" (UniqueName: \"kubernetes.io/projected/bf704980-48d5-4212-8732-4c83679347d4-kube-api-access-bwsgv\") on node \"crc\" DevicePath \"\"" Oct 11 09:15:03 crc kubenswrapper[5016]: I1011 09:15:03.634884 5016 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf704980-48d5-4212-8732-4c83679347d4-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 11 09:15:03 crc kubenswrapper[5016]: I1011 09:15:03.634905 5016 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf704980-48d5-4212-8732-4c83679347d4-config-volume\") on node \"crc\" DevicePath \"\"" Oct 11 09:15:03 crc kubenswrapper[5016]: I1011 09:15:03.895729 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336235-jdwf8" event={"ID":"bf704980-48d5-4212-8732-4c83679347d4","Type":"ContainerDied","Data":"705967ad1cca1a1d9d2a8365b458944f98d50fd6614e1f8e1a91a28cd6fd5579"} Oct 11 09:15:03 crc kubenswrapper[5016]: I1011 09:15:03.895825 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="705967ad1cca1a1d9d2a8365b458944f98d50fd6614e1f8e1a91a28cd6fd5579" Oct 11 09:15:03 crc kubenswrapper[5016]: I1011 09:15:03.895849 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336235-jdwf8" Oct 11 09:15:04 crc kubenswrapper[5016]: I1011 09:15:04.599827 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336190-dhjp6"] Oct 11 09:15:04 crc kubenswrapper[5016]: I1011 09:15:04.619839 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336190-dhjp6"] Oct 11 09:15:05 crc kubenswrapper[5016]: I1011 09:15:05.149461 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="393c7fe6-f77e-45c2-bd5c-c3e762983abd" path="/var/lib/kubelet/pods/393c7fe6-f77e-45c2-bd5c-c3e762983abd/volumes" Oct 11 09:15:07 crc kubenswrapper[5016]: I1011 09:15:07.132948 5016 scope.go:117] "RemoveContainer" containerID="18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc" Oct 11 09:15:07 crc kubenswrapper[5016]: E1011 09:15:07.134887 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:15:19 crc kubenswrapper[5016]: I1011 09:15:19.133885 5016 scope.go:117] "RemoveContainer" containerID="18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc" Oct 11 09:15:19 crc kubenswrapper[5016]: E1011 09:15:19.135269 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:15:34 crc kubenswrapper[5016]: I1011 09:15:34.134195 5016 scope.go:117] "RemoveContainer" containerID="18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc" Oct 11 09:15:34 crc kubenswrapper[5016]: E1011 09:15:34.136417 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:15:43 crc kubenswrapper[5016]: I1011 09:15:43.618423 5016 scope.go:117] "RemoveContainer" containerID="1218b25b36ee6b53a71dca403264d622a0e7930d4f4029997bdfc2bf598ea74e" Oct 11 09:15:45 crc kubenswrapper[5016]: I1011 09:15:45.133227 5016 scope.go:117] "RemoveContainer" containerID="18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc" Oct 11 09:15:45 crc kubenswrapper[5016]: E1011 09:15:45.134247 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:15:56 crc kubenswrapper[5016]: I1011 09:15:56.133727 5016 scope.go:117] "RemoveContainer" containerID="18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc" Oct 11 09:15:56 crc kubenswrapper[5016]: E1011 09:15:56.134807 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:16:08 crc kubenswrapper[5016]: I1011 09:16:08.134636 5016 scope.go:117] "RemoveContainer" containerID="18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc" Oct 11 09:16:08 crc kubenswrapper[5016]: E1011 09:16:08.135526 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:16:20 crc kubenswrapper[5016]: I1011 09:16:20.132996 5016 scope.go:117] "RemoveContainer" containerID="18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc" Oct 11 09:16:20 crc kubenswrapper[5016]: E1011 09:16:20.133857 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:16:31 crc kubenswrapper[5016]: I1011 09:16:31.135801 5016 scope.go:117] "RemoveContainer" containerID="18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc" Oct 11 09:16:31 crc kubenswrapper[5016]: E1011 09:16:31.136766 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:16:45 crc kubenswrapper[5016]: I1011 09:16:45.133566 5016 scope.go:117] "RemoveContainer" containerID="18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc" Oct 11 09:16:45 crc kubenswrapper[5016]: E1011 09:16:45.134583 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:16:58 crc kubenswrapper[5016]: I1011 09:16:58.134172 5016 scope.go:117] "RemoveContainer" containerID="18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc" Oct 11 09:16:58 crc kubenswrapper[5016]: E1011 09:16:58.135447 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:17:11 crc kubenswrapper[5016]: I1011 09:17:11.133829 5016 scope.go:117] "RemoveContainer" containerID="18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc" Oct 11 09:17:11 crc kubenswrapper[5016]: E1011 09:17:11.134998 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:17:23 crc kubenswrapper[5016]: I1011 09:17:23.133695 5016 scope.go:117] "RemoveContainer" containerID="18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc" Oct 11 09:17:23 crc kubenswrapper[5016]: E1011 09:17:23.135492 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:17:35 crc kubenswrapper[5016]: I1011 09:17:35.133859 5016 
scope.go:117] "RemoveContainer" containerID="18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc" Oct 11 09:17:35 crc kubenswrapper[5016]: E1011 09:17:35.134766 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:17:46 crc kubenswrapper[5016]: I1011 09:17:46.134274 5016 scope.go:117] "RemoveContainer" containerID="18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc" Oct 11 09:17:46 crc kubenswrapper[5016]: E1011 09:17:46.135126 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:17:59 crc kubenswrapper[5016]: I1011 09:17:59.133786 5016 scope.go:117] "RemoveContainer" containerID="18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc" Oct 11 09:17:59 crc kubenswrapper[5016]: E1011 09:17:59.134625 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:18:13 crc kubenswrapper[5016]: I1011 09:18:13.140492 5016 scope.go:117] "RemoveContainer" containerID="18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc" Oct 11 09:18:13 crc kubenswrapper[5016]: E1011 09:18:13.141421 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:18:27 crc kubenswrapper[5016]: I1011 09:18:27.134080 5016 scope.go:117] "RemoveContainer" containerID="18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc" Oct 11 09:18:27 crc kubenswrapper[5016]: E1011 09:18:27.135158 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:18:39 crc kubenswrapper[5016]: I1011 09:18:39.133806 5016 scope.go:117] "RemoveContainer" containerID="18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc" Oct 11 09:18:39 crc kubenswrapper[5016]: E1011 09:18:39.135204 5016 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:18:54 crc kubenswrapper[5016]: I1011 09:18:54.133890 5016 scope.go:117] "RemoveContainer" containerID="18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc" Oct 11 09:18:54 crc kubenswrapper[5016]: E1011 09:18:54.134676 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:19:07 crc kubenswrapper[5016]: I1011 09:19:07.135497 5016 scope.go:117] "RemoveContainer" containerID="18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc" Oct 11 09:19:07 crc kubenswrapper[5016]: E1011 09:19:07.136739 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:19:20 crc kubenswrapper[5016]: I1011 09:19:20.133900 5016 scope.go:117] "RemoveContainer" containerID="18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc" Oct 11 09:19:20 crc kubenswrapper[5016]: E1011 09:19:20.134843 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:19:34 crc kubenswrapper[5016]: I1011 09:19:34.134336 5016 scope.go:117] "RemoveContainer" containerID="18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc" Oct 11 09:19:34 crc kubenswrapper[5016]: E1011 09:19:34.135196 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:19:46 crc kubenswrapper[5016]: I1011 09:19:46.134771 5016 scope.go:117] "RemoveContainer" containerID="18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc" Oct 11 09:19:47 crc kubenswrapper[5016]: I1011 09:19:47.174095 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" 
event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"1c04380bd4ad79990959e8607bacf5330d9b873f45e2d0ac548f9d74bd869ba0"} Oct 11 09:20:55 crc kubenswrapper[5016]: I1011 09:20:55.203507 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q7wgf"] Oct 11 09:20:55 crc kubenswrapper[5016]: E1011 09:20:55.204529 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf704980-48d5-4212-8732-4c83679347d4" containerName="collect-profiles" Oct 11 09:20:55 crc kubenswrapper[5016]: I1011 09:20:55.204545 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf704980-48d5-4212-8732-4c83679347d4" containerName="collect-profiles" Oct 11 09:20:55 crc kubenswrapper[5016]: I1011 09:20:55.204835 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf704980-48d5-4212-8732-4c83679347d4" containerName="collect-profiles" Oct 11 09:20:55 crc kubenswrapper[5016]: I1011 09:20:55.206587 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7wgf" Oct 11 09:20:55 crc kubenswrapper[5016]: I1011 09:20:55.214008 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q7wgf"] Oct 11 09:20:55 crc kubenswrapper[5016]: I1011 09:20:55.313790 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95pmn\" (UniqueName: \"kubernetes.io/projected/af77999e-9cdb-40bb-8893-b4a83d032a70-kube-api-access-95pmn\") pod \"community-operators-q7wgf\" (UID: \"af77999e-9cdb-40bb-8893-b4a83d032a70\") " pod="openshift-marketplace/community-operators-q7wgf" Oct 11 09:20:55 crc kubenswrapper[5016]: I1011 09:20:55.313981 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af77999e-9cdb-40bb-8893-b4a83d032a70-utilities\") pod \"community-operators-q7wgf\" (UID: \"af77999e-9cdb-40bb-8893-b4a83d032a70\") " pod="openshift-marketplace/community-operators-q7wgf" Oct 11 09:20:55 crc kubenswrapper[5016]: I1011 09:20:55.314157 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af77999e-9cdb-40bb-8893-b4a83d032a70-catalog-content\") pod \"community-operators-q7wgf\" (UID: \"af77999e-9cdb-40bb-8893-b4a83d032a70\") " pod="openshift-marketplace/community-operators-q7wgf" Oct 11 09:20:55 crc kubenswrapper[5016]: I1011 09:20:55.416238 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95pmn\" (UniqueName: \"kubernetes.io/projected/af77999e-9cdb-40bb-8893-b4a83d032a70-kube-api-access-95pmn\") pod \"community-operators-q7wgf\" (UID: \"af77999e-9cdb-40bb-8893-b4a83d032a70\") " pod="openshift-marketplace/community-operators-q7wgf" Oct 11 09:20:55 crc kubenswrapper[5016]: I1011 09:20:55.416325 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af77999e-9cdb-40bb-8893-b4a83d032a70-utilities\") pod \"community-operators-q7wgf\" (UID: \"af77999e-9cdb-40bb-8893-b4a83d032a70\") " pod="openshift-marketplace/community-operators-q7wgf" Oct 11 09:20:55 crc kubenswrapper[5016]: I1011 09:20:55.416386 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/af77999e-9cdb-40bb-8893-b4a83d032a70-catalog-content\") pod \"community-operators-q7wgf\" (UID: \"af77999e-9cdb-40bb-8893-b4a83d032a70\") " pod="openshift-marketplace/community-operators-q7wgf" Oct 11 09:20:55 crc kubenswrapper[5016]: I1011 09:20:55.416947 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af77999e-9cdb-40bb-8893-b4a83d032a70-utilities\") pod \"community-operators-q7wgf\" (UID: \"af77999e-9cdb-40bb-8893-b4a83d032a70\") " pod="openshift-marketplace/community-operators-q7wgf" Oct 11 09:20:55 crc kubenswrapper[5016]: I1011 09:20:55.416973 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af77999e-9cdb-40bb-8893-b4a83d032a70-catalog-content\") pod \"community-operators-q7wgf\" (UID: \"af77999e-9cdb-40bb-8893-b4a83d032a70\") " pod="openshift-marketplace/community-operators-q7wgf" Oct 11 09:20:55 crc kubenswrapper[5016]: I1011 09:20:55.439821 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95pmn\" (UniqueName: \"kubernetes.io/projected/af77999e-9cdb-40bb-8893-b4a83d032a70-kube-api-access-95pmn\") pod \"community-operators-q7wgf\" (UID: \"af77999e-9cdb-40bb-8893-b4a83d032a70\") " pod="openshift-marketplace/community-operators-q7wgf" Oct 11 09:20:55 crc kubenswrapper[5016]: I1011 09:20:55.531288 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7wgf" Oct 11 09:20:56 crc kubenswrapper[5016]: I1011 09:20:56.074085 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q7wgf"] Oct 11 09:20:56 crc kubenswrapper[5016]: I1011 09:20:56.866633 5016 generic.go:334] "Generic (PLEG): container finished" podID="af77999e-9cdb-40bb-8893-b4a83d032a70" containerID="e7eaf37f83d6872861045106decd2a7f7d163182f198af1d0dec628cc7c21ccc" exitCode=0 Oct 11 09:20:56 crc kubenswrapper[5016]: I1011 09:20:56.866808 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7wgf" event={"ID":"af77999e-9cdb-40bb-8893-b4a83d032a70","Type":"ContainerDied","Data":"e7eaf37f83d6872861045106decd2a7f7d163182f198af1d0dec628cc7c21ccc"} Oct 11 09:20:56 crc kubenswrapper[5016]: I1011 09:20:56.867060 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7wgf" event={"ID":"af77999e-9cdb-40bb-8893-b4a83d032a70","Type":"ContainerStarted","Data":"d639f5c7fe72dd47dda2457bbfb8486e4333894be5e6991bf0c805c3dbdac26d"} Oct 11 09:20:56 crc kubenswrapper[5016]: I1011 09:20:56.870730 5016 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 09:20:58 crc kubenswrapper[5016]: I1011 09:20:58.890468 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7wgf" event={"ID":"af77999e-9cdb-40bb-8893-b4a83d032a70","Type":"ContainerStarted","Data":"65177bec87a672973fcb220300485846f65bc0f7b5695119126534baed34ddd4"} Oct 11 09:21:01 crc kubenswrapper[5016]: I1011 09:21:01.923545 5016 generic.go:334] "Generic (PLEG): container finished" podID="af77999e-9cdb-40bb-8893-b4a83d032a70" containerID="65177bec87a672973fcb220300485846f65bc0f7b5695119126534baed34ddd4" exitCode=0 Oct 11 09:21:01 crc kubenswrapper[5016]: I1011 09:21:01.923589 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-q7wgf" event={"ID":"af77999e-9cdb-40bb-8893-b4a83d032a70","Type":"ContainerDied","Data":"65177bec87a672973fcb220300485846f65bc0f7b5695119126534baed34ddd4"} Oct 11 09:21:02 crc kubenswrapper[5016]: I1011 09:21:02.941965 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7wgf" event={"ID":"af77999e-9cdb-40bb-8893-b4a83d032a70","Type":"ContainerStarted","Data":"8af0aec0a8a354f8932cdbc6f115f1d7f6ac1d8c4a33ec59af18388edcd385be"} Oct 11 09:21:02 crc kubenswrapper[5016]: I1011 09:21:02.966816 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q7wgf" podStartSLOduration=2.332904105 podStartE2EDuration="7.966789536s" podCreationTimestamp="2025-10-11 09:20:55 +0000 UTC" firstStartedPulling="2025-10-11 09:20:56.87026472 +0000 UTC m=+6044.770720676" lastFinishedPulling="2025-10-11 09:21:02.504150121 +0000 UTC m=+6050.404606107" observedRunningTime="2025-10-11 09:21:02.963247203 +0000 UTC m=+6050.863703149" watchObservedRunningTime="2025-10-11 09:21:02.966789536 +0000 UTC m=+6050.867245522" Oct 11 09:21:05 crc kubenswrapper[5016]: I1011 09:21:05.532239 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q7wgf" Oct 11 09:21:05 crc kubenswrapper[5016]: I1011 09:21:05.532594 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q7wgf" Oct 11 09:21:05 crc kubenswrapper[5016]: I1011 09:21:05.590866 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q7wgf" Oct 11 09:21:15 crc kubenswrapper[5016]: I1011 09:21:15.614338 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q7wgf" Oct 11 09:21:15 crc kubenswrapper[5016]: I1011 09:21:15.698355 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q7wgf"] Oct 11 09:21:16 crc kubenswrapper[5016]: I1011 09:21:16.070588 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q7wgf" podUID="af77999e-9cdb-40bb-8893-b4a83d032a70" containerName="registry-server" containerID="cri-o://8af0aec0a8a354f8932cdbc6f115f1d7f6ac1d8c4a33ec59af18388edcd385be" gracePeriod=2 Oct 11 09:21:16 crc kubenswrapper[5016]: I1011 09:21:16.628553 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q7wgf" Oct 11 09:21:16 crc kubenswrapper[5016]: I1011 09:21:16.790960 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95pmn\" (UniqueName: \"kubernetes.io/projected/af77999e-9cdb-40bb-8893-b4a83d032a70-kube-api-access-95pmn\") pod \"af77999e-9cdb-40bb-8893-b4a83d032a70\" (UID: \"af77999e-9cdb-40bb-8893-b4a83d032a70\") " Oct 11 09:21:16 crc kubenswrapper[5016]: I1011 09:21:16.791044 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af77999e-9cdb-40bb-8893-b4a83d032a70-catalog-content\") pod \"af77999e-9cdb-40bb-8893-b4a83d032a70\" (UID: \"af77999e-9cdb-40bb-8893-b4a83d032a70\") " Oct 11 09:21:16 crc kubenswrapper[5016]: I1011 09:21:16.791118 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af77999e-9cdb-40bb-8893-b4a83d032a70-utilities\") pod \"af77999e-9cdb-40bb-8893-b4a83d032a70\" (UID: \"af77999e-9cdb-40bb-8893-b4a83d032a70\") " Oct 11 09:21:16 crc kubenswrapper[5016]: I1011 09:21:16.792451 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af77999e-9cdb-40bb-8893-b4a83d032a70-utilities" (OuterVolumeSpecName: "utilities") pod "af77999e-9cdb-40bb-8893-b4a83d032a70" (UID: "af77999e-9cdb-40bb-8893-b4a83d032a70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:21:16 crc kubenswrapper[5016]: I1011 09:21:16.800090 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af77999e-9cdb-40bb-8893-b4a83d032a70-kube-api-access-95pmn" (OuterVolumeSpecName: "kube-api-access-95pmn") pod "af77999e-9cdb-40bb-8893-b4a83d032a70" (UID: "af77999e-9cdb-40bb-8893-b4a83d032a70"). InnerVolumeSpecName "kube-api-access-95pmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 09:21:16 crc kubenswrapper[5016]: I1011 09:21:16.877524 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af77999e-9cdb-40bb-8893-b4a83d032a70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af77999e-9cdb-40bb-8893-b4a83d032a70" (UID: "af77999e-9cdb-40bb-8893-b4a83d032a70"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:21:16 crc kubenswrapper[5016]: I1011 09:21:16.893236 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95pmn\" (UniqueName: \"kubernetes.io/projected/af77999e-9cdb-40bb-8893-b4a83d032a70-kube-api-access-95pmn\") on node \"crc\" DevicePath \"\"" Oct 11 09:21:16 crc kubenswrapper[5016]: I1011 09:21:16.893265 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af77999e-9cdb-40bb-8893-b4a83d032a70-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 09:21:16 crc kubenswrapper[5016]: I1011 09:21:16.893273 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af77999e-9cdb-40bb-8893-b4a83d032a70-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 09:21:17 crc kubenswrapper[5016]: I1011 09:21:17.083936 5016 generic.go:334] "Generic (PLEG): container finished" podID="af77999e-9cdb-40bb-8893-b4a83d032a70" containerID="8af0aec0a8a354f8932cdbc6f115f1d7f6ac1d8c4a33ec59af18388edcd385be" exitCode=0 Oct 11 09:21:17 crc kubenswrapper[5016]: I1011 09:21:17.083986 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7wgf" event={"ID":"af77999e-9cdb-40bb-8893-b4a83d032a70","Type":"ContainerDied","Data":"8af0aec0a8a354f8932cdbc6f115f1d7f6ac1d8c4a33ec59af18388edcd385be"} Oct 11 09:21:17 crc kubenswrapper[5016]: I1011 09:21:17.084036 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q7wgf" Oct 11 09:21:17 crc kubenswrapper[5016]: I1011 09:21:17.084064 5016 scope.go:117] "RemoveContainer" containerID="8af0aec0a8a354f8932cdbc6f115f1d7f6ac1d8c4a33ec59af18388edcd385be" Oct 11 09:21:17 crc kubenswrapper[5016]: I1011 09:21:17.084049 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q7wgf" event={"ID":"af77999e-9cdb-40bb-8893-b4a83d032a70","Type":"ContainerDied","Data":"d639f5c7fe72dd47dda2457bbfb8486e4333894be5e6991bf0c805c3dbdac26d"} Oct 11 09:21:17 crc kubenswrapper[5016]: I1011 09:21:17.115875 5016 scope.go:117] "RemoveContainer" containerID="65177bec87a672973fcb220300485846f65bc0f7b5695119126534baed34ddd4" Oct 11 09:21:17 crc kubenswrapper[5016]: I1011 09:21:17.152323 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q7wgf"] Oct 11 09:21:17 crc kubenswrapper[5016]: I1011 09:21:17.162013 5016 scope.go:117] "RemoveContainer" containerID="e7eaf37f83d6872861045106decd2a7f7d163182f198af1d0dec628cc7c21ccc" Oct 11 09:21:17 crc kubenswrapper[5016]: I1011 09:21:17.164212 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q7wgf"] Oct 11 09:21:17 crc kubenswrapper[5016]: I1011 09:21:17.202581 5016 scope.go:117] "RemoveContainer" containerID="8af0aec0a8a354f8932cdbc6f115f1d7f6ac1d8c4a33ec59af18388edcd385be" Oct 11 09:21:17 crc kubenswrapper[5016]: E1011 09:21:17.203216 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8af0aec0a8a354f8932cdbc6f115f1d7f6ac1d8c4a33ec59af18388edcd385be\": container with ID starting with 8af0aec0a8a354f8932cdbc6f115f1d7f6ac1d8c4a33ec59af18388edcd385be not found: ID does not exist" containerID="8af0aec0a8a354f8932cdbc6f115f1d7f6ac1d8c4a33ec59af18388edcd385be" Oct 11 09:21:17 crc kubenswrapper[5016]: I1011 09:21:17.203252 
5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8af0aec0a8a354f8932cdbc6f115f1d7f6ac1d8c4a33ec59af18388edcd385be"} err="failed to get container status \"8af0aec0a8a354f8932cdbc6f115f1d7f6ac1d8c4a33ec59af18388edcd385be\": rpc error: code = NotFound desc = could not find container \"8af0aec0a8a354f8932cdbc6f115f1d7f6ac1d8c4a33ec59af18388edcd385be\": container with ID starting with 8af0aec0a8a354f8932cdbc6f115f1d7f6ac1d8c4a33ec59af18388edcd385be not found: ID does not exist" Oct 11 09:21:17 crc kubenswrapper[5016]: I1011 09:21:17.203276 5016 scope.go:117] "RemoveContainer" containerID="65177bec87a672973fcb220300485846f65bc0f7b5695119126534baed34ddd4" Oct 11 09:21:17 crc kubenswrapper[5016]: E1011 09:21:17.203574 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65177bec87a672973fcb220300485846f65bc0f7b5695119126534baed34ddd4\": container with ID starting with 65177bec87a672973fcb220300485846f65bc0f7b5695119126534baed34ddd4 not found: ID does not exist" containerID="65177bec87a672973fcb220300485846f65bc0f7b5695119126534baed34ddd4" Oct 11 09:21:17 crc kubenswrapper[5016]: I1011 09:21:17.203598 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65177bec87a672973fcb220300485846f65bc0f7b5695119126534baed34ddd4"} err="failed to get container status \"65177bec87a672973fcb220300485846f65bc0f7b5695119126534baed34ddd4\": rpc error: code = NotFound desc = could not find container \"65177bec87a672973fcb220300485846f65bc0f7b5695119126534baed34ddd4\": container with ID starting with 65177bec87a672973fcb220300485846f65bc0f7b5695119126534baed34ddd4 not found: ID does not exist" Oct 11 09:21:17 crc kubenswrapper[5016]: I1011 09:21:17.203613 5016 scope.go:117] "RemoveContainer" containerID="e7eaf37f83d6872861045106decd2a7f7d163182f198af1d0dec628cc7c21ccc" Oct 11 09:21:17 crc kubenswrapper[5016]: E1011 09:21:17.204034 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7eaf37f83d6872861045106decd2a7f7d163182f198af1d0dec628cc7c21ccc\": container with ID starting with e7eaf37f83d6872861045106decd2a7f7d163182f198af1d0dec628cc7c21ccc not found: ID does not exist" containerID="e7eaf37f83d6872861045106decd2a7f7d163182f198af1d0dec628cc7c21ccc" Oct 11 09:21:17 crc kubenswrapper[5016]: I1011 09:21:17.204146 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7eaf37f83d6872861045106decd2a7f7d163182f198af1d0dec628cc7c21ccc"} err="failed to get container status \"e7eaf37f83d6872861045106decd2a7f7d163182f198af1d0dec628cc7c21ccc\": rpc error: code = NotFound desc = could not find container \"e7eaf37f83d6872861045106decd2a7f7d163182f198af1d0dec628cc7c21ccc\": container with ID starting with e7eaf37f83d6872861045106decd2a7f7d163182f198af1d0dec628cc7c21ccc not found: ID does not exist" Oct 11 09:21:19 crc kubenswrapper[5016]: I1011 09:21:19.147859 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af77999e-9cdb-40bb-8893-b4a83d032a70" path="/var/lib/kubelet/pods/af77999e-9cdb-40bb-8893-b4a83d032a70/volumes" Oct 11 09:22:07 crc kubenswrapper[5016]: I1011 09:22:07.122306 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 09:22:07 crc kubenswrapper[5016]: I1011 09:22:07.122989 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 09:22:09 crc kubenswrapper[5016]: I1011 09:22:09.680064 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-945nr"] Oct 11 09:22:09 crc kubenswrapper[5016]: E1011 09:22:09.680993 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af77999e-9cdb-40bb-8893-b4a83d032a70" containerName="extract-utilities" Oct 11 09:22:09 crc kubenswrapper[5016]: I1011 09:22:09.681013 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="af77999e-9cdb-40bb-8893-b4a83d032a70" containerName="extract-utilities" Oct 11 09:22:09 crc kubenswrapper[5016]: E1011 09:22:09.681044 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af77999e-9cdb-40bb-8893-b4a83d032a70" containerName="registry-server" Oct 11 09:22:09 crc kubenswrapper[5016]: I1011 09:22:09.681057 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="af77999e-9cdb-40bb-8893-b4a83d032a70" containerName="registry-server" Oct 11 09:22:09 crc kubenswrapper[5016]: E1011 09:22:09.681127 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af77999e-9cdb-40bb-8893-b4a83d032a70" containerName="extract-content" Oct 11 09:22:09 crc kubenswrapper[5016]: I1011 09:22:09.681141 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="af77999e-9cdb-40bb-8893-b4a83d032a70" containerName="extract-content" Oct 11 09:22:09 crc kubenswrapper[5016]: I1011 09:22:09.681500 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="af77999e-9cdb-40bb-8893-b4a83d032a70" containerName="registry-server" Oct 11 09:22:09 crc kubenswrapper[5016]: I1011 09:22:09.683879 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-945nr" Oct 11 09:22:09 crc kubenswrapper[5016]: I1011 09:22:09.691546 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-945nr"] Oct 11 09:22:09 crc kubenswrapper[5016]: I1011 09:22:09.739299 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5ndl\" (UniqueName: \"kubernetes.io/projected/eca2f877-785d-4914-b0a4-57309685ce07-kube-api-access-d5ndl\") pod \"certified-operators-945nr\" (UID: \"eca2f877-785d-4914-b0a4-57309685ce07\") " pod="openshift-marketplace/certified-operators-945nr" Oct 11 09:22:09 crc kubenswrapper[5016]: I1011 09:22:09.739741 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eca2f877-785d-4914-b0a4-57309685ce07-catalog-content\") pod \"certified-operators-945nr\" (UID: \"eca2f877-785d-4914-b0a4-57309685ce07\") " pod="openshift-marketplace/certified-operators-945nr" Oct 11 09:22:09 crc kubenswrapper[5016]: I1011 09:22:09.739775 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eca2f877-785d-4914-b0a4-57309685ce07-utilities\") pod \"certified-operators-945nr\" (UID: \"eca2f877-785d-4914-b0a4-57309685ce07\") " pod="openshift-marketplace/certified-operators-945nr" Oct 11 09:22:09 crc kubenswrapper[5016]: I1011 09:22:09.841552 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5ndl\" (UniqueName: \"kubernetes.io/projected/eca2f877-785d-4914-b0a4-57309685ce07-kube-api-access-d5ndl\") pod \"certified-operators-945nr\" (UID: \"eca2f877-785d-4914-b0a4-57309685ce07\") " pod="openshift-marketplace/certified-operators-945nr" Oct 11 09:22:09 crc kubenswrapper[5016]: I1011 09:22:09.841703 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eca2f877-785d-4914-b0a4-57309685ce07-catalog-content\") pod \"certified-operators-945nr\" (UID: \"eca2f877-785d-4914-b0a4-57309685ce07\") " pod="openshift-marketplace/certified-operators-945nr" Oct 11 09:22:09 crc kubenswrapper[5016]: I1011 09:22:09.841744 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eca2f877-785d-4914-b0a4-57309685ce07-utilities\") pod \"certified-operators-945nr\" (UID: \"eca2f877-785d-4914-b0a4-57309685ce07\") " pod="openshift-marketplace/certified-operators-945nr" Oct 11 09:22:09 crc kubenswrapper[5016]: I1011 09:22:09.842301 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eca2f877-785d-4914-b0a4-57309685ce07-utilities\") pod \"certified-operators-945nr\" (UID: \"eca2f877-785d-4914-b0a4-57309685ce07\") " pod="openshift-marketplace/certified-operators-945nr" Oct 11 09:22:09 crc kubenswrapper[5016]: I1011 09:22:09.843999 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eca2f877-785d-4914-b0a4-57309685ce07-catalog-content\") pod \"certified-operators-945nr\" (UID: \"eca2f877-785d-4914-b0a4-57309685ce07\") " pod="openshift-marketplace/certified-operators-945nr" Oct 11 09:22:09 crc kubenswrapper[5016]: I1011 09:22:09.873790 5016 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-d5ndl\" (UniqueName: \"kubernetes.io/projected/eca2f877-785d-4914-b0a4-57309685ce07-kube-api-access-d5ndl\") pod \"certified-operators-945nr\" (UID: \"eca2f877-785d-4914-b0a4-57309685ce07\") " pod="openshift-marketplace/certified-operators-945nr" Oct 11 09:22:10 crc kubenswrapper[5016]: I1011 09:22:10.015250 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-945nr" Oct 11 09:22:10 crc kubenswrapper[5016]: I1011 09:22:10.507506 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-945nr"] Oct 11 09:22:10 crc kubenswrapper[5016]: I1011 09:22:10.649468 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-945nr" event={"ID":"eca2f877-785d-4914-b0a4-57309685ce07","Type":"ContainerStarted","Data":"d9ba9a1fb9791de453bd09a174af9f13a936fd941f181aa5de3e8b0af55ff3b5"} Oct 11 09:22:11 crc kubenswrapper[5016]: I1011 09:22:11.664451 5016 generic.go:334] "Generic (PLEG): container finished" podID="eca2f877-785d-4914-b0a4-57309685ce07" containerID="a64158c8e0751372a18406c77d817b749a6b0e996ef5854d00f0a5d3aff761d8" exitCode=0 Oct 11 09:22:11 crc kubenswrapper[5016]: I1011 09:22:11.664607 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-945nr" event={"ID":"eca2f877-785d-4914-b0a4-57309685ce07","Type":"ContainerDied","Data":"a64158c8e0751372a18406c77d817b749a6b0e996ef5854d00f0a5d3aff761d8"} Oct 11 09:22:12 crc kubenswrapper[5016]: I1011 09:22:12.678425 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-945nr" event={"ID":"eca2f877-785d-4914-b0a4-57309685ce07","Type":"ContainerStarted","Data":"59054be0fa3e30ddd7c4452918a38fee5018d39bb25bbec242ea42b4e2828e7d"} Oct 11 09:22:14 crc kubenswrapper[5016]: I1011 09:22:14.702115 5016 generic.go:334] "Generic (PLEG): container finished" podID="eca2f877-785d-4914-b0a4-57309685ce07" containerID="59054be0fa3e30ddd7c4452918a38fee5018d39bb25bbec242ea42b4e2828e7d" exitCode=0 Oct 11 09:22:14 crc kubenswrapper[5016]: I1011 09:22:14.702426 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-945nr" event={"ID":"eca2f877-785d-4914-b0a4-57309685ce07","Type":"ContainerDied","Data":"59054be0fa3e30ddd7c4452918a38fee5018d39bb25bbec242ea42b4e2828e7d"} Oct 11 09:22:16 crc kubenswrapper[5016]: I1011 09:22:16.726291 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-945nr" event={"ID":"eca2f877-785d-4914-b0a4-57309685ce07","Type":"ContainerStarted","Data":"092e52b89cf8cceda2582c6e07150a89d2b6610989d9aa521ed18d7faf8eccf3"} Oct 11 09:22:16 crc kubenswrapper[5016]: I1011 09:22:16.757294 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-945nr" podStartSLOduration=3.751318467 podStartE2EDuration="7.757272782s" podCreationTimestamp="2025-10-11 09:22:09 +0000 UTC" firstStartedPulling="2025-10-11 09:22:11.668854744 +0000 UTC m=+6119.569310720" lastFinishedPulling="2025-10-11 09:22:15.674809049 +0000 UTC m=+6123.575265035" observedRunningTime="2025-10-11 09:22:16.748304665 +0000 UTC m=+6124.648760611" watchObservedRunningTime="2025-10-11 09:22:16.757272782 +0000 UTC m=+6124.657728728" Oct 11 09:22:20 crc kubenswrapper[5016]: I1011 09:22:20.016005 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-945nr" Oct 11 09:22:20 crc kubenswrapper[5016]: I1011 09:22:20.016920 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-945nr" Oct 11 09:22:20 crc kubenswrapper[5016]: I1011 09:22:20.106282 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-945nr" Oct 11 09:22:20 crc kubenswrapper[5016]: I1011 09:22:20.829959 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-945nr" Oct 11 09:22:20 crc kubenswrapper[5016]: I1011 09:22:20.910377 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-945nr"] Oct 11 09:22:22 crc kubenswrapper[5016]: I1011 09:22:22.798284 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-945nr" podUID="eca2f877-785d-4914-b0a4-57309685ce07" containerName="registry-server" containerID="cri-o://092e52b89cf8cceda2582c6e07150a89d2b6610989d9aa521ed18d7faf8eccf3" gracePeriod=2 Oct 11 09:22:28 crc kubenswrapper[5016]: I1011 09:22:28.909949 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="ae7b0f07-6360-46c1-8bc1-f89c5ac7a486" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Oct 11 09:22:28 crc kubenswrapper[5016]: I1011 09:22:28.910332 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="ae7b0f07-6360-46c1-8bc1-f89c5ac7a486" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Oct 11 09:22:30 crc kubenswrapper[5016]: E1011 09:22:30.016823 5016 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 092e52b89cf8cceda2582c6e07150a89d2b6610989d9aa521ed18d7faf8eccf3 is running failed: container process not found" containerID="092e52b89cf8cceda2582c6e07150a89d2b6610989d9aa521ed18d7faf8eccf3" cmd=["grpc_health_probe","-addr=:50051"] Oct 11 09:22:30 crc kubenswrapper[5016]: E1011 09:22:30.017604 5016 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 092e52b89cf8cceda2582c6e07150a89d2b6610989d9aa521ed18d7faf8eccf3 is running failed: container process not found" containerID="092e52b89cf8cceda2582c6e07150a89d2b6610989d9aa521ed18d7faf8eccf3" cmd=["grpc_health_probe","-addr=:50051"] Oct 11 09:22:30 crc kubenswrapper[5016]: E1011 09:22:30.018199 5016 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 092e52b89cf8cceda2582c6e07150a89d2b6610989d9aa521ed18d7faf8eccf3 is running failed: container process not found" containerID="092e52b89cf8cceda2582c6e07150a89d2b6610989d9aa521ed18d7faf8eccf3" cmd=["grpc_health_probe","-addr=:50051"] Oct 11 09:22:30 crc kubenswrapper[5016]: E1011 09:22:30.018254 5016 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 092e52b89cf8cceda2582c6e07150a89d2b6610989d9aa521ed18d7faf8eccf3 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-945nr" podUID="eca2f877-785d-4914-b0a4-57309685ce07" 
containerName="registry-server" Oct 11 09:22:30 crc kubenswrapper[5016]: I1011 09:22:30.654035 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="14ae562e-2b57-478f-89cd-8330105eacdf" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.159:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 11 09:22:30 crc kubenswrapper[5016]: I1011 09:22:30.716612 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-945nr_eca2f877-785d-4914-b0a4-57309685ce07/registry-server/0.log" Oct 11 09:22:30 crc kubenswrapper[5016]: I1011 09:22:30.717894 5016 generic.go:334] "Generic (PLEG): container finished" podID="eca2f877-785d-4914-b0a4-57309685ce07" containerID="092e52b89cf8cceda2582c6e07150a89d2b6610989d9aa521ed18d7faf8eccf3" exitCode=137 Oct 11 09:22:30 crc kubenswrapper[5016]: I1011 09:22:30.717959 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-945nr" event={"ID":"eca2f877-785d-4914-b0a4-57309685ce07","Type":"ContainerDied","Data":"092e52b89cf8cceda2582c6e07150a89d2b6610989d9aa521ed18d7faf8eccf3"} Oct 11 09:22:31 crc kubenswrapper[5016]: I1011 09:22:31.018977 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-945nr_eca2f877-785d-4914-b0a4-57309685ce07/registry-server/0.log" Oct 11 09:22:31 crc kubenswrapper[5016]: I1011 09:22:31.019581 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-945nr" Oct 11 09:22:31 crc kubenswrapper[5016]: I1011 09:22:31.179357 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eca2f877-785d-4914-b0a4-57309685ce07-utilities\") pod \"eca2f877-785d-4914-b0a4-57309685ce07\" (UID: \"eca2f877-785d-4914-b0a4-57309685ce07\") " Oct 11 09:22:31 crc kubenswrapper[5016]: I1011 09:22:31.180061 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eca2f877-785d-4914-b0a4-57309685ce07-catalog-content\") pod \"eca2f877-785d-4914-b0a4-57309685ce07\" (UID: \"eca2f877-785d-4914-b0a4-57309685ce07\") " Oct 11 09:22:31 crc kubenswrapper[5016]: I1011 09:22:31.180241 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5ndl\" (UniqueName: \"kubernetes.io/projected/eca2f877-785d-4914-b0a4-57309685ce07-kube-api-access-d5ndl\") pod \"eca2f877-785d-4914-b0a4-57309685ce07\" (UID: \"eca2f877-785d-4914-b0a4-57309685ce07\") " Oct 11 09:22:31 crc kubenswrapper[5016]: I1011 09:22:31.180371 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eca2f877-785d-4914-b0a4-57309685ce07-utilities" (OuterVolumeSpecName: "utilities") pod "eca2f877-785d-4914-b0a4-57309685ce07" (UID: "eca2f877-785d-4914-b0a4-57309685ce07"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:22:31 crc kubenswrapper[5016]: I1011 09:22:31.181040 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eca2f877-785d-4914-b0a4-57309685ce07-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 09:22:31 crc kubenswrapper[5016]: I1011 09:22:31.186013 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eca2f877-785d-4914-b0a4-57309685ce07-kube-api-access-d5ndl" (OuterVolumeSpecName: "kube-api-access-d5ndl") pod "eca2f877-785d-4914-b0a4-57309685ce07" (UID: "eca2f877-785d-4914-b0a4-57309685ce07"). InnerVolumeSpecName "kube-api-access-d5ndl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 09:22:31 crc kubenswrapper[5016]: I1011 09:22:31.283176 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5ndl\" (UniqueName: \"kubernetes.io/projected/eca2f877-785d-4914-b0a4-57309685ce07-kube-api-access-d5ndl\") on node \"crc\" DevicePath \"\"" Oct 11 09:22:31 crc kubenswrapper[5016]: I1011 09:22:31.527682 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eca2f877-785d-4914-b0a4-57309685ce07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eca2f877-785d-4914-b0a4-57309685ce07" (UID: "eca2f877-785d-4914-b0a4-57309685ce07"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:22:31 crc kubenswrapper[5016]: I1011 09:22:31.588898 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eca2f877-785d-4914-b0a4-57309685ce07-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 09:22:31 crc kubenswrapper[5016]: I1011 09:22:31.730232 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-945nr_eca2f877-785d-4914-b0a4-57309685ce07/registry-server/0.log" Oct 11 09:22:31 crc kubenswrapper[5016]: I1011 09:22:31.731008 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-945nr" event={"ID":"eca2f877-785d-4914-b0a4-57309685ce07","Type":"ContainerDied","Data":"d9ba9a1fb9791de453bd09a174af9f13a936fd941f181aa5de3e8b0af55ff3b5"} Oct 11 09:22:31 crc kubenswrapper[5016]: I1011 09:22:31.731055 5016 scope.go:117] "RemoveContainer" containerID="092e52b89cf8cceda2582c6e07150a89d2b6610989d9aa521ed18d7faf8eccf3" Oct 11 09:22:31 crc kubenswrapper[5016]: I1011 09:22:31.731108 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-945nr" Oct 11 09:22:31 crc kubenswrapper[5016]: I1011 09:22:31.776468 5016 scope.go:117] "RemoveContainer" containerID="59054be0fa3e30ddd7c4452918a38fee5018d39bb25bbec242ea42b4e2828e7d" Oct 11 09:22:31 crc kubenswrapper[5016]: I1011 09:22:31.783712 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-945nr"] Oct 11 09:22:31 crc kubenswrapper[5016]: I1011 09:22:31.790578 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-945nr"] Oct 11 09:22:31 crc kubenswrapper[5016]: I1011 09:22:31.806721 5016 scope.go:117] "RemoveContainer" containerID="a64158c8e0751372a18406c77d817b749a6b0e996ef5854d00f0a5d3aff761d8" Oct 11 09:22:33 crc kubenswrapper[5016]: I1011 09:22:33.143232 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eca2f877-785d-4914-b0a4-57309685ce07" path="/var/lib/kubelet/pods/eca2f877-785d-4914-b0a4-57309685ce07/volumes" Oct 11 09:22:37 crc kubenswrapper[5016]: I1011 09:22:37.122830 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 09:22:37 crc kubenswrapper[5016]: I1011 09:22:37.123994 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 09:23:07 crc kubenswrapper[5016]: I1011 09:23:07.121995 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 09:23:07 crc kubenswrapper[5016]: I1011 09:23:07.122543 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 09:23:07 crc kubenswrapper[5016]: I1011 09:23:07.122593 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 09:23:07 crc kubenswrapper[5016]: I1011 09:23:07.123459 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1c04380bd4ad79990959e8607bacf5330d9b873f45e2d0ac548f9d74bd869ba0"} pod="openshift-machine-config-operator/machine-config-daemon-49bvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 11 09:23:07 crc kubenswrapper[5016]: I1011 09:23:07.123530 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" containerID="cri-o://1c04380bd4ad79990959e8607bacf5330d9b873f45e2d0ac548f9d74bd869ba0" 
gracePeriod=600 Oct 11 09:23:08 crc kubenswrapper[5016]: I1011 09:23:08.106010 5016 generic.go:334] "Generic (PLEG): container finished" podID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerID="1c04380bd4ad79990959e8607bacf5330d9b873f45e2d0ac548f9d74bd869ba0" exitCode=0 Oct 11 09:23:08 crc kubenswrapper[5016]: I1011 09:23:08.106089 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerDied","Data":"1c04380bd4ad79990959e8607bacf5330d9b873f45e2d0ac548f9d74bd869ba0"} Oct 11 09:23:08 crc kubenswrapper[5016]: I1011 09:23:08.106597 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba"} Oct 11 09:23:08 crc kubenswrapper[5016]: I1011 09:23:08.106619 5016 scope.go:117] "RemoveContainer" containerID="18beaebbfa4c7f4192317c8b5c75e52dfcdcff8655c587752c8db882e1a5b5fc" Oct 11 09:23:31 crc kubenswrapper[5016]: I1011 09:23:31.206551 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jh7gm"] Oct 11 09:23:31 crc kubenswrapper[5016]: E1011 09:23:31.207620 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eca2f877-785d-4914-b0a4-57309685ce07" containerName="registry-server" Oct 11 09:23:31 crc kubenswrapper[5016]: I1011 09:23:31.207633 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="eca2f877-785d-4914-b0a4-57309685ce07" containerName="registry-server" Oct 11 09:23:31 crc kubenswrapper[5016]: E1011 09:23:31.207646 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eca2f877-785d-4914-b0a4-57309685ce07" containerName="extract-content" Oct 11 09:23:31 crc kubenswrapper[5016]: I1011 09:23:31.207667 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="eca2f877-785d-4914-b0a4-57309685ce07" containerName="extract-content" Oct 11 09:23:31 crc kubenswrapper[5016]: E1011 09:23:31.207701 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eca2f877-785d-4914-b0a4-57309685ce07" containerName="extract-utilities" Oct 11 09:23:31 crc kubenswrapper[5016]: I1011 09:23:31.207709 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="eca2f877-785d-4914-b0a4-57309685ce07" containerName="extract-utilities" Oct 11 09:23:31 crc kubenswrapper[5016]: I1011 09:23:31.207896 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="eca2f877-785d-4914-b0a4-57309685ce07" containerName="registry-server" Oct 11 09:23:31 crc kubenswrapper[5016]: I1011 09:23:31.209335 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jh7gm" Oct 11 09:23:31 crc kubenswrapper[5016]: I1011 09:23:31.223728 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jh7gm"] Oct 11 09:23:31 crc kubenswrapper[5016]: I1011 09:23:31.276557 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4f9z\" (UniqueName: \"kubernetes.io/projected/d0736710-803e-4532-baf7-af73d4b5b0a0-kube-api-access-h4f9z\") pod \"redhat-operators-jh7gm\" (UID: \"d0736710-803e-4532-baf7-af73d4b5b0a0\") " pod="openshift-marketplace/redhat-operators-jh7gm" Oct 11 09:23:31 crc kubenswrapper[5016]: I1011 09:23:31.276603 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0736710-803e-4532-baf7-af73d4b5b0a0-utilities\") pod \"redhat-operators-jh7gm\" (UID: \"d0736710-803e-4532-baf7-af73d4b5b0a0\") " pod="openshift-marketplace/redhat-operators-jh7gm" Oct 11 09:23:31 crc kubenswrapper[5016]: I1011 09:23:31.277107 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0736710-803e-4532-baf7-af73d4b5b0a0-catalog-content\") pod \"redhat-operators-jh7gm\" (UID: \"d0736710-803e-4532-baf7-af73d4b5b0a0\") " pod="openshift-marketplace/redhat-operators-jh7gm" Oct 11 09:23:31 crc kubenswrapper[5016]: I1011 09:23:31.379374 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0736710-803e-4532-baf7-af73d4b5b0a0-utilities\") pod \"redhat-operators-jh7gm\" (UID: \"d0736710-803e-4532-baf7-af73d4b5b0a0\") " pod="openshift-marketplace/redhat-operators-jh7gm" Oct 11 09:23:31 crc kubenswrapper[5016]: I1011 09:23:31.379419 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4f9z\" (UniqueName: \"kubernetes.io/projected/d0736710-803e-4532-baf7-af73d4b5b0a0-kube-api-access-h4f9z\") pod \"redhat-operators-jh7gm\" (UID: \"d0736710-803e-4532-baf7-af73d4b5b0a0\") " pod="openshift-marketplace/redhat-operators-jh7gm" Oct 11 09:23:31 crc kubenswrapper[5016]: I1011 09:23:31.379495 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0736710-803e-4532-baf7-af73d4b5b0a0-catalog-content\") pod \"redhat-operators-jh7gm\" (UID: \"d0736710-803e-4532-baf7-af73d4b5b0a0\") " pod="openshift-marketplace/redhat-operators-jh7gm" Oct 11 09:23:31 crc kubenswrapper[5016]: I1011 09:23:31.380213 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0736710-803e-4532-baf7-af73d4b5b0a0-catalog-content\") pod \"redhat-operators-jh7gm\" (UID: \"d0736710-803e-4532-baf7-af73d4b5b0a0\") " pod="openshift-marketplace/redhat-operators-jh7gm" Oct 11 09:23:31 crc kubenswrapper[5016]: I1011 09:23:31.380235 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0736710-803e-4532-baf7-af73d4b5b0a0-utilities\") pod \"redhat-operators-jh7gm\" (UID: \"d0736710-803e-4532-baf7-af73d4b5b0a0\") " pod="openshift-marketplace/redhat-operators-jh7gm" Oct 11 09:23:31 crc kubenswrapper[5016]: I1011 09:23:31.422374 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-h4f9z\" (UniqueName: \"kubernetes.io/projected/d0736710-803e-4532-baf7-af73d4b5b0a0-kube-api-access-h4f9z\") pod \"redhat-operators-jh7gm\" (UID: \"d0736710-803e-4532-baf7-af73d4b5b0a0\") " pod="openshift-marketplace/redhat-operators-jh7gm" Oct 11 09:23:31 crc kubenswrapper[5016]: I1011 09:23:31.533407 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jh7gm" Oct 11 09:23:32 crc kubenswrapper[5016]: I1011 09:23:32.030774 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jh7gm"] Oct 11 09:23:32 crc kubenswrapper[5016]: I1011 09:23:32.437040 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jh7gm" event={"ID":"d0736710-803e-4532-baf7-af73d4b5b0a0","Type":"ContainerStarted","Data":"0509932c01ad5a98aeb9a5be4e7e595a3d9f2e21203c2813863446f9342eb587"} Oct 11 09:23:32 crc kubenswrapper[5016]: I1011 09:23:32.437565 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jh7gm" event={"ID":"d0736710-803e-4532-baf7-af73d4b5b0a0","Type":"ContainerStarted","Data":"8359b252e90a4f43d76b813ce959edf5bdf3bac131493c25912ad765e060cca9"} Oct 11 09:23:33 crc kubenswrapper[5016]: I1011 09:23:33.456858 5016 generic.go:334] "Generic (PLEG): container finished" podID="d0736710-803e-4532-baf7-af73d4b5b0a0" containerID="0509932c01ad5a98aeb9a5be4e7e595a3d9f2e21203c2813863446f9342eb587" exitCode=0 Oct 11 09:23:33 crc kubenswrapper[5016]: I1011 09:23:33.457053 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jh7gm" event={"ID":"d0736710-803e-4532-baf7-af73d4b5b0a0","Type":"ContainerDied","Data":"0509932c01ad5a98aeb9a5be4e7e595a3d9f2e21203c2813863446f9342eb587"} Oct 11 09:23:35 crc kubenswrapper[5016]: I1011 09:23:35.478577 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jh7gm" event={"ID":"d0736710-803e-4532-baf7-af73d4b5b0a0","Type":"ContainerStarted","Data":"c5deebd7e498fccd478ec77d6ac22b5f70af0a33935683862131b76eb9eb8128"} Oct 11 09:23:40 crc kubenswrapper[5016]: I1011 09:23:40.521393 5016 generic.go:334] "Generic (PLEG): container finished" podID="d0736710-803e-4532-baf7-af73d4b5b0a0" containerID="c5deebd7e498fccd478ec77d6ac22b5f70af0a33935683862131b76eb9eb8128" exitCode=0 Oct 11 09:23:40 crc kubenswrapper[5016]: I1011 09:23:40.521457 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jh7gm" event={"ID":"d0736710-803e-4532-baf7-af73d4b5b0a0","Type":"ContainerDied","Data":"c5deebd7e498fccd478ec77d6ac22b5f70af0a33935683862131b76eb9eb8128"} Oct 11 09:23:41 crc kubenswrapper[5016]: I1011 09:23:41.534148 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jh7gm" event={"ID":"d0736710-803e-4532-baf7-af73d4b5b0a0","Type":"ContainerStarted","Data":"ffc1d2e68d55e6ff39f9e2200cd206a4651683900dd9a29805b029810aa586ed"} Oct 11 09:23:41 crc kubenswrapper[5016]: I1011 09:23:41.557580 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jh7gm" podStartSLOduration=3.103226599 podStartE2EDuration="10.557557244s" podCreationTimestamp="2025-10-11 09:23:31 +0000 UTC" firstStartedPulling="2025-10-11 09:23:33.463824973 +0000 UTC m=+6201.364280959" lastFinishedPulling="2025-10-11 09:23:40.918155658 +0000 UTC m=+6208.818611604" observedRunningTime="2025-10-11 
09:23:41.551116544 +0000 UTC m=+6209.451572510" watchObservedRunningTime="2025-10-11 09:23:41.557557244 +0000 UTC m=+6209.458013190" Oct 11 09:23:51 crc kubenswrapper[5016]: I1011 09:23:51.533624 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jh7gm" Oct 11 09:23:51 crc kubenswrapper[5016]: I1011 09:23:51.534486 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jh7gm" Oct 11 09:23:52 crc kubenswrapper[5016]: I1011 09:23:52.595080 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jh7gm" podUID="d0736710-803e-4532-baf7-af73d4b5b0a0" containerName="registry-server" probeResult="failure" output=< Oct 11 09:23:52 crc kubenswrapper[5016]: timeout: failed to connect service ":50051" within 1s Oct 11 09:23:52 crc kubenswrapper[5016]: > Oct 11 09:24:01 crc kubenswrapper[5016]: I1011 09:24:01.610528 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jh7gm" Oct 11 09:24:01 crc kubenswrapper[5016]: I1011 09:24:01.669483 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jh7gm" Oct 11 09:24:02 crc kubenswrapper[5016]: I1011 09:24:02.398111 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jh7gm"] Oct 11 09:24:02 crc kubenswrapper[5016]: I1011 09:24:02.725869 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jh7gm" podUID="d0736710-803e-4532-baf7-af73d4b5b0a0" containerName="registry-server" containerID="cri-o://ffc1d2e68d55e6ff39f9e2200cd206a4651683900dd9a29805b029810aa586ed" gracePeriod=2 Oct 11 09:24:03 crc kubenswrapper[5016]: I1011 09:24:03.398818 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jh7gm" Oct 11 09:24:03 crc kubenswrapper[5016]: I1011 09:24:03.500956 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0736710-803e-4532-baf7-af73d4b5b0a0-utilities\") pod \"d0736710-803e-4532-baf7-af73d4b5b0a0\" (UID: \"d0736710-803e-4532-baf7-af73d4b5b0a0\") " Oct 11 09:24:03 crc kubenswrapper[5016]: I1011 09:24:03.500993 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0736710-803e-4532-baf7-af73d4b5b0a0-catalog-content\") pod \"d0736710-803e-4532-baf7-af73d4b5b0a0\" (UID: \"d0736710-803e-4532-baf7-af73d4b5b0a0\") " Oct 11 09:24:03 crc kubenswrapper[5016]: I1011 09:24:03.501080 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4f9z\" (UniqueName: \"kubernetes.io/projected/d0736710-803e-4532-baf7-af73d4b5b0a0-kube-api-access-h4f9z\") pod \"d0736710-803e-4532-baf7-af73d4b5b0a0\" (UID: \"d0736710-803e-4532-baf7-af73d4b5b0a0\") " Oct 11 09:24:03 crc kubenswrapper[5016]: I1011 09:24:03.502399 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0736710-803e-4532-baf7-af73d4b5b0a0-utilities" (OuterVolumeSpecName: "utilities") pod "d0736710-803e-4532-baf7-af73d4b5b0a0" (UID: "d0736710-803e-4532-baf7-af73d4b5b0a0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:24:03 crc kubenswrapper[5016]: I1011 09:24:03.508305 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0736710-803e-4532-baf7-af73d4b5b0a0-kube-api-access-h4f9z" (OuterVolumeSpecName: "kube-api-access-h4f9z") pod "d0736710-803e-4532-baf7-af73d4b5b0a0" (UID: "d0736710-803e-4532-baf7-af73d4b5b0a0"). InnerVolumeSpecName "kube-api-access-h4f9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 09:24:03 crc kubenswrapper[5016]: I1011 09:24:03.594519 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0736710-803e-4532-baf7-af73d4b5b0a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d0736710-803e-4532-baf7-af73d4b5b0a0" (UID: "d0736710-803e-4532-baf7-af73d4b5b0a0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:24:03 crc kubenswrapper[5016]: I1011 09:24:03.603334 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0736710-803e-4532-baf7-af73d4b5b0a0-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 09:24:03 crc kubenswrapper[5016]: I1011 09:24:03.603379 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0736710-803e-4532-baf7-af73d4b5b0a0-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 09:24:03 crc kubenswrapper[5016]: I1011 09:24:03.603397 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4f9z\" (UniqueName: \"kubernetes.io/projected/d0736710-803e-4532-baf7-af73d4b5b0a0-kube-api-access-h4f9z\") on node \"crc\" DevicePath \"\"" Oct 11 09:24:03 crc kubenswrapper[5016]: I1011 09:24:03.740933 5016 generic.go:334] "Generic (PLEG): container finished" podID="d0736710-803e-4532-baf7-af73d4b5b0a0" containerID="ffc1d2e68d55e6ff39f9e2200cd206a4651683900dd9a29805b029810aa586ed" exitCode=0 Oct 11 09:24:03 crc kubenswrapper[5016]: I1011 09:24:03.740994 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jh7gm" event={"ID":"d0736710-803e-4532-baf7-af73d4b5b0a0","Type":"ContainerDied","Data":"ffc1d2e68d55e6ff39f9e2200cd206a4651683900dd9a29805b029810aa586ed"} Oct 11 09:24:03 crc kubenswrapper[5016]: I1011 09:24:03.741037 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jh7gm" event={"ID":"d0736710-803e-4532-baf7-af73d4b5b0a0","Type":"ContainerDied","Data":"8359b252e90a4f43d76b813ce959edf5bdf3bac131493c25912ad765e060cca9"} Oct 11 09:24:03 crc kubenswrapper[5016]: I1011 09:24:03.741062 5016 scope.go:117] "RemoveContainer" containerID="ffc1d2e68d55e6ff39f9e2200cd206a4651683900dd9a29805b029810aa586ed" Oct 11 09:24:03 crc kubenswrapper[5016]: I1011 09:24:03.741064 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jh7gm" Oct 11 09:24:03 crc kubenswrapper[5016]: I1011 09:24:03.774967 5016 scope.go:117] "RemoveContainer" containerID="c5deebd7e498fccd478ec77d6ac22b5f70af0a33935683862131b76eb9eb8128" Oct 11 09:24:03 crc kubenswrapper[5016]: I1011 09:24:03.792015 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jh7gm"] Oct 11 09:24:03 crc kubenswrapper[5016]: I1011 09:24:03.801859 5016 scope.go:117] "RemoveContainer" containerID="0509932c01ad5a98aeb9a5be4e7e595a3d9f2e21203c2813863446f9342eb587" Oct 11 09:24:03 crc kubenswrapper[5016]: I1011 09:24:03.817641 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jh7gm"] Oct 11 09:24:03 crc kubenswrapper[5016]: I1011 09:24:03.855341 5016 scope.go:117] "RemoveContainer" containerID="ffc1d2e68d55e6ff39f9e2200cd206a4651683900dd9a29805b029810aa586ed" Oct 11 09:24:03 crc kubenswrapper[5016]: E1011 09:24:03.855831 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffc1d2e68d55e6ff39f9e2200cd206a4651683900dd9a29805b029810aa586ed\": container with ID starting with ffc1d2e68d55e6ff39f9e2200cd206a4651683900dd9a29805b029810aa586ed not found: ID does not exist" containerID="ffc1d2e68d55e6ff39f9e2200cd206a4651683900dd9a29805b029810aa586ed" Oct 11 09:24:03 crc kubenswrapper[5016]: I1011 09:24:03.855866 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffc1d2e68d55e6ff39f9e2200cd206a4651683900dd9a29805b029810aa586ed"} err="failed to get container status \"ffc1d2e68d55e6ff39f9e2200cd206a4651683900dd9a29805b029810aa586ed\": rpc error: code = NotFound desc = could not find container \"ffc1d2e68d55e6ff39f9e2200cd206a4651683900dd9a29805b029810aa586ed\": container with ID starting with ffc1d2e68d55e6ff39f9e2200cd206a4651683900dd9a29805b029810aa586ed not found: ID does not exist" Oct 11 09:24:03 crc kubenswrapper[5016]: I1011 09:24:03.855892 5016 scope.go:117] "RemoveContainer" containerID="c5deebd7e498fccd478ec77d6ac22b5f70af0a33935683862131b76eb9eb8128" Oct 11 09:24:03 crc kubenswrapper[5016]: E1011 09:24:03.856884 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5deebd7e498fccd478ec77d6ac22b5f70af0a33935683862131b76eb9eb8128\": container with ID starting with c5deebd7e498fccd478ec77d6ac22b5f70af0a33935683862131b76eb9eb8128 not found: ID does not exist" containerID="c5deebd7e498fccd478ec77d6ac22b5f70af0a33935683862131b76eb9eb8128" Oct 11 09:24:03 crc kubenswrapper[5016]: I1011 09:24:03.856941 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5deebd7e498fccd478ec77d6ac22b5f70af0a33935683862131b76eb9eb8128"} err="failed to get container status \"c5deebd7e498fccd478ec77d6ac22b5f70af0a33935683862131b76eb9eb8128\": rpc error: code = NotFound desc = could not find container \"c5deebd7e498fccd478ec77d6ac22b5f70af0a33935683862131b76eb9eb8128\": container with ID starting with c5deebd7e498fccd478ec77d6ac22b5f70af0a33935683862131b76eb9eb8128 not found: ID does not exist" Oct 11 09:24:03 crc kubenswrapper[5016]: I1011 09:24:03.856975 5016 scope.go:117] "RemoveContainer" containerID="0509932c01ad5a98aeb9a5be4e7e595a3d9f2e21203c2813863446f9342eb587" Oct 11 09:24:03 crc kubenswrapper[5016]: E1011 09:24:03.860899 5016 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"0509932c01ad5a98aeb9a5be4e7e595a3d9f2e21203c2813863446f9342eb587\": container with ID starting with 0509932c01ad5a98aeb9a5be4e7e595a3d9f2e21203c2813863446f9342eb587 not found: ID does not exist" containerID="0509932c01ad5a98aeb9a5be4e7e595a3d9f2e21203c2813863446f9342eb587" Oct 11 09:24:03 crc kubenswrapper[5016]: I1011 09:24:03.860983 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0509932c01ad5a98aeb9a5be4e7e595a3d9f2e21203c2813863446f9342eb587"} err="failed to get container status \"0509932c01ad5a98aeb9a5be4e7e595a3d9f2e21203c2813863446f9342eb587\": rpc error: code = NotFound desc = could not find container \"0509932c01ad5a98aeb9a5be4e7e595a3d9f2e21203c2813863446f9342eb587\": container with ID starting with 0509932c01ad5a98aeb9a5be4e7e595a3d9f2e21203c2813863446f9342eb587 not found: ID does not exist" Oct 11 09:24:05 crc kubenswrapper[5016]: I1011 09:24:05.146430 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0736710-803e-4532-baf7-af73d4b5b0a0" path="/var/lib/kubelet/pods/d0736710-803e-4532-baf7-af73d4b5b0a0/volumes" Oct 11 09:24:50 crc kubenswrapper[5016]: I1011 09:24:50.523947 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8xm6g"] Oct 11 09:24:50 crc kubenswrapper[5016]: E1011 09:24:50.525282 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0736710-803e-4532-baf7-af73d4b5b0a0" containerName="registry-server" Oct 11 09:24:50 crc kubenswrapper[5016]: I1011 09:24:50.525304 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0736710-803e-4532-baf7-af73d4b5b0a0" containerName="registry-server" Oct 11 09:24:50 crc kubenswrapper[5016]: E1011 09:24:50.525338 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0736710-803e-4532-baf7-af73d4b5b0a0" containerName="extract-content" Oct 11 09:24:50 crc kubenswrapper[5016]: I1011 09:24:50.525347 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0736710-803e-4532-baf7-af73d4b5b0a0" containerName="extract-content" Oct 11 09:24:50 crc kubenswrapper[5016]: E1011 09:24:50.525400 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0736710-803e-4532-baf7-af73d4b5b0a0" containerName="extract-utilities" Oct 11 09:24:50 crc kubenswrapper[5016]: I1011 09:24:50.525409 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0736710-803e-4532-baf7-af73d4b5b0a0" containerName="extract-utilities" Oct 11 09:24:50 crc kubenswrapper[5016]: I1011 09:24:50.525847 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0736710-803e-4532-baf7-af73d4b5b0a0" containerName="registry-server" Oct 11 09:24:50 crc kubenswrapper[5016]: I1011 09:24:50.528245 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8xm6g" Oct 11 09:24:50 crc kubenswrapper[5016]: I1011 09:24:50.549435 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xm6g"] Oct 11 09:24:50 crc kubenswrapper[5016]: I1011 09:24:50.677122 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fe11347-834a-47a5-9511-0dbc90268a3c-utilities\") pod \"redhat-marketplace-8xm6g\" (UID: \"5fe11347-834a-47a5-9511-0dbc90268a3c\") " pod="openshift-marketplace/redhat-marketplace-8xm6g" Oct 11 09:24:50 crc kubenswrapper[5016]: I1011 09:24:50.677318 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fe11347-834a-47a5-9511-0dbc90268a3c-catalog-content\") pod \"redhat-marketplace-8xm6g\" (UID: \"5fe11347-834a-47a5-9511-0dbc90268a3c\") " pod="openshift-marketplace/redhat-marketplace-8xm6g" Oct 11 09:24:50 crc kubenswrapper[5016]: I1011 09:24:50.677771 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24xqc\" (UniqueName: \"kubernetes.io/projected/5fe11347-834a-47a5-9511-0dbc90268a3c-kube-api-access-24xqc\") pod \"redhat-marketplace-8xm6g\" (UID: \"5fe11347-834a-47a5-9511-0dbc90268a3c\") " pod="openshift-marketplace/redhat-marketplace-8xm6g" Oct 11 09:24:50 crc kubenswrapper[5016]: I1011 09:24:50.779589 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fe11347-834a-47a5-9511-0dbc90268a3c-utilities\") pod \"redhat-marketplace-8xm6g\" (UID: \"5fe11347-834a-47a5-9511-0dbc90268a3c\") " pod="openshift-marketplace/redhat-marketplace-8xm6g" Oct 11 09:24:50 crc kubenswrapper[5016]: I1011 09:24:50.779697 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fe11347-834a-47a5-9511-0dbc90268a3c-catalog-content\") pod \"redhat-marketplace-8xm6g\" (UID: \"5fe11347-834a-47a5-9511-0dbc90268a3c\") " pod="openshift-marketplace/redhat-marketplace-8xm6g" Oct 11 09:24:50 crc kubenswrapper[5016]: I1011 09:24:50.779803 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24xqc\" (UniqueName: \"kubernetes.io/projected/5fe11347-834a-47a5-9511-0dbc90268a3c-kube-api-access-24xqc\") pod \"redhat-marketplace-8xm6g\" (UID: \"5fe11347-834a-47a5-9511-0dbc90268a3c\") " pod="openshift-marketplace/redhat-marketplace-8xm6g" Oct 11 09:24:50 crc kubenswrapper[5016]: I1011 09:24:50.780099 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fe11347-834a-47a5-9511-0dbc90268a3c-utilities\") pod \"redhat-marketplace-8xm6g\" (UID: \"5fe11347-834a-47a5-9511-0dbc90268a3c\") " pod="openshift-marketplace/redhat-marketplace-8xm6g" Oct 11 09:24:50 crc kubenswrapper[5016]: I1011 09:24:50.780146 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fe11347-834a-47a5-9511-0dbc90268a3c-catalog-content\") pod \"redhat-marketplace-8xm6g\" (UID: \"5fe11347-834a-47a5-9511-0dbc90268a3c\") " pod="openshift-marketplace/redhat-marketplace-8xm6g" Oct 11 09:24:50 crc kubenswrapper[5016]: I1011 09:24:50.799511 5016 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-24xqc\" (UniqueName: \"kubernetes.io/projected/5fe11347-834a-47a5-9511-0dbc90268a3c-kube-api-access-24xqc\") pod \"redhat-marketplace-8xm6g\" (UID: \"5fe11347-834a-47a5-9511-0dbc90268a3c\") " pod="openshift-marketplace/redhat-marketplace-8xm6g" Oct 11 09:24:50 crc kubenswrapper[5016]: I1011 09:24:50.853365 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8xm6g" Oct 11 09:24:51 crc kubenswrapper[5016]: I1011 09:24:51.349225 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xm6g"] Oct 11 09:24:52 crc kubenswrapper[5016]: I1011 09:24:52.321475 5016 generic.go:334] "Generic (PLEG): container finished" podID="5fe11347-834a-47a5-9511-0dbc90268a3c" containerID="888ce124624d75f2641abdc4efe0b70da2cd0f2c0501976318ec983971c310e9" exitCode=0 Oct 11 09:24:52 crc kubenswrapper[5016]: I1011 09:24:52.321562 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xm6g" event={"ID":"5fe11347-834a-47a5-9511-0dbc90268a3c","Type":"ContainerDied","Data":"888ce124624d75f2641abdc4efe0b70da2cd0f2c0501976318ec983971c310e9"} Oct 11 09:24:52 crc kubenswrapper[5016]: I1011 09:24:52.322080 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xm6g" event={"ID":"5fe11347-834a-47a5-9511-0dbc90268a3c","Type":"ContainerStarted","Data":"9145f6d39c3bf285b1988ae1aa5bf25047f7d5c72f46f143e1c957b7c8641bfb"} Oct 11 09:24:53 crc kubenswrapper[5016]: I1011 09:24:53.333289 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xm6g" event={"ID":"5fe11347-834a-47a5-9511-0dbc90268a3c","Type":"ContainerStarted","Data":"2e965a33e1fe43926223daaf8e8d77c3269f28e63f7f3e52acf1f0609c78e6b6"} Oct 11 09:24:54 crc kubenswrapper[5016]: I1011 09:24:54.343096 5016 generic.go:334] "Generic (PLEG): container finished" podID="5fe11347-834a-47a5-9511-0dbc90268a3c" containerID="2e965a33e1fe43926223daaf8e8d77c3269f28e63f7f3e52acf1f0609c78e6b6" exitCode=0 Oct 11 09:24:54 crc kubenswrapper[5016]: I1011 09:24:54.343319 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xm6g" event={"ID":"5fe11347-834a-47a5-9511-0dbc90268a3c","Type":"ContainerDied","Data":"2e965a33e1fe43926223daaf8e8d77c3269f28e63f7f3e52acf1f0609c78e6b6"} Oct 11 09:24:55 crc kubenswrapper[5016]: I1011 09:24:55.355223 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xm6g" event={"ID":"5fe11347-834a-47a5-9511-0dbc90268a3c","Type":"ContainerStarted","Data":"b0c4670d896d258eda50fecfbfcb4177c4722aed4f0f12453641431e93cc7828"} Oct 11 09:24:55 crc kubenswrapper[5016]: I1011 09:24:55.382488 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8xm6g" podStartSLOduration=2.992298343 podStartE2EDuration="5.382459671s" podCreationTimestamp="2025-10-11 09:24:50 +0000 UTC" firstStartedPulling="2025-10-11 09:24:52.324391084 +0000 UTC m=+6280.224847030" lastFinishedPulling="2025-10-11 09:24:54.714552412 +0000 UTC m=+6282.615008358" observedRunningTime="2025-10-11 09:24:55.376405681 +0000 UTC m=+6283.276861647" watchObservedRunningTime="2025-10-11 09:24:55.382459671 +0000 UTC m=+6283.282915627" Oct 11 09:25:00 crc kubenswrapper[5016]: I1011 09:25:00.854324 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-8xm6g" Oct 11 09:25:00 crc kubenswrapper[5016]: I1011 09:25:00.855189 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8xm6g" Oct 11 09:25:00 crc kubenswrapper[5016]: I1011 09:25:00.927928 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8xm6g" Oct 11 09:25:01 crc kubenswrapper[5016]: I1011 09:25:01.494630 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8xm6g" Oct 11 09:25:01 crc kubenswrapper[5016]: I1011 09:25:01.562078 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xm6g"] Oct 11 09:25:03 crc kubenswrapper[5016]: I1011 09:25:03.463363 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8xm6g" podUID="5fe11347-834a-47a5-9511-0dbc90268a3c" containerName="registry-server" containerID="cri-o://b0c4670d896d258eda50fecfbfcb4177c4722aed4f0f12453641431e93cc7828" gracePeriod=2 Oct 11 09:25:04 crc kubenswrapper[5016]: I1011 09:25:04.087160 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8xm6g" Oct 11 09:25:04 crc kubenswrapper[5016]: I1011 09:25:04.254165 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fe11347-834a-47a5-9511-0dbc90268a3c-utilities\") pod \"5fe11347-834a-47a5-9511-0dbc90268a3c\" (UID: \"5fe11347-834a-47a5-9511-0dbc90268a3c\") " Oct 11 09:25:04 crc kubenswrapper[5016]: I1011 09:25:04.255116 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24xqc\" (UniqueName: \"kubernetes.io/projected/5fe11347-834a-47a5-9511-0dbc90268a3c-kube-api-access-24xqc\") pod \"5fe11347-834a-47a5-9511-0dbc90268a3c\" (UID: \"5fe11347-834a-47a5-9511-0dbc90268a3c\") " Oct 11 09:25:04 crc kubenswrapper[5016]: I1011 09:25:04.255191 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fe11347-834a-47a5-9511-0dbc90268a3c-utilities" (OuterVolumeSpecName: "utilities") pod "5fe11347-834a-47a5-9511-0dbc90268a3c" (UID: "5fe11347-834a-47a5-9511-0dbc90268a3c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:25:04 crc kubenswrapper[5016]: I1011 09:25:04.256454 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fe11347-834a-47a5-9511-0dbc90268a3c-catalog-content\") pod \"5fe11347-834a-47a5-9511-0dbc90268a3c\" (UID: \"5fe11347-834a-47a5-9511-0dbc90268a3c\") " Oct 11 09:25:04 crc kubenswrapper[5016]: I1011 09:25:04.258614 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fe11347-834a-47a5-9511-0dbc90268a3c-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 09:25:04 crc kubenswrapper[5016]: I1011 09:25:04.261470 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe11347-834a-47a5-9511-0dbc90268a3c-kube-api-access-24xqc" (OuterVolumeSpecName: "kube-api-access-24xqc") pod "5fe11347-834a-47a5-9511-0dbc90268a3c" (UID: "5fe11347-834a-47a5-9511-0dbc90268a3c"). InnerVolumeSpecName "kube-api-access-24xqc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 09:25:04 crc kubenswrapper[5016]: I1011 09:25:04.288485 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fe11347-834a-47a5-9511-0dbc90268a3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5fe11347-834a-47a5-9511-0dbc90268a3c" (UID: "5fe11347-834a-47a5-9511-0dbc90268a3c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:25:04 crc kubenswrapper[5016]: I1011 09:25:04.359728 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24xqc\" (UniqueName: \"kubernetes.io/projected/5fe11347-834a-47a5-9511-0dbc90268a3c-kube-api-access-24xqc\") on node \"crc\" DevicePath \"\"" Oct 11 09:25:04 crc kubenswrapper[5016]: I1011 09:25:04.359764 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fe11347-834a-47a5-9511-0dbc90268a3c-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 09:25:04 crc kubenswrapper[5016]: I1011 09:25:04.480019 5016 generic.go:334] "Generic (PLEG): container finished" podID="5fe11347-834a-47a5-9511-0dbc90268a3c" containerID="b0c4670d896d258eda50fecfbfcb4177c4722aed4f0f12453641431e93cc7828" exitCode=0 Oct 11 09:25:04 crc kubenswrapper[5016]: I1011 09:25:04.480064 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xm6g" event={"ID":"5fe11347-834a-47a5-9511-0dbc90268a3c","Type":"ContainerDied","Data":"b0c4670d896d258eda50fecfbfcb4177c4722aed4f0f12453641431e93cc7828"} Oct 11 09:25:04 crc kubenswrapper[5016]: I1011 09:25:04.480092 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xm6g" event={"ID":"5fe11347-834a-47a5-9511-0dbc90268a3c","Type":"ContainerDied","Data":"9145f6d39c3bf285b1988ae1aa5bf25047f7d5c72f46f143e1c957b7c8641bfb"} Oct 11 09:25:04 crc kubenswrapper[5016]: I1011 09:25:04.480109 5016 scope.go:117] "RemoveContainer" containerID="b0c4670d896d258eda50fecfbfcb4177c4722aed4f0f12453641431e93cc7828" Oct 11 09:25:04 crc kubenswrapper[5016]: I1011 09:25:04.480251 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8xm6g" Oct 11 09:25:04 crc kubenswrapper[5016]: I1011 09:25:04.520881 5016 scope.go:117] "RemoveContainer" containerID="2e965a33e1fe43926223daaf8e8d77c3269f28e63f7f3e52acf1f0609c78e6b6" Oct 11 09:25:04 crc kubenswrapper[5016]: I1011 09:25:04.527483 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xm6g"] Oct 11 09:25:04 crc kubenswrapper[5016]: I1011 09:25:04.538095 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xm6g"] Oct 11 09:25:04 crc kubenswrapper[5016]: I1011 09:25:04.545460 5016 scope.go:117] "RemoveContainer" containerID="888ce124624d75f2641abdc4efe0b70da2cd0f2c0501976318ec983971c310e9" Oct 11 09:25:04 crc kubenswrapper[5016]: I1011 09:25:04.589944 5016 scope.go:117] "RemoveContainer" containerID="b0c4670d896d258eda50fecfbfcb4177c4722aed4f0f12453641431e93cc7828" Oct 11 09:25:04 crc kubenswrapper[5016]: E1011 09:25:04.591757 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0c4670d896d258eda50fecfbfcb4177c4722aed4f0f12453641431e93cc7828\": container with ID starting with b0c4670d896d258eda50fecfbfcb4177c4722aed4f0f12453641431e93cc7828 not found: ID does not exist" containerID="b0c4670d896d258eda50fecfbfcb4177c4722aed4f0f12453641431e93cc7828" Oct 11 09:25:04 crc kubenswrapper[5016]: I1011 09:25:04.591851 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0c4670d896d258eda50fecfbfcb4177c4722aed4f0f12453641431e93cc7828"} err="failed to get container status \"b0c4670d896d258eda50fecfbfcb4177c4722aed4f0f12453641431e93cc7828\": rpc error: code = NotFound desc = could not find container \"b0c4670d896d258eda50fecfbfcb4177c4722aed4f0f12453641431e93cc7828\": container with ID starting with b0c4670d896d258eda50fecfbfcb4177c4722aed4f0f12453641431e93cc7828 not found: ID does not exist" Oct 11 09:25:04 crc kubenswrapper[5016]: I1011 09:25:04.591906 5016 scope.go:117] "RemoveContainer" containerID="2e965a33e1fe43926223daaf8e8d77c3269f28e63f7f3e52acf1f0609c78e6b6" Oct 11 09:25:04 crc kubenswrapper[5016]: E1011 09:25:04.592431 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e965a33e1fe43926223daaf8e8d77c3269f28e63f7f3e52acf1f0609c78e6b6\": container with ID starting with 2e965a33e1fe43926223daaf8e8d77c3269f28e63f7f3e52acf1f0609c78e6b6 not found: ID does not exist" containerID="2e965a33e1fe43926223daaf8e8d77c3269f28e63f7f3e52acf1f0609c78e6b6" Oct 11 09:25:04 crc kubenswrapper[5016]: I1011 09:25:04.592518 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e965a33e1fe43926223daaf8e8d77c3269f28e63f7f3e52acf1f0609c78e6b6"} err="failed to get container status \"2e965a33e1fe43926223daaf8e8d77c3269f28e63f7f3e52acf1f0609c78e6b6\": rpc error: code = NotFound desc = could not find container \"2e965a33e1fe43926223daaf8e8d77c3269f28e63f7f3e52acf1f0609c78e6b6\": container with ID starting with 2e965a33e1fe43926223daaf8e8d77c3269f28e63f7f3e52acf1f0609c78e6b6 not found: ID does not exist" Oct 11 09:25:04 crc kubenswrapper[5016]: I1011 09:25:04.592562 5016 scope.go:117] "RemoveContainer" containerID="888ce124624d75f2641abdc4efe0b70da2cd0f2c0501976318ec983971c310e9" Oct 11 09:25:04 crc kubenswrapper[5016]: E1011 09:25:04.593221 5016 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"888ce124624d75f2641abdc4efe0b70da2cd0f2c0501976318ec983971c310e9\": container with ID starting with 888ce124624d75f2641abdc4efe0b70da2cd0f2c0501976318ec983971c310e9 not found: ID does not exist" containerID="888ce124624d75f2641abdc4efe0b70da2cd0f2c0501976318ec983971c310e9" Oct 11 09:25:04 crc kubenswrapper[5016]: I1011 09:25:04.593252 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"888ce124624d75f2641abdc4efe0b70da2cd0f2c0501976318ec983971c310e9"} err="failed to get container status \"888ce124624d75f2641abdc4efe0b70da2cd0f2c0501976318ec983971c310e9\": rpc error: code = NotFound desc = could not find container \"888ce124624d75f2641abdc4efe0b70da2cd0f2c0501976318ec983971c310e9\": container with ID starting with 888ce124624d75f2641abdc4efe0b70da2cd0f2c0501976318ec983971c310e9 not found: ID does not exist" Oct 11 09:25:05 crc kubenswrapper[5016]: I1011 09:25:05.147846 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe11347-834a-47a5-9511-0dbc90268a3c" path="/var/lib/kubelet/pods/5fe11347-834a-47a5-9511-0dbc90268a3c/volumes" Oct 11 09:25:07 crc kubenswrapper[5016]: I1011 09:25:07.122292 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 09:25:07 crc kubenswrapper[5016]: I1011 09:25:07.123005 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 09:25:37 crc kubenswrapper[5016]: I1011 09:25:37.121702 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 09:25:37 crc kubenswrapper[5016]: I1011 09:25:37.122271 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 09:26:07 crc kubenswrapper[5016]: I1011 09:26:07.122942 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 09:26:07 crc kubenswrapper[5016]: I1011 09:26:07.123404 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 09:26:07 crc kubenswrapper[5016]: I1011 09:26:07.123452 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 09:26:07 crc kubenswrapper[5016]: I1011 09:26:07.124278 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba"} pod="openshift-machine-config-operator/machine-config-daemon-49bvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 11 09:26:07 crc kubenswrapper[5016]: I1011 09:26:07.124346 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" containerID="cri-o://a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba" gracePeriod=600 Oct 11 09:26:07 crc kubenswrapper[5016]: E1011 09:26:07.286038 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:26:08 crc kubenswrapper[5016]: I1011 09:26:08.198365 5016 generic.go:334] "Generic (PLEG): container finished" podID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerID="a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba" exitCode=0 Oct 11 09:26:08 crc kubenswrapper[5016]: I1011 09:26:08.198470 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerDied","Data":"a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba"} Oct 11 09:26:08 crc kubenswrapper[5016]: I1011 09:26:08.198736 5016 scope.go:117] "RemoveContainer" containerID="1c04380bd4ad79990959e8607bacf5330d9b873f45e2d0ac548f9d74bd869ba0" Oct 11 09:26:08 crc kubenswrapper[5016]: I1011 09:26:08.199993 5016 scope.go:117] "RemoveContainer" containerID="a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba" Oct 11 09:26:08 crc kubenswrapper[5016]: E1011 09:26:08.200470 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:26:21 crc kubenswrapper[5016]: I1011 09:26:21.134704 5016 scope.go:117] "RemoveContainer" containerID="a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba" Oct 11 09:26:21 crc kubenswrapper[5016]: E1011 09:26:21.136131 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:26:35 crc 
kubenswrapper[5016]: I1011 09:26:35.136346 5016 scope.go:117] "RemoveContainer" containerID="a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba"
Oct 11 09:26:35 crc kubenswrapper[5016]: E1011 09:26:35.137972 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 09:26:50 crc kubenswrapper[5016]: I1011 09:26:50.133350 5016 scope.go:117] "RemoveContainer" containerID="a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba"
Oct 11 09:26:50 crc kubenswrapper[5016]: E1011 09:26:50.134501 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 09:27:01 crc kubenswrapper[5016]: I1011 09:27:01.133425 5016 scope.go:117] "RemoveContainer" containerID="a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba"
Oct 11 09:27:01 crc kubenswrapper[5016]: E1011 09:27:01.134591 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 09:27:14 crc kubenswrapper[5016]: I1011 09:27:14.133914 5016 scope.go:117] "RemoveContainer" containerID="a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba"
Oct 11 09:27:14 crc kubenswrapper[5016]: E1011 09:27:14.134683 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 09:27:25 crc kubenswrapper[5016]: I1011 09:27:25.133611 5016 scope.go:117] "RemoveContainer" containerID="a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba"
Oct 11 09:27:25 crc kubenswrapper[5016]: E1011 09:27:25.135262 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 09:27:40 crc kubenswrapper[5016]: I1011 09:27:40.133202 5016 scope.go:117] "RemoveContainer" containerID="a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba"
Oct 11 09:27:40 crc kubenswrapper[5016]: E1011 09:27:40.134253 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 09:27:53 crc kubenswrapper[5016]: I1011 09:27:53.134501 5016 scope.go:117] "RemoveContainer" containerID="a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba"
Oct 11 09:27:53 crc kubenswrapper[5016]: E1011 09:27:53.135563 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 09:28:07 crc kubenswrapper[5016]: I1011 09:28:07.133508 5016 scope.go:117] "RemoveContainer" containerID="a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba"
Oct 11 09:28:07 crc kubenswrapper[5016]: E1011 09:28:07.134790 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 09:28:19 crc kubenswrapper[5016]: I1011 09:28:19.134183 5016 scope.go:117] "RemoveContainer" containerID="a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba"
Oct 11 09:28:19 crc kubenswrapper[5016]: E1011 09:28:19.135476 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 09:28:30 crc kubenswrapper[5016]: I1011 09:28:30.134370 5016 scope.go:117] "RemoveContainer" containerID="a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba"
Oct 11 09:28:30 crc kubenswrapper[5016]: E1011 09:28:30.135378 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 09:28:43 crc kubenswrapper[5016]: I1011 09:28:43.147004 5016 scope.go:117] "RemoveContainer" containerID="a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba"
Oct 11 09:28:43 crc kubenswrapper[5016]: E1011 09:28:43.149024 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 09:28:57 crc kubenswrapper[5016]: I1011 09:28:57.134518 5016 scope.go:117] "RemoveContainer" containerID="a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba"
Oct 11 09:28:57 crc kubenswrapper[5016]: E1011 09:28:57.136043 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 09:29:08 crc kubenswrapper[5016]: I1011 09:29:08.133344 5016 scope.go:117] "RemoveContainer" containerID="a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba"
Oct 11 09:29:08 crc kubenswrapper[5016]: E1011 09:29:08.134231 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 09:29:21 crc kubenswrapper[5016]: I1011 09:29:21.134052 5016 scope.go:117] "RemoveContainer" containerID="a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba"
Oct 11 09:29:21 crc kubenswrapper[5016]: E1011 09:29:21.134866 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 09:29:33 crc kubenswrapper[5016]: I1011 09:29:33.160232 5016 scope.go:117] "RemoveContainer" containerID="a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba"
Oct 11 09:29:33 crc kubenswrapper[5016]: E1011 09:29:33.162419 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 09:29:47 crc kubenswrapper[5016]: I1011 09:29:47.133765 5016 scope.go:117] "RemoveContainer" containerID="a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba"
pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:29:59 crc kubenswrapper[5016]: I1011 09:29:59.133806 5016 scope.go:117] "RemoveContainer" containerID="a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba" Oct 11 09:29:59 crc kubenswrapper[5016]: E1011 09:29:59.134573 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:30:00 crc kubenswrapper[5016]: I1011 09:30:00.209278 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336250-ws852"] Oct 11 09:30:00 crc kubenswrapper[5016]: E1011 09:30:00.209989 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fe11347-834a-47a5-9511-0dbc90268a3c" containerName="registry-server" Oct 11 09:30:00 crc kubenswrapper[5016]: I1011 09:30:00.210001 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe11347-834a-47a5-9511-0dbc90268a3c" containerName="registry-server" Oct 11 09:30:00 crc kubenswrapper[5016]: E1011 09:30:00.210027 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fe11347-834a-47a5-9511-0dbc90268a3c" containerName="extract-content" Oct 11 09:30:00 crc kubenswrapper[5016]: I1011 09:30:00.210033 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe11347-834a-47a5-9511-0dbc90268a3c" containerName="extract-content" Oct 11 09:30:00 crc kubenswrapper[5016]: E1011 09:30:00.210054 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fe11347-834a-47a5-9511-0dbc90268a3c" containerName="extract-utilities" Oct 11 09:30:00 crc kubenswrapper[5016]: I1011 09:30:00.210061 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe11347-834a-47a5-9511-0dbc90268a3c" containerName="extract-utilities" Oct 11 09:30:00 crc kubenswrapper[5016]: I1011 09:30:00.210228 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fe11347-834a-47a5-9511-0dbc90268a3c" containerName="registry-server" Oct 11 09:30:00 crc kubenswrapper[5016]: I1011 09:30:00.210861 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336250-ws852" Oct 11 09:30:00 crc kubenswrapper[5016]: I1011 09:30:00.213125 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 11 09:30:00 crc kubenswrapper[5016]: I1011 09:30:00.214357 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 11 09:30:00 crc kubenswrapper[5016]: I1011 09:30:00.222335 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336250-ws852"] Oct 11 09:30:00 crc kubenswrapper[5016]: I1011 09:30:00.279211 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/579f286a-81ab-48a0-97f0-49602ad68d3e-config-volume\") pod \"collect-profiles-29336250-ws852\" (UID: \"579f286a-81ab-48a0-97f0-49602ad68d3e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336250-ws852" Oct 11 09:30:00 crc kubenswrapper[5016]: I1011 09:30:00.279325 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/579f286a-81ab-48a0-97f0-49602ad68d3e-secret-volume\") pod \"collect-profiles-29336250-ws852\" (UID: \"579f286a-81ab-48a0-97f0-49602ad68d3e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336250-ws852" Oct 11 09:30:00 crc kubenswrapper[5016]: I1011 09:30:00.279518 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk9t2\" (UniqueName: \"kubernetes.io/projected/579f286a-81ab-48a0-97f0-49602ad68d3e-kube-api-access-fk9t2\") pod \"collect-profiles-29336250-ws852\" (UID: \"579f286a-81ab-48a0-97f0-49602ad68d3e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336250-ws852" Oct 11 09:30:00 crc kubenswrapper[5016]: I1011 09:30:00.381352 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/579f286a-81ab-48a0-97f0-49602ad68d3e-config-volume\") pod \"collect-profiles-29336250-ws852\" (UID: \"579f286a-81ab-48a0-97f0-49602ad68d3e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336250-ws852" Oct 11 09:30:00 crc kubenswrapper[5016]: I1011 09:30:00.381446 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/579f286a-81ab-48a0-97f0-49602ad68d3e-secret-volume\") pod \"collect-profiles-29336250-ws852\" (UID: \"579f286a-81ab-48a0-97f0-49602ad68d3e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336250-ws852" Oct 11 09:30:00 crc kubenswrapper[5016]: I1011 09:30:00.381576 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk9t2\" (UniqueName: \"kubernetes.io/projected/579f286a-81ab-48a0-97f0-49602ad68d3e-kube-api-access-fk9t2\") pod \"collect-profiles-29336250-ws852\" (UID: \"579f286a-81ab-48a0-97f0-49602ad68d3e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336250-ws852" Oct 11 09:30:00 crc kubenswrapper[5016]: I1011 09:30:00.382556 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/579f286a-81ab-48a0-97f0-49602ad68d3e-config-volume\") pod 
\"collect-profiles-29336250-ws852\" (UID: \"579f286a-81ab-48a0-97f0-49602ad68d3e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336250-ws852" Oct 11 09:30:00 crc kubenswrapper[5016]: I1011 09:30:00.387426 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/579f286a-81ab-48a0-97f0-49602ad68d3e-secret-volume\") pod \"collect-profiles-29336250-ws852\" (UID: \"579f286a-81ab-48a0-97f0-49602ad68d3e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336250-ws852" Oct 11 09:30:00 crc kubenswrapper[5016]: I1011 09:30:00.400130 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk9t2\" (UniqueName: \"kubernetes.io/projected/579f286a-81ab-48a0-97f0-49602ad68d3e-kube-api-access-fk9t2\") pod \"collect-profiles-29336250-ws852\" (UID: \"579f286a-81ab-48a0-97f0-49602ad68d3e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336250-ws852" Oct 11 09:30:00 crc kubenswrapper[5016]: I1011 09:30:00.552119 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336250-ws852" Oct 11 09:30:01 crc kubenswrapper[5016]: I1011 09:30:01.131509 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336250-ws852"] Oct 11 09:30:01 crc kubenswrapper[5016]: I1011 09:30:01.836443 5016 generic.go:334] "Generic (PLEG): container finished" podID="579f286a-81ab-48a0-97f0-49602ad68d3e" containerID="ead501218da37a38110574b63f473f3be76915f4edf2eb9b1bdc2b7f27c6b0cd" exitCode=0 Oct 11 09:30:01 crc kubenswrapper[5016]: I1011 09:30:01.836516 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336250-ws852" event={"ID":"579f286a-81ab-48a0-97f0-49602ad68d3e","Type":"ContainerDied","Data":"ead501218da37a38110574b63f473f3be76915f4edf2eb9b1bdc2b7f27c6b0cd"} Oct 11 09:30:01 crc kubenswrapper[5016]: I1011 09:30:01.836778 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336250-ws852" event={"ID":"579f286a-81ab-48a0-97f0-49602ad68d3e","Type":"ContainerStarted","Data":"b893a023f172f0755747e31a50ac97887500549ee6b2726cf4a8eb1b945df39f"} Oct 11 09:30:03 crc kubenswrapper[5016]: I1011 09:30:03.303959 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336250-ws852" Oct 11 09:30:03 crc kubenswrapper[5016]: I1011 09:30:03.449797 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk9t2\" (UniqueName: \"kubernetes.io/projected/579f286a-81ab-48a0-97f0-49602ad68d3e-kube-api-access-fk9t2\") pod \"579f286a-81ab-48a0-97f0-49602ad68d3e\" (UID: \"579f286a-81ab-48a0-97f0-49602ad68d3e\") " Oct 11 09:30:03 crc kubenswrapper[5016]: I1011 09:30:03.450231 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/579f286a-81ab-48a0-97f0-49602ad68d3e-secret-volume\") pod \"579f286a-81ab-48a0-97f0-49602ad68d3e\" (UID: \"579f286a-81ab-48a0-97f0-49602ad68d3e\") " Oct 11 09:30:03 crc kubenswrapper[5016]: I1011 09:30:03.450334 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/579f286a-81ab-48a0-97f0-49602ad68d3e-config-volume\") pod \"579f286a-81ab-48a0-97f0-49602ad68d3e\" (UID: \"579f286a-81ab-48a0-97f0-49602ad68d3e\") " Oct 11 09:30:03 crc kubenswrapper[5016]: I1011 09:30:03.451669 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/579f286a-81ab-48a0-97f0-49602ad68d3e-config-volume" (OuterVolumeSpecName: "config-volume") pod "579f286a-81ab-48a0-97f0-49602ad68d3e" (UID: "579f286a-81ab-48a0-97f0-49602ad68d3e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 09:30:03 crc kubenswrapper[5016]: I1011 09:30:03.457514 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/579f286a-81ab-48a0-97f0-49602ad68d3e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "579f286a-81ab-48a0-97f0-49602ad68d3e" (UID: "579f286a-81ab-48a0-97f0-49602ad68d3e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 09:30:03 crc kubenswrapper[5016]: I1011 09:30:03.463570 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/579f286a-81ab-48a0-97f0-49602ad68d3e-kube-api-access-fk9t2" (OuterVolumeSpecName: "kube-api-access-fk9t2") pod "579f286a-81ab-48a0-97f0-49602ad68d3e" (UID: "579f286a-81ab-48a0-97f0-49602ad68d3e"). InnerVolumeSpecName "kube-api-access-fk9t2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 09:30:03 crc kubenswrapper[5016]: I1011 09:30:03.552550 5016 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/579f286a-81ab-48a0-97f0-49602ad68d3e-config-volume\") on node \"crc\" DevicePath \"\"" Oct 11 09:30:03 crc kubenswrapper[5016]: I1011 09:30:03.552840 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk9t2\" (UniqueName: \"kubernetes.io/projected/579f286a-81ab-48a0-97f0-49602ad68d3e-kube-api-access-fk9t2\") on node \"crc\" DevicePath \"\"" Oct 11 09:30:03 crc kubenswrapper[5016]: I1011 09:30:03.552908 5016 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/579f286a-81ab-48a0-97f0-49602ad68d3e-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 11 09:30:03 crc kubenswrapper[5016]: I1011 09:30:03.855568 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336250-ws852" event={"ID":"579f286a-81ab-48a0-97f0-49602ad68d3e","Type":"ContainerDied","Data":"b893a023f172f0755747e31a50ac97887500549ee6b2726cf4a8eb1b945df39f"} Oct 11 09:30:03 crc kubenswrapper[5016]: I1011 09:30:03.855616 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b893a023f172f0755747e31a50ac97887500549ee6b2726cf4a8eb1b945df39f" Oct 11 09:30:03 crc kubenswrapper[5016]: I1011 09:30:03.855962 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336250-ws852" Oct 11 09:30:04 crc kubenswrapper[5016]: I1011 09:30:04.392528 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336205-28hjp"] Oct 11 09:30:04 crc kubenswrapper[5016]: I1011 09:30:04.403175 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336205-28hjp"] Oct 11 09:30:05 crc kubenswrapper[5016]: I1011 09:30:05.145454 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6" path="/var/lib/kubelet/pods/97b82c46-e3a3-4ed4-84ab-c9ce1f6be8e6/volumes" Oct 11 09:30:10 crc kubenswrapper[5016]: I1011 09:30:10.147707 5016 scope.go:117] "RemoveContainer" containerID="a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba" Oct 11 09:30:10 crc kubenswrapper[5016]: E1011 09:30:10.148287 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:30:25 crc kubenswrapper[5016]: I1011 09:30:25.134746 5016 scope.go:117] "RemoveContainer" containerID="a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba" Oct 11 09:30:25 crc kubenswrapper[5016]: E1011 09:30:25.136012 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:30:39 crc kubenswrapper[5016]: I1011 09:30:39.133680 5016 scope.go:117] "RemoveContainer" containerID="a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba" Oct 11 09:30:39 crc kubenswrapper[5016]: E1011 09:30:39.134758 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:30:44 crc kubenswrapper[5016]: I1011 09:30:44.047583 5016 scope.go:117] "RemoveContainer" containerID="74324286390ebf253f90849a95ec729f86ccae930142f1f62b6d9593cc5d5ab7" Oct 11 09:30:53 crc kubenswrapper[5016]: I1011 09:30:53.134529 5016 scope.go:117] "RemoveContainer" containerID="a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba" Oct 11 09:30:53 crc kubenswrapper[5016]: E1011 09:30:53.138384 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:31:08 crc kubenswrapper[5016]: I1011 09:31:08.133997 5016 scope.go:117] "RemoveContainer" containerID="a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba" Oct 11 09:31:08 crc kubenswrapper[5016]: I1011 09:31:08.705209 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"05b2883651c982a6a04c187acf3e457a66e58f38612201a9ddc672a141007ce1"} Oct 11 09:32:11 crc kubenswrapper[5016]: I1011 09:32:11.885448 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bq4kl"] Oct 11 09:32:11 crc kubenswrapper[5016]: E1011 09:32:11.886781 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="579f286a-81ab-48a0-97f0-49602ad68d3e" containerName="collect-profiles" Oct 11 09:32:11 crc kubenswrapper[5016]: I1011 09:32:11.886796 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="579f286a-81ab-48a0-97f0-49602ad68d3e" containerName="collect-profiles" Oct 11 09:32:11 crc kubenswrapper[5016]: I1011 09:32:11.887056 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="579f286a-81ab-48a0-97f0-49602ad68d3e" containerName="collect-profiles" Oct 11 09:32:11 crc kubenswrapper[5016]: I1011 09:32:11.888896 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bq4kl" Oct 11 09:32:11 crc kubenswrapper[5016]: I1011 09:32:11.901961 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bq4kl"] Oct 11 09:32:11 crc kubenswrapper[5016]: I1011 09:32:11.930693 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c1a9ace-fed7-4940-89b5-96ae499ed050-utilities\") pod \"community-operators-bq4kl\" (UID: \"4c1a9ace-fed7-4940-89b5-96ae499ed050\") " pod="openshift-marketplace/community-operators-bq4kl" Oct 11 09:32:11 crc kubenswrapper[5016]: I1011 09:32:11.930779 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c1a9ace-fed7-4940-89b5-96ae499ed050-catalog-content\") pod \"community-operators-bq4kl\" (UID: \"4c1a9ace-fed7-4940-89b5-96ae499ed050\") " pod="openshift-marketplace/community-operators-bq4kl" Oct 11 09:32:11 crc kubenswrapper[5016]: I1011 09:32:11.930880 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csbhl\" (UniqueName: \"kubernetes.io/projected/4c1a9ace-fed7-4940-89b5-96ae499ed050-kube-api-access-csbhl\") pod \"community-operators-bq4kl\" (UID: \"4c1a9ace-fed7-4940-89b5-96ae499ed050\") " pod="openshift-marketplace/community-operators-bq4kl" Oct 11 09:32:12 crc kubenswrapper[5016]: I1011 09:32:12.033861 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c1a9ace-fed7-4940-89b5-96ae499ed050-utilities\") pod \"community-operators-bq4kl\" (UID: \"4c1a9ace-fed7-4940-89b5-96ae499ed050\") " pod="openshift-marketplace/community-operators-bq4kl" Oct 11 09:32:12 crc kubenswrapper[5016]: I1011 09:32:12.033921 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c1a9ace-fed7-4940-89b5-96ae499ed050-catalog-content\") pod \"community-operators-bq4kl\" (UID: \"4c1a9ace-fed7-4940-89b5-96ae499ed050\") " pod="openshift-marketplace/community-operators-bq4kl" Oct 11 09:32:12 crc kubenswrapper[5016]: I1011 09:32:12.033984 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csbhl\" (UniqueName: \"kubernetes.io/projected/4c1a9ace-fed7-4940-89b5-96ae499ed050-kube-api-access-csbhl\") pod \"community-operators-bq4kl\" (UID: \"4c1a9ace-fed7-4940-89b5-96ae499ed050\") " pod="openshift-marketplace/community-operators-bq4kl" Oct 11 09:32:12 crc kubenswrapper[5016]: I1011 09:32:12.034390 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c1a9ace-fed7-4940-89b5-96ae499ed050-utilities\") pod \"community-operators-bq4kl\" (UID: \"4c1a9ace-fed7-4940-89b5-96ae499ed050\") " pod="openshift-marketplace/community-operators-bq4kl" Oct 11 09:32:12 crc kubenswrapper[5016]: I1011 09:32:12.034533 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c1a9ace-fed7-4940-89b5-96ae499ed050-catalog-content\") pod \"community-operators-bq4kl\" (UID: \"4c1a9ace-fed7-4940-89b5-96ae499ed050\") " pod="openshift-marketplace/community-operators-bq4kl" Oct 11 09:32:12 crc kubenswrapper[5016]: I1011 09:32:12.056258 5016 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-csbhl\" (UniqueName: \"kubernetes.io/projected/4c1a9ace-fed7-4940-89b5-96ae499ed050-kube-api-access-csbhl\") pod \"community-operators-bq4kl\" (UID: \"4c1a9ace-fed7-4940-89b5-96ae499ed050\") " pod="openshift-marketplace/community-operators-bq4kl" Oct 11 09:32:12 crc kubenswrapper[5016]: I1011 09:32:12.219647 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bq4kl" Oct 11 09:32:12 crc kubenswrapper[5016]: I1011 09:32:12.784059 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bq4kl"] Oct 11 09:32:13 crc kubenswrapper[5016]: I1011 09:32:13.409001 5016 generic.go:334] "Generic (PLEG): container finished" podID="4c1a9ace-fed7-4940-89b5-96ae499ed050" containerID="3c15b1ba225f8b180b7836258bd66b69bebe57cf2e2334de76888b066066727c" exitCode=0 Oct 11 09:32:13 crc kubenswrapper[5016]: I1011 09:32:13.409070 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq4kl" event={"ID":"4c1a9ace-fed7-4940-89b5-96ae499ed050","Type":"ContainerDied","Data":"3c15b1ba225f8b180b7836258bd66b69bebe57cf2e2334de76888b066066727c"} Oct 11 09:32:13 crc kubenswrapper[5016]: I1011 09:32:13.409709 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq4kl" event={"ID":"4c1a9ace-fed7-4940-89b5-96ae499ed050","Type":"ContainerStarted","Data":"8c851b8fab58c40aaffbd74f73ffa1bf6e499db15c904112be8f494a1992c93f"} Oct 11 09:32:13 crc kubenswrapper[5016]: I1011 09:32:13.412369 5016 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 09:32:15 crc kubenswrapper[5016]: I1011 09:32:15.443967 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq4kl" event={"ID":"4c1a9ace-fed7-4940-89b5-96ae499ed050","Type":"ContainerStarted","Data":"0a8a0ca24d50a081d2354c146b0d81916984f6b4d1cdd48d3b000c82e633e692"} Oct 11 09:32:15 crc kubenswrapper[5016]: I1011 09:32:15.856646 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bbnrb"] Oct 11 09:32:15 crc kubenswrapper[5016]: I1011 09:32:15.868359 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bbnrb" Oct 11 09:32:15 crc kubenswrapper[5016]: I1011 09:32:15.900311 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bbnrb"] Oct 11 09:32:15 crc kubenswrapper[5016]: I1011 09:32:15.932891 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e22d7f9d-4d7b-4943-9150-3bd2721b1a10-utilities\") pod \"certified-operators-bbnrb\" (UID: \"e22d7f9d-4d7b-4943-9150-3bd2721b1a10\") " pod="openshift-marketplace/certified-operators-bbnrb" Oct 11 09:32:15 crc kubenswrapper[5016]: I1011 09:32:15.932985 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kggkb\" (UniqueName: \"kubernetes.io/projected/e22d7f9d-4d7b-4943-9150-3bd2721b1a10-kube-api-access-kggkb\") pod \"certified-operators-bbnrb\" (UID: \"e22d7f9d-4d7b-4943-9150-3bd2721b1a10\") " pod="openshift-marketplace/certified-operators-bbnrb" Oct 11 09:32:15 crc kubenswrapper[5016]: I1011 09:32:15.933119 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e22d7f9d-4d7b-4943-9150-3bd2721b1a10-catalog-content\") pod \"certified-operators-bbnrb\" (UID: \"e22d7f9d-4d7b-4943-9150-3bd2721b1a10\") " pod="openshift-marketplace/certified-operators-bbnrb" Oct 11 09:32:16 crc kubenswrapper[5016]: I1011 09:32:16.034814 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e22d7f9d-4d7b-4943-9150-3bd2721b1a10-utilities\") pod \"certified-operators-bbnrb\" (UID: \"e22d7f9d-4d7b-4943-9150-3bd2721b1a10\") " pod="openshift-marketplace/certified-operators-bbnrb" Oct 11 09:32:16 crc kubenswrapper[5016]: I1011 09:32:16.034870 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kggkb\" (UniqueName: \"kubernetes.io/projected/e22d7f9d-4d7b-4943-9150-3bd2721b1a10-kube-api-access-kggkb\") pod \"certified-operators-bbnrb\" (UID: \"e22d7f9d-4d7b-4943-9150-3bd2721b1a10\") " pod="openshift-marketplace/certified-operators-bbnrb" Oct 11 09:32:16 crc kubenswrapper[5016]: I1011 09:32:16.034901 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e22d7f9d-4d7b-4943-9150-3bd2721b1a10-catalog-content\") pod \"certified-operators-bbnrb\" (UID: \"e22d7f9d-4d7b-4943-9150-3bd2721b1a10\") " pod="openshift-marketplace/certified-operators-bbnrb" Oct 11 09:32:16 crc kubenswrapper[5016]: I1011 09:32:16.035466 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e22d7f9d-4d7b-4943-9150-3bd2721b1a10-utilities\") pod \"certified-operators-bbnrb\" (UID: \"e22d7f9d-4d7b-4943-9150-3bd2721b1a10\") " pod="openshift-marketplace/certified-operators-bbnrb" Oct 11 09:32:16 crc kubenswrapper[5016]: I1011 09:32:16.035543 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e22d7f9d-4d7b-4943-9150-3bd2721b1a10-catalog-content\") pod \"certified-operators-bbnrb\" (UID: \"e22d7f9d-4d7b-4943-9150-3bd2721b1a10\") " pod="openshift-marketplace/certified-operators-bbnrb" Oct 11 09:32:16 crc kubenswrapper[5016]: I1011 09:32:16.066754 5016 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kggkb\" (UniqueName: \"kubernetes.io/projected/e22d7f9d-4d7b-4943-9150-3bd2721b1a10-kube-api-access-kggkb\") pod \"certified-operators-bbnrb\" (UID: \"e22d7f9d-4d7b-4943-9150-3bd2721b1a10\") " pod="openshift-marketplace/certified-operators-bbnrb" Oct 11 09:32:16 crc kubenswrapper[5016]: I1011 09:32:16.199644 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bbnrb" Oct 11 09:32:16 crc kubenswrapper[5016]: I1011 09:32:16.798235 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bbnrb"] Oct 11 09:32:16 crc kubenswrapper[5016]: W1011 09:32:16.804875 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode22d7f9d_4d7b_4943_9150_3bd2721b1a10.slice/crio-4bec132454d11767c22b678cf6cc851a7d8db4afbb870c4e63f711b69a2627ee WatchSource:0}: Error finding container 4bec132454d11767c22b678cf6cc851a7d8db4afbb870c4e63f711b69a2627ee: Status 404 returned error can't find the container with id 4bec132454d11767c22b678cf6cc851a7d8db4afbb870c4e63f711b69a2627ee Oct 11 09:32:17 crc kubenswrapper[5016]: I1011 09:32:17.509820 5016 generic.go:334] "Generic (PLEG): container finished" podID="e22d7f9d-4d7b-4943-9150-3bd2721b1a10" containerID="999eacb1c1d6d7e93665c34f3319deb287763809f18b4935634b5699f03b38dc" exitCode=0 Oct 11 09:32:17 crc kubenswrapper[5016]: I1011 09:32:17.509951 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbnrb" event={"ID":"e22d7f9d-4d7b-4943-9150-3bd2721b1a10","Type":"ContainerDied","Data":"999eacb1c1d6d7e93665c34f3319deb287763809f18b4935634b5699f03b38dc"} Oct 11 09:32:17 crc kubenswrapper[5016]: I1011 09:32:17.510346 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbnrb" event={"ID":"e22d7f9d-4d7b-4943-9150-3bd2721b1a10","Type":"ContainerStarted","Data":"4bec132454d11767c22b678cf6cc851a7d8db4afbb870c4e63f711b69a2627ee"} Oct 11 09:32:19 crc kubenswrapper[5016]: I1011 09:32:19.534786 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbnrb" event={"ID":"e22d7f9d-4d7b-4943-9150-3bd2721b1a10","Type":"ContainerStarted","Data":"a5fe194c05d2178d1b1dd481453edafab7a04164c7a4485ca9076a0de3f5b768"} Oct 11 09:32:20 crc kubenswrapper[5016]: I1011 09:32:20.546533 5016 generic.go:334] "Generic (PLEG): container finished" podID="4c1a9ace-fed7-4940-89b5-96ae499ed050" containerID="0a8a0ca24d50a081d2354c146b0d81916984f6b4d1cdd48d3b000c82e633e692" exitCode=0 Oct 11 09:32:20 crc kubenswrapper[5016]: I1011 09:32:20.546590 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq4kl" event={"ID":"4c1a9ace-fed7-4940-89b5-96ae499ed050","Type":"ContainerDied","Data":"0a8a0ca24d50a081d2354c146b0d81916984f6b4d1cdd48d3b000c82e633e692"} Oct 11 09:32:22 crc kubenswrapper[5016]: I1011 09:32:22.572367 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq4kl" event={"ID":"4c1a9ace-fed7-4940-89b5-96ae499ed050","Type":"ContainerStarted","Data":"a3b2ebc43bfa56f9555b643baed34bdf2ebe5931422028433f79acb9ba7e9ca2"} Oct 11 09:32:22 crc kubenswrapper[5016]: I1011 09:32:22.612754 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bq4kl" 
podStartSLOduration=3.289527347 podStartE2EDuration="11.612726991s" podCreationTimestamp="2025-10-11 09:32:11 +0000 UTC" firstStartedPulling="2025-10-11 09:32:13.411737031 +0000 UTC m=+6721.312193017" lastFinishedPulling="2025-10-11 09:32:21.734936705 +0000 UTC m=+6729.635392661" observedRunningTime="2025-10-11 09:32:22.595845204 +0000 UTC m=+6730.496301150" watchObservedRunningTime="2025-10-11 09:32:22.612726991 +0000 UTC m=+6730.513182937" Oct 11 09:32:23 crc kubenswrapper[5016]: I1011 09:32:23.585879 5016 generic.go:334] "Generic (PLEG): container finished" podID="e22d7f9d-4d7b-4943-9150-3bd2721b1a10" containerID="a5fe194c05d2178d1b1dd481453edafab7a04164c7a4485ca9076a0de3f5b768" exitCode=0 Oct 11 09:32:23 crc kubenswrapper[5016]: I1011 09:32:23.585943 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbnrb" event={"ID":"e22d7f9d-4d7b-4943-9150-3bd2721b1a10","Type":"ContainerDied","Data":"a5fe194c05d2178d1b1dd481453edafab7a04164c7a4485ca9076a0de3f5b768"} Oct 11 09:32:25 crc kubenswrapper[5016]: I1011 09:32:25.615199 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbnrb" event={"ID":"e22d7f9d-4d7b-4943-9150-3bd2721b1a10","Type":"ContainerStarted","Data":"b9e11337b9c5a4474abf2978e1a884c5047dae8dba78b7addfb4c4640d24b576"} Oct 11 09:32:25 crc kubenswrapper[5016]: I1011 09:32:25.648328 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bbnrb" podStartSLOduration=3.770345215 podStartE2EDuration="10.648292245s" podCreationTimestamp="2025-10-11 09:32:15 +0000 UTC" firstStartedPulling="2025-10-11 09:32:17.51361143 +0000 UTC m=+6725.414067376" lastFinishedPulling="2025-10-11 09:32:24.39155846 +0000 UTC m=+6732.292014406" observedRunningTime="2025-10-11 09:32:25.637469818 +0000 UTC m=+6733.537925784" watchObservedRunningTime="2025-10-11 09:32:25.648292245 +0000 UTC m=+6733.548748221" Oct 11 09:32:26 crc kubenswrapper[5016]: I1011 09:32:26.200625 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bbnrb" Oct 11 09:32:26 crc kubenswrapper[5016]: I1011 09:32:26.201080 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bbnrb" Oct 11 09:32:27 crc kubenswrapper[5016]: I1011 09:32:27.257335 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-bbnrb" podUID="e22d7f9d-4d7b-4943-9150-3bd2721b1a10" containerName="registry-server" probeResult="failure" output=< Oct 11 09:32:27 crc kubenswrapper[5016]: timeout: failed to connect service ":50051" within 1s Oct 11 09:32:27 crc kubenswrapper[5016]: > Oct 11 09:32:32 crc kubenswrapper[5016]: I1011 09:32:32.220398 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bq4kl" Oct 11 09:32:32 crc kubenswrapper[5016]: I1011 09:32:32.222159 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bq4kl" Oct 11 09:32:32 crc kubenswrapper[5016]: I1011 09:32:32.267759 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bq4kl" Oct 11 09:32:32 crc kubenswrapper[5016]: I1011 09:32:32.745031 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bq4kl" Oct 11 
09:32:32 crc kubenswrapper[5016]: I1011 09:32:32.796401 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bq4kl"] Oct 11 09:32:34 crc kubenswrapper[5016]: I1011 09:32:34.703734 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bq4kl" podUID="4c1a9ace-fed7-4940-89b5-96ae499ed050" containerName="registry-server" containerID="cri-o://a3b2ebc43bfa56f9555b643baed34bdf2ebe5931422028433f79acb9ba7e9ca2" gracePeriod=2 Oct 11 09:32:35 crc kubenswrapper[5016]: I1011 09:32:35.715006 5016 generic.go:334] "Generic (PLEG): container finished" podID="4c1a9ace-fed7-4940-89b5-96ae499ed050" containerID="a3b2ebc43bfa56f9555b643baed34bdf2ebe5931422028433f79acb9ba7e9ca2" exitCode=0 Oct 11 09:32:35 crc kubenswrapper[5016]: I1011 09:32:35.715538 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq4kl" event={"ID":"4c1a9ace-fed7-4940-89b5-96ae499ed050","Type":"ContainerDied","Data":"a3b2ebc43bfa56f9555b643baed34bdf2ebe5931422028433f79acb9ba7e9ca2"} Oct 11 09:32:35 crc kubenswrapper[5016]: I1011 09:32:35.715585 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq4kl" event={"ID":"4c1a9ace-fed7-4940-89b5-96ae499ed050","Type":"ContainerDied","Data":"8c851b8fab58c40aaffbd74f73ffa1bf6e499db15c904112be8f494a1992c93f"} Oct 11 09:32:35 crc kubenswrapper[5016]: I1011 09:32:35.715604 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c851b8fab58c40aaffbd74f73ffa1bf6e499db15c904112be8f494a1992c93f" Oct 11 09:32:35 crc kubenswrapper[5016]: I1011 09:32:35.717684 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bq4kl" Oct 11 09:32:35 crc kubenswrapper[5016]: I1011 09:32:35.845498 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csbhl\" (UniqueName: \"kubernetes.io/projected/4c1a9ace-fed7-4940-89b5-96ae499ed050-kube-api-access-csbhl\") pod \"4c1a9ace-fed7-4940-89b5-96ae499ed050\" (UID: \"4c1a9ace-fed7-4940-89b5-96ae499ed050\") " Oct 11 09:32:35 crc kubenswrapper[5016]: I1011 09:32:35.845591 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c1a9ace-fed7-4940-89b5-96ae499ed050-catalog-content\") pod \"4c1a9ace-fed7-4940-89b5-96ae499ed050\" (UID: \"4c1a9ace-fed7-4940-89b5-96ae499ed050\") " Oct 11 09:32:35 crc kubenswrapper[5016]: I1011 09:32:35.845683 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c1a9ace-fed7-4940-89b5-96ae499ed050-utilities\") pod \"4c1a9ace-fed7-4940-89b5-96ae499ed050\" (UID: \"4c1a9ace-fed7-4940-89b5-96ae499ed050\") " Oct 11 09:32:35 crc kubenswrapper[5016]: I1011 09:32:35.846964 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c1a9ace-fed7-4940-89b5-96ae499ed050-utilities" (OuterVolumeSpecName: "utilities") pod "4c1a9ace-fed7-4940-89b5-96ae499ed050" (UID: "4c1a9ace-fed7-4940-89b5-96ae499ed050"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:32:35 crc kubenswrapper[5016]: I1011 09:32:35.853146 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c1a9ace-fed7-4940-89b5-96ae499ed050-kube-api-access-csbhl" (OuterVolumeSpecName: "kube-api-access-csbhl") pod "4c1a9ace-fed7-4940-89b5-96ae499ed050" (UID: "4c1a9ace-fed7-4940-89b5-96ae499ed050"). InnerVolumeSpecName "kube-api-access-csbhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 09:32:35 crc kubenswrapper[5016]: I1011 09:32:35.896215 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c1a9ace-fed7-4940-89b5-96ae499ed050-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c1a9ace-fed7-4940-89b5-96ae499ed050" (UID: "4c1a9ace-fed7-4940-89b5-96ae499ed050"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:32:35 crc kubenswrapper[5016]: I1011 09:32:35.948514 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csbhl\" (UniqueName: \"kubernetes.io/projected/4c1a9ace-fed7-4940-89b5-96ae499ed050-kube-api-access-csbhl\") on node \"crc\" DevicePath \"\"" Oct 11 09:32:35 crc kubenswrapper[5016]: I1011 09:32:35.948566 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c1a9ace-fed7-4940-89b5-96ae499ed050-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 09:32:35 crc kubenswrapper[5016]: I1011 09:32:35.948588 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c1a9ace-fed7-4940-89b5-96ae499ed050-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 09:32:36 crc kubenswrapper[5016]: I1011 09:32:36.258462 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bbnrb" Oct 11 09:32:36 crc kubenswrapper[5016]: I1011 09:32:36.303780 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bbnrb" Oct 11 09:32:36 crc kubenswrapper[5016]: I1011 09:32:36.725969 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bq4kl" Oct 11 09:32:36 crc kubenswrapper[5016]: I1011 09:32:36.772948 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bq4kl"] Oct 11 09:32:36 crc kubenswrapper[5016]: I1011 09:32:36.781008 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bq4kl"] Oct 11 09:32:36 crc kubenswrapper[5016]: I1011 09:32:36.954576 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bbnrb"] Oct 11 09:32:37 crc kubenswrapper[5016]: I1011 09:32:37.144741 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c1a9ace-fed7-4940-89b5-96ae499ed050" path="/var/lib/kubelet/pods/4c1a9ace-fed7-4940-89b5-96ae499ed050/volumes" Oct 11 09:32:37 crc kubenswrapper[5016]: I1011 09:32:37.733426 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bbnrb" podUID="e22d7f9d-4d7b-4943-9150-3bd2721b1a10" containerName="registry-server" containerID="cri-o://b9e11337b9c5a4474abf2978e1a884c5047dae8dba78b7addfb4c4640d24b576" gracePeriod=2 Oct 11 09:32:38 crc kubenswrapper[5016]: I1011 09:32:38.761427 5016 generic.go:334] "Generic (PLEG): container finished" podID="e22d7f9d-4d7b-4943-9150-3bd2721b1a10" containerID="b9e11337b9c5a4474abf2978e1a884c5047dae8dba78b7addfb4c4640d24b576" exitCode=0 Oct 11 09:32:38 crc kubenswrapper[5016]: I1011 09:32:38.761855 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbnrb" event={"ID":"e22d7f9d-4d7b-4943-9150-3bd2721b1a10","Type":"ContainerDied","Data":"b9e11337b9c5a4474abf2978e1a884c5047dae8dba78b7addfb4c4640d24b576"} Oct 11 09:32:38 crc kubenswrapper[5016]: I1011 09:32:38.761889 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbnrb" event={"ID":"e22d7f9d-4d7b-4943-9150-3bd2721b1a10","Type":"ContainerDied","Data":"4bec132454d11767c22b678cf6cc851a7d8db4afbb870c4e63f711b69a2627ee"} Oct 11 09:32:38 crc kubenswrapper[5016]: I1011 09:32:38.761905 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bec132454d11767c22b678cf6cc851a7d8db4afbb870c4e63f711b69a2627ee" Oct 11 09:32:38 crc kubenswrapper[5016]: I1011 09:32:38.788621 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bbnrb" Oct 11 09:32:38 crc kubenswrapper[5016]: I1011 09:32:38.919337 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e22d7f9d-4d7b-4943-9150-3bd2721b1a10-catalog-content\") pod \"e22d7f9d-4d7b-4943-9150-3bd2721b1a10\" (UID: \"e22d7f9d-4d7b-4943-9150-3bd2721b1a10\") " Oct 11 09:32:38 crc kubenswrapper[5016]: I1011 09:32:38.919440 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kggkb\" (UniqueName: \"kubernetes.io/projected/e22d7f9d-4d7b-4943-9150-3bd2721b1a10-kube-api-access-kggkb\") pod \"e22d7f9d-4d7b-4943-9150-3bd2721b1a10\" (UID: \"e22d7f9d-4d7b-4943-9150-3bd2721b1a10\") " Oct 11 09:32:38 crc kubenswrapper[5016]: I1011 09:32:38.919593 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e22d7f9d-4d7b-4943-9150-3bd2721b1a10-utilities\") pod \"e22d7f9d-4d7b-4943-9150-3bd2721b1a10\" (UID: \"e22d7f9d-4d7b-4943-9150-3bd2721b1a10\") " Oct 11 09:32:38 crc kubenswrapper[5016]: I1011 09:32:38.921148 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e22d7f9d-4d7b-4943-9150-3bd2721b1a10-utilities" (OuterVolumeSpecName: "utilities") pod "e22d7f9d-4d7b-4943-9150-3bd2721b1a10" (UID: "e22d7f9d-4d7b-4943-9150-3bd2721b1a10"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:32:38 crc kubenswrapper[5016]: I1011 09:32:38.927539 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e22d7f9d-4d7b-4943-9150-3bd2721b1a10-kube-api-access-kggkb" (OuterVolumeSpecName: "kube-api-access-kggkb") pod "e22d7f9d-4d7b-4943-9150-3bd2721b1a10" (UID: "e22d7f9d-4d7b-4943-9150-3bd2721b1a10"). InnerVolumeSpecName "kube-api-access-kggkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 09:32:38 crc kubenswrapper[5016]: I1011 09:32:38.988645 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e22d7f9d-4d7b-4943-9150-3bd2721b1a10-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e22d7f9d-4d7b-4943-9150-3bd2721b1a10" (UID: "e22d7f9d-4d7b-4943-9150-3bd2721b1a10"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:32:39 crc kubenswrapper[5016]: I1011 09:32:39.023715 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e22d7f9d-4d7b-4943-9150-3bd2721b1a10-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 09:32:39 crc kubenswrapper[5016]: I1011 09:32:39.023824 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kggkb\" (UniqueName: \"kubernetes.io/projected/e22d7f9d-4d7b-4943-9150-3bd2721b1a10-kube-api-access-kggkb\") on node \"crc\" DevicePath \"\"" Oct 11 09:32:39 crc kubenswrapper[5016]: I1011 09:32:39.023851 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e22d7f9d-4d7b-4943-9150-3bd2721b1a10-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 09:32:39 crc kubenswrapper[5016]: I1011 09:32:39.775041 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bbnrb" Oct 11 09:32:39 crc kubenswrapper[5016]: I1011 09:32:39.813939 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bbnrb"] Oct 11 09:32:39 crc kubenswrapper[5016]: I1011 09:32:39.824315 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bbnrb"] Oct 11 09:32:41 crc kubenswrapper[5016]: I1011 09:32:41.147564 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e22d7f9d-4d7b-4943-9150-3bd2721b1a10" path="/var/lib/kubelet/pods/e22d7f9d-4d7b-4943-9150-3bd2721b1a10/volumes" Oct 11 09:33:37 crc kubenswrapper[5016]: I1011 09:33:37.123078 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 09:33:37 crc kubenswrapper[5016]: I1011 09:33:37.123936 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 09:33:41 crc kubenswrapper[5016]: I1011 09:33:41.527416 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pnfw7"] Oct 11 09:33:41 crc kubenswrapper[5016]: E1011 09:33:41.528605 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e22d7f9d-4d7b-4943-9150-3bd2721b1a10" containerName="extract-content" Oct 11 09:33:41 crc kubenswrapper[5016]: I1011 09:33:41.528621 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="e22d7f9d-4d7b-4943-9150-3bd2721b1a10" containerName="extract-content" Oct 11 09:33:41 crc kubenswrapper[5016]: E1011 09:33:41.528633 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1a9ace-fed7-4940-89b5-96ae499ed050" containerName="extract-content" Oct 11 09:33:41 crc kubenswrapper[5016]: I1011 09:33:41.528639 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1a9ace-fed7-4940-89b5-96ae499ed050" containerName="extract-content" Oct 11 09:33:41 crc kubenswrapper[5016]: E1011 09:33:41.528689 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e22d7f9d-4d7b-4943-9150-3bd2721b1a10" containerName="extract-utilities" Oct 11 09:33:41 crc kubenswrapper[5016]: I1011 09:33:41.528696 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="e22d7f9d-4d7b-4943-9150-3bd2721b1a10" containerName="extract-utilities" Oct 11 09:33:41 crc kubenswrapper[5016]: E1011 09:33:41.528708 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e22d7f9d-4d7b-4943-9150-3bd2721b1a10" containerName="registry-server" Oct 11 09:33:41 crc kubenswrapper[5016]: I1011 09:33:41.528714 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="e22d7f9d-4d7b-4943-9150-3bd2721b1a10" containerName="registry-server" Oct 11 09:33:41 crc kubenswrapper[5016]: E1011 09:33:41.528739 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1a9ace-fed7-4940-89b5-96ae499ed050" containerName="registry-server" Oct 11 09:33:41 crc kubenswrapper[5016]: I1011 09:33:41.528745 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1a9ace-fed7-4940-89b5-96ae499ed050" 
containerName="registry-server" Oct 11 09:33:41 crc kubenswrapper[5016]: E1011 09:33:41.528756 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1a9ace-fed7-4940-89b5-96ae499ed050" containerName="extract-utilities" Oct 11 09:33:41 crc kubenswrapper[5016]: I1011 09:33:41.528761 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1a9ace-fed7-4940-89b5-96ae499ed050" containerName="extract-utilities" Oct 11 09:33:41 crc kubenswrapper[5016]: I1011 09:33:41.528948 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="e22d7f9d-4d7b-4943-9150-3bd2721b1a10" containerName="registry-server" Oct 11 09:33:41 crc kubenswrapper[5016]: I1011 09:33:41.528977 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1a9ace-fed7-4940-89b5-96ae499ed050" containerName="registry-server" Oct 11 09:33:41 crc kubenswrapper[5016]: I1011 09:33:41.530495 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pnfw7" Oct 11 09:33:41 crc kubenswrapper[5016]: I1011 09:33:41.559622 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pnfw7"] Oct 11 09:33:41 crc kubenswrapper[5016]: I1011 09:33:41.643704 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64e326c4-043d-4968-a44b-f5297940448d-utilities\") pod \"redhat-operators-pnfw7\" (UID: \"64e326c4-043d-4968-a44b-f5297940448d\") " pod="openshift-marketplace/redhat-operators-pnfw7" Oct 11 09:33:41 crc kubenswrapper[5016]: I1011 09:33:41.644285 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkdzs\" (UniqueName: \"kubernetes.io/projected/64e326c4-043d-4968-a44b-f5297940448d-kube-api-access-nkdzs\") pod \"redhat-operators-pnfw7\" (UID: \"64e326c4-043d-4968-a44b-f5297940448d\") " pod="openshift-marketplace/redhat-operators-pnfw7" Oct 11 09:33:41 crc kubenswrapper[5016]: I1011 09:33:41.644714 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64e326c4-043d-4968-a44b-f5297940448d-catalog-content\") pod \"redhat-operators-pnfw7\" (UID: \"64e326c4-043d-4968-a44b-f5297940448d\") " pod="openshift-marketplace/redhat-operators-pnfw7" Oct 11 09:33:41 crc kubenswrapper[5016]: I1011 09:33:41.747224 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkdzs\" (UniqueName: \"kubernetes.io/projected/64e326c4-043d-4968-a44b-f5297940448d-kube-api-access-nkdzs\") pod \"redhat-operators-pnfw7\" (UID: \"64e326c4-043d-4968-a44b-f5297940448d\") " pod="openshift-marketplace/redhat-operators-pnfw7" Oct 11 09:33:41 crc kubenswrapper[5016]: I1011 09:33:41.747371 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64e326c4-043d-4968-a44b-f5297940448d-catalog-content\") pod \"redhat-operators-pnfw7\" (UID: \"64e326c4-043d-4968-a44b-f5297940448d\") " pod="openshift-marketplace/redhat-operators-pnfw7" Oct 11 09:33:41 crc kubenswrapper[5016]: I1011 09:33:41.747457 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64e326c4-043d-4968-a44b-f5297940448d-utilities\") pod \"redhat-operators-pnfw7\" (UID: \"64e326c4-043d-4968-a44b-f5297940448d\") " 
pod="openshift-marketplace/redhat-operators-pnfw7" Oct 11 09:33:41 crc kubenswrapper[5016]: I1011 09:33:41.747884 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64e326c4-043d-4968-a44b-f5297940448d-catalog-content\") pod \"redhat-operators-pnfw7\" (UID: \"64e326c4-043d-4968-a44b-f5297940448d\") " pod="openshift-marketplace/redhat-operators-pnfw7" Oct 11 09:33:41 crc kubenswrapper[5016]: I1011 09:33:41.748256 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64e326c4-043d-4968-a44b-f5297940448d-utilities\") pod \"redhat-operators-pnfw7\" (UID: \"64e326c4-043d-4968-a44b-f5297940448d\") " pod="openshift-marketplace/redhat-operators-pnfw7" Oct 11 09:33:41 crc kubenswrapper[5016]: I1011 09:33:41.770580 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkdzs\" (UniqueName: \"kubernetes.io/projected/64e326c4-043d-4968-a44b-f5297940448d-kube-api-access-nkdzs\") pod \"redhat-operators-pnfw7\" (UID: \"64e326c4-043d-4968-a44b-f5297940448d\") " pod="openshift-marketplace/redhat-operators-pnfw7" Oct 11 09:33:41 crc kubenswrapper[5016]: I1011 09:33:41.866630 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pnfw7" Oct 11 09:33:42 crc kubenswrapper[5016]: I1011 09:33:42.418778 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pnfw7"] Oct 11 09:33:42 crc kubenswrapper[5016]: E1011 09:33:42.959866 5016 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64e326c4_043d_4968_a44b_f5297940448d.slice/crio-5778bf6b805d8a65aef31a30ede9f13fd411973a589c77f2f6cb9c9c139c9b17.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64e326c4_043d_4968_a44b_f5297940448d.slice/crio-conmon-5778bf6b805d8a65aef31a30ede9f13fd411973a589c77f2f6cb9c9c139c9b17.scope\": RecentStats: unable to find data in memory cache]" Oct 11 09:33:43 crc kubenswrapper[5016]: I1011 09:33:43.485118 5016 generic.go:334] "Generic (PLEG): container finished" podID="64e326c4-043d-4968-a44b-f5297940448d" containerID="5778bf6b805d8a65aef31a30ede9f13fd411973a589c77f2f6cb9c9c139c9b17" exitCode=0 Oct 11 09:33:43 crc kubenswrapper[5016]: I1011 09:33:43.485201 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pnfw7" event={"ID":"64e326c4-043d-4968-a44b-f5297940448d","Type":"ContainerDied","Data":"5778bf6b805d8a65aef31a30ede9f13fd411973a589c77f2f6cb9c9c139c9b17"} Oct 11 09:33:43 crc kubenswrapper[5016]: I1011 09:33:43.485922 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pnfw7" event={"ID":"64e326c4-043d-4968-a44b-f5297940448d","Type":"ContainerStarted","Data":"b14b01f9b67c37b7ac28ca4802931d2aea7b80a2f1dc4ea12da34f990f63f1e3"} Oct 11 09:33:45 crc kubenswrapper[5016]: I1011 09:33:45.513519 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pnfw7" event={"ID":"64e326c4-043d-4968-a44b-f5297940448d","Type":"ContainerStarted","Data":"223c994673c9d6ecc253b2ce6e05c3e74dac04fc94ee46f3f817f902cbc3a6a9"} Oct 11 09:33:55 crc kubenswrapper[5016]: I1011 09:33:55.648524 5016 generic.go:334] "Generic (PLEG): container 
finished" podID="64e326c4-043d-4968-a44b-f5297940448d" containerID="223c994673c9d6ecc253b2ce6e05c3e74dac04fc94ee46f3f817f902cbc3a6a9" exitCode=0 Oct 11 09:33:55 crc kubenswrapper[5016]: I1011 09:33:55.648625 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pnfw7" event={"ID":"64e326c4-043d-4968-a44b-f5297940448d","Type":"ContainerDied","Data":"223c994673c9d6ecc253b2ce6e05c3e74dac04fc94ee46f3f817f902cbc3a6a9"} Oct 11 09:33:56 crc kubenswrapper[5016]: I1011 09:33:56.666116 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pnfw7" event={"ID":"64e326c4-043d-4968-a44b-f5297940448d","Type":"ContainerStarted","Data":"59a60d00a008075861041ddd4852d0148ab69f256b8f574203243e1d7aebbb0b"} Oct 11 09:33:56 crc kubenswrapper[5016]: I1011 09:33:56.690392 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pnfw7" podStartSLOduration=2.963239495 podStartE2EDuration="15.690370608s" podCreationTimestamp="2025-10-11 09:33:41 +0000 UTC" firstStartedPulling="2025-10-11 09:33:43.488488053 +0000 UTC m=+6811.388943999" lastFinishedPulling="2025-10-11 09:33:56.215619166 +0000 UTC m=+6824.116075112" observedRunningTime="2025-10-11 09:33:56.685602222 +0000 UTC m=+6824.586058208" watchObservedRunningTime="2025-10-11 09:33:56.690370608 +0000 UTC m=+6824.590826544" Oct 11 09:34:01 crc kubenswrapper[5016]: I1011 09:34:01.867443 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pnfw7" Oct 11 09:34:01 crc kubenswrapper[5016]: I1011 09:34:01.868369 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pnfw7" Oct 11 09:34:01 crc kubenswrapper[5016]: I1011 09:34:01.937548 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pnfw7" Oct 11 09:34:02 crc kubenswrapper[5016]: I1011 09:34:02.785692 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pnfw7" Oct 11 09:34:02 crc kubenswrapper[5016]: I1011 09:34:02.839817 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pnfw7"] Oct 11 09:34:04 crc kubenswrapper[5016]: I1011 09:34:04.761047 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pnfw7" podUID="64e326c4-043d-4968-a44b-f5297940448d" containerName="registry-server" containerID="cri-o://59a60d00a008075861041ddd4852d0148ab69f256b8f574203243e1d7aebbb0b" gracePeriod=2 Oct 11 09:34:05 crc kubenswrapper[5016]: I1011 09:34:05.334710 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pnfw7" Oct 11 09:34:05 crc kubenswrapper[5016]: I1011 09:34:05.348556 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64e326c4-043d-4968-a44b-f5297940448d-catalog-content\") pod \"64e326c4-043d-4968-a44b-f5297940448d\" (UID: \"64e326c4-043d-4968-a44b-f5297940448d\") " Oct 11 09:34:05 crc kubenswrapper[5016]: I1011 09:34:05.348851 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64e326c4-043d-4968-a44b-f5297940448d-utilities\") pod \"64e326c4-043d-4968-a44b-f5297940448d\" (UID: \"64e326c4-043d-4968-a44b-f5297940448d\") " Oct 11 09:34:05 crc kubenswrapper[5016]: I1011 09:34:05.349068 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkdzs\" (UniqueName: \"kubernetes.io/projected/64e326c4-043d-4968-a44b-f5297940448d-kube-api-access-nkdzs\") pod \"64e326c4-043d-4968-a44b-f5297940448d\" (UID: \"64e326c4-043d-4968-a44b-f5297940448d\") " Oct 11 09:34:05 crc kubenswrapper[5016]: I1011 09:34:05.349923 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64e326c4-043d-4968-a44b-f5297940448d-utilities" (OuterVolumeSpecName: "utilities") pod "64e326c4-043d-4968-a44b-f5297940448d" (UID: "64e326c4-043d-4968-a44b-f5297940448d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:34:05 crc kubenswrapper[5016]: I1011 09:34:05.350560 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64e326c4-043d-4968-a44b-f5297940448d-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 09:34:05 crc kubenswrapper[5016]: I1011 09:34:05.369094 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64e326c4-043d-4968-a44b-f5297940448d-kube-api-access-nkdzs" (OuterVolumeSpecName: "kube-api-access-nkdzs") pod "64e326c4-043d-4968-a44b-f5297940448d" (UID: "64e326c4-043d-4968-a44b-f5297940448d"). InnerVolumeSpecName "kube-api-access-nkdzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 09:34:05 crc kubenswrapper[5016]: I1011 09:34:05.433172 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64e326c4-043d-4968-a44b-f5297940448d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64e326c4-043d-4968-a44b-f5297940448d" (UID: "64e326c4-043d-4968-a44b-f5297940448d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:34:05 crc kubenswrapper[5016]: I1011 09:34:05.451956 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkdzs\" (UniqueName: \"kubernetes.io/projected/64e326c4-043d-4968-a44b-f5297940448d-kube-api-access-nkdzs\") on node \"crc\" DevicePath \"\"" Oct 11 09:34:05 crc kubenswrapper[5016]: I1011 09:34:05.452004 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64e326c4-043d-4968-a44b-f5297940448d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 09:34:05 crc kubenswrapper[5016]: I1011 09:34:05.772611 5016 generic.go:334] "Generic (PLEG): container finished" podID="64e326c4-043d-4968-a44b-f5297940448d" containerID="59a60d00a008075861041ddd4852d0148ab69f256b8f574203243e1d7aebbb0b" exitCode=0 Oct 11 09:34:05 crc kubenswrapper[5016]: I1011 09:34:05.772694 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pnfw7" event={"ID":"64e326c4-043d-4968-a44b-f5297940448d","Type":"ContainerDied","Data":"59a60d00a008075861041ddd4852d0148ab69f256b8f574203243e1d7aebbb0b"} Oct 11 09:34:05 crc kubenswrapper[5016]: I1011 09:34:05.772734 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pnfw7" event={"ID":"64e326c4-043d-4968-a44b-f5297940448d","Type":"ContainerDied","Data":"b14b01f9b67c37b7ac28ca4802931d2aea7b80a2f1dc4ea12da34f990f63f1e3"} Oct 11 09:34:05 crc kubenswrapper[5016]: I1011 09:34:05.772746 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pnfw7" Oct 11 09:34:05 crc kubenswrapper[5016]: I1011 09:34:05.772757 5016 scope.go:117] "RemoveContainer" containerID="59a60d00a008075861041ddd4852d0148ab69f256b8f574203243e1d7aebbb0b" Oct 11 09:34:05 crc kubenswrapper[5016]: I1011 09:34:05.793163 5016 scope.go:117] "RemoveContainer" containerID="223c994673c9d6ecc253b2ce6e05c3e74dac04fc94ee46f3f817f902cbc3a6a9" Oct 11 09:34:05 crc kubenswrapper[5016]: I1011 09:34:05.815805 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pnfw7"] Oct 11 09:34:05 crc kubenswrapper[5016]: I1011 09:34:05.824817 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pnfw7"] Oct 11 09:34:05 crc kubenswrapper[5016]: I1011 09:34:05.834721 5016 scope.go:117] "RemoveContainer" containerID="5778bf6b805d8a65aef31a30ede9f13fd411973a589c77f2f6cb9c9c139c9b17" Oct 11 09:34:05 crc kubenswrapper[5016]: I1011 09:34:05.873855 5016 scope.go:117] "RemoveContainer" containerID="59a60d00a008075861041ddd4852d0148ab69f256b8f574203243e1d7aebbb0b" Oct 11 09:34:05 crc kubenswrapper[5016]: E1011 09:34:05.874570 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59a60d00a008075861041ddd4852d0148ab69f256b8f574203243e1d7aebbb0b\": container with ID starting with 59a60d00a008075861041ddd4852d0148ab69f256b8f574203243e1d7aebbb0b not found: ID does not exist" containerID="59a60d00a008075861041ddd4852d0148ab69f256b8f574203243e1d7aebbb0b" Oct 11 09:34:05 crc kubenswrapper[5016]: I1011 09:34:05.874671 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59a60d00a008075861041ddd4852d0148ab69f256b8f574203243e1d7aebbb0b"} err="failed to get container status \"59a60d00a008075861041ddd4852d0148ab69f256b8f574203243e1d7aebbb0b\": 
rpc error: code = NotFound desc = could not find container \"59a60d00a008075861041ddd4852d0148ab69f256b8f574203243e1d7aebbb0b\": container with ID starting with 59a60d00a008075861041ddd4852d0148ab69f256b8f574203243e1d7aebbb0b not found: ID does not exist" Oct 11 09:34:05 crc kubenswrapper[5016]: I1011 09:34:05.874721 5016 scope.go:117] "RemoveContainer" containerID="223c994673c9d6ecc253b2ce6e05c3e74dac04fc94ee46f3f817f902cbc3a6a9" Oct 11 09:34:05 crc kubenswrapper[5016]: E1011 09:34:05.875266 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"223c994673c9d6ecc253b2ce6e05c3e74dac04fc94ee46f3f817f902cbc3a6a9\": container with ID starting with 223c994673c9d6ecc253b2ce6e05c3e74dac04fc94ee46f3f817f902cbc3a6a9 not found: ID does not exist" containerID="223c994673c9d6ecc253b2ce6e05c3e74dac04fc94ee46f3f817f902cbc3a6a9" Oct 11 09:34:05 crc kubenswrapper[5016]: I1011 09:34:05.875347 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"223c994673c9d6ecc253b2ce6e05c3e74dac04fc94ee46f3f817f902cbc3a6a9"} err="failed to get container status \"223c994673c9d6ecc253b2ce6e05c3e74dac04fc94ee46f3f817f902cbc3a6a9\": rpc error: code = NotFound desc = could not find container \"223c994673c9d6ecc253b2ce6e05c3e74dac04fc94ee46f3f817f902cbc3a6a9\": container with ID starting with 223c994673c9d6ecc253b2ce6e05c3e74dac04fc94ee46f3f817f902cbc3a6a9 not found: ID does not exist" Oct 11 09:34:05 crc kubenswrapper[5016]: I1011 09:34:05.875390 5016 scope.go:117] "RemoveContainer" containerID="5778bf6b805d8a65aef31a30ede9f13fd411973a589c77f2f6cb9c9c139c9b17" Oct 11 09:34:05 crc kubenswrapper[5016]: E1011 09:34:05.875822 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5778bf6b805d8a65aef31a30ede9f13fd411973a589c77f2f6cb9c9c139c9b17\": container with ID starting with 5778bf6b805d8a65aef31a30ede9f13fd411973a589c77f2f6cb9c9c139c9b17 not found: ID does not exist" containerID="5778bf6b805d8a65aef31a30ede9f13fd411973a589c77f2f6cb9c9c139c9b17" Oct 11 09:34:05 crc kubenswrapper[5016]: I1011 09:34:05.875858 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5778bf6b805d8a65aef31a30ede9f13fd411973a589c77f2f6cb9c9c139c9b17"} err="failed to get container status \"5778bf6b805d8a65aef31a30ede9f13fd411973a589c77f2f6cb9c9c139c9b17\": rpc error: code = NotFound desc = could not find container \"5778bf6b805d8a65aef31a30ede9f13fd411973a589c77f2f6cb9c9c139c9b17\": container with ID starting with 5778bf6b805d8a65aef31a30ede9f13fd411973a589c77f2f6cb9c9c139c9b17 not found: ID does not exist" Oct 11 09:34:07 crc kubenswrapper[5016]: I1011 09:34:07.122235 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 09:34:07 crc kubenswrapper[5016]: I1011 09:34:07.122919 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 09:34:07 crc kubenswrapper[5016]: I1011 09:34:07.151114 5016 
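
The three "ContainerStatus from runtime service failed ... NotFound" errors above are benign: kubelet's own RemoveContainer already deleted those containers, so the follow-up status lookup races with its own cleanup and finds nothing. A hedged sketch of the usual pattern for treating gRPC NotFound as "already deleted" — fakeRemove is a stand-in, not CRI or kubelet code:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// fakeRemove stands in for a CRI ContainerStatus/RemoveContainer round trip;
// it is illustrative only.
func fakeRemove(id string) error {
	return status.Error(codes.NotFound, "could not find container "+id)
}

func main() {
	err := fakeRemove("59a60d00a008")
	if status.Code(err) == codes.NotFound {
		// Already gone: the delete has nothing left to do, which is why each
		// error line above is followed by normal processing.
		fmt.Println("already removed, ignoring:", err)
		return
	}
	if err != nil {
		fmt.Println("real failure:", err)
	}
}
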
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64e326c4-043d-4968-a44b-f5297940448d" path="/var/lib/kubelet/pods/64e326c4-043d-4968-a44b-f5297940448d/volumes" Oct 11 09:34:37 crc kubenswrapper[5016]: I1011 09:34:37.122556 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 09:34:37 crc kubenswrapper[5016]: I1011 09:34:37.123313 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 09:34:37 crc kubenswrapper[5016]: I1011 09:34:37.123358 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 09:34:37 crc kubenswrapper[5016]: I1011 09:34:37.124143 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"05b2883651c982a6a04c187acf3e457a66e58f38612201a9ddc672a141007ce1"} pod="openshift-machine-config-operator/machine-config-daemon-49bvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 11 09:34:37 crc kubenswrapper[5016]: I1011 09:34:37.124198 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" containerID="cri-o://05b2883651c982a6a04c187acf3e457a66e58f38612201a9ddc672a141007ce1" gracePeriod=600 Oct 11 09:34:38 crc kubenswrapper[5016]: I1011 09:34:38.130970 5016 generic.go:334] "Generic (PLEG): container finished" podID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerID="05b2883651c982a6a04c187acf3e457a66e58f38612201a9ddc672a141007ce1" exitCode=0 Oct 11 09:34:38 crc kubenswrapper[5016]: I1011 09:34:38.131053 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerDied","Data":"05b2883651c982a6a04c187acf3e457a66e58f38612201a9ddc672a141007ce1"} Oct 11 09:34:38 crc kubenswrapper[5016]: I1011 09:34:38.131792 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae"} Oct 11 09:34:38 crc kubenswrapper[5016]: I1011 09:34:38.131825 5016 scope.go:117] "RemoveContainer" containerID="a7495277496cde94ec5426c554032940ad24f7c3d6bf93e5f1777fd80be893ba" Oct 11 09:35:08 crc kubenswrapper[5016]: I1011 09:35:08.424801 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wf8hv"] Oct 11 09:35:08 crc kubenswrapper[5016]: E1011 09:35:08.426737 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64e326c4-043d-4968-a44b-f5297940448d" containerName="registry-server" Oct 11 09:35:08 crc kubenswrapper[5016]: I1011 09:35:08.426760 5016 state_mem.go:107] "Deleted CPUSet 
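
The machine-config-daemon liveness probe above is a plain HTTP GET against 127.0.0.1:8798/health; "connection refused" means nothing is listening on that port, and after enough consecutive failures the kubelet kills the container (here with gracePeriod=600). A minimal sketch of such a probe; the URL and timeout are assumptions, not the pod's configured values:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce is a single HTTP liveness-style check, roughly what the kubelet
// prober does for an httpGet probe.
func probeOnce(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused", as logged above
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy: HTTP %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probeOnce("http://127.0.0.1:8798/health", time.Second); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
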
assignment" podUID="64e326c4-043d-4968-a44b-f5297940448d" containerName="registry-server" Oct 11 09:35:08 crc kubenswrapper[5016]: E1011 09:35:08.426789 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64e326c4-043d-4968-a44b-f5297940448d" containerName="extract-content" Oct 11 09:35:08 crc kubenswrapper[5016]: I1011 09:35:08.426797 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="64e326c4-043d-4968-a44b-f5297940448d" containerName="extract-content" Oct 11 09:35:08 crc kubenswrapper[5016]: E1011 09:35:08.426821 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64e326c4-043d-4968-a44b-f5297940448d" containerName="extract-utilities" Oct 11 09:35:08 crc kubenswrapper[5016]: I1011 09:35:08.426830 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="64e326c4-043d-4968-a44b-f5297940448d" containerName="extract-utilities" Oct 11 09:35:08 crc kubenswrapper[5016]: I1011 09:35:08.427063 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="64e326c4-043d-4968-a44b-f5297940448d" containerName="registry-server" Oct 11 09:35:08 crc kubenswrapper[5016]: I1011 09:35:08.429542 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wf8hv" Oct 11 09:35:08 crc kubenswrapper[5016]: I1011 09:35:08.437922 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wf8hv"] Oct 11 09:35:08 crc kubenswrapper[5016]: I1011 09:35:08.603507 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q6x4\" (UniqueName: \"kubernetes.io/projected/152ecae0-5ddb-4767-9c59-cbac61c75e1e-kube-api-access-7q6x4\") pod \"redhat-marketplace-wf8hv\" (UID: \"152ecae0-5ddb-4767-9c59-cbac61c75e1e\") " pod="openshift-marketplace/redhat-marketplace-wf8hv" Oct 11 09:35:08 crc kubenswrapper[5016]: I1011 09:35:08.603727 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/152ecae0-5ddb-4767-9c59-cbac61c75e1e-utilities\") pod \"redhat-marketplace-wf8hv\" (UID: \"152ecae0-5ddb-4767-9c59-cbac61c75e1e\") " pod="openshift-marketplace/redhat-marketplace-wf8hv" Oct 11 09:35:08 crc kubenswrapper[5016]: I1011 09:35:08.603807 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/152ecae0-5ddb-4767-9c59-cbac61c75e1e-catalog-content\") pod \"redhat-marketplace-wf8hv\" (UID: \"152ecae0-5ddb-4767-9c59-cbac61c75e1e\") " pod="openshift-marketplace/redhat-marketplace-wf8hv" Oct 11 09:35:08 crc kubenswrapper[5016]: I1011 09:35:08.706068 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/152ecae0-5ddb-4767-9c59-cbac61c75e1e-utilities\") pod \"redhat-marketplace-wf8hv\" (UID: \"152ecae0-5ddb-4767-9c59-cbac61c75e1e\") " pod="openshift-marketplace/redhat-marketplace-wf8hv" Oct 11 09:35:08 crc kubenswrapper[5016]: I1011 09:35:08.706207 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/152ecae0-5ddb-4767-9c59-cbac61c75e1e-catalog-content\") pod \"redhat-marketplace-wf8hv\" (UID: \"152ecae0-5ddb-4767-9c59-cbac61c75e1e\") " pod="openshift-marketplace/redhat-marketplace-wf8hv" Oct 11 09:35:08 crc kubenswrapper[5016]: I1011 09:35:08.706263 5016 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q6x4\" (UniqueName: \"kubernetes.io/projected/152ecae0-5ddb-4767-9c59-cbac61c75e1e-kube-api-access-7q6x4\") pod \"redhat-marketplace-wf8hv\" (UID: \"152ecae0-5ddb-4767-9c59-cbac61c75e1e\") " pod="openshift-marketplace/redhat-marketplace-wf8hv" Oct 11 09:35:08 crc kubenswrapper[5016]: I1011 09:35:08.706778 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/152ecae0-5ddb-4767-9c59-cbac61c75e1e-utilities\") pod \"redhat-marketplace-wf8hv\" (UID: \"152ecae0-5ddb-4767-9c59-cbac61c75e1e\") " pod="openshift-marketplace/redhat-marketplace-wf8hv" Oct 11 09:35:08 crc kubenswrapper[5016]: I1011 09:35:08.707012 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/152ecae0-5ddb-4767-9c59-cbac61c75e1e-catalog-content\") pod \"redhat-marketplace-wf8hv\" (UID: \"152ecae0-5ddb-4767-9c59-cbac61c75e1e\") " pod="openshift-marketplace/redhat-marketplace-wf8hv" Oct 11 09:35:08 crc kubenswrapper[5016]: I1011 09:35:08.731627 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q6x4\" (UniqueName: \"kubernetes.io/projected/152ecae0-5ddb-4767-9c59-cbac61c75e1e-kube-api-access-7q6x4\") pod \"redhat-marketplace-wf8hv\" (UID: \"152ecae0-5ddb-4767-9c59-cbac61c75e1e\") " pod="openshift-marketplace/redhat-marketplace-wf8hv" Oct 11 09:35:08 crc kubenswrapper[5016]: I1011 09:35:08.793111 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wf8hv" Oct 11 09:35:09 crc kubenswrapper[5016]: I1011 09:35:09.259274 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wf8hv"] Oct 11 09:35:09 crc kubenswrapper[5016]: I1011 09:35:09.493811 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wf8hv" event={"ID":"152ecae0-5ddb-4767-9c59-cbac61c75e1e","Type":"ContainerStarted","Data":"43cc6b25a7e56d8a364eb9d56ff85a7bae6f68768aa6b0df3947f4a410a83fde"} Oct 11 09:35:10 crc kubenswrapper[5016]: I1011 09:35:10.506194 5016 generic.go:334] "Generic (PLEG): container finished" podID="152ecae0-5ddb-4767-9c59-cbac61c75e1e" containerID="cde33e15b4dfd83fb0d2ce1b475f289662df8360d0e571721a610b2abc6e503a" exitCode=0 Oct 11 09:35:10 crc kubenswrapper[5016]: I1011 09:35:10.506320 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wf8hv" event={"ID":"152ecae0-5ddb-4767-9c59-cbac61c75e1e","Type":"ContainerDied","Data":"cde33e15b4dfd83fb0d2ce1b475f289662df8360d0e571721a610b2abc6e503a"} Oct 11 09:35:11 crc kubenswrapper[5016]: I1011 09:35:11.517620 5016 generic.go:334] "Generic (PLEG): container finished" podID="152ecae0-5ddb-4767-9c59-cbac61c75e1e" containerID="3e3a11a83807b7161a59f0dba73ccaf756b9569b69a8739ecb316e0e7947b9aa" exitCode=0 Oct 11 09:35:11 crc kubenswrapper[5016]: I1011 09:35:11.517728 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wf8hv" event={"ID":"152ecae0-5ddb-4767-9c59-cbac61c75e1e","Type":"ContainerDied","Data":"3e3a11a83807b7161a59f0dba73ccaf756b9569b69a8739ecb316e0e7947b9aa"} Oct 11 09:35:12 crc kubenswrapper[5016]: I1011 09:35:12.535248 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wf8hv" 
event={"ID":"152ecae0-5ddb-4767-9c59-cbac61c75e1e","Type":"ContainerStarted","Data":"720b88a672d7e81ec1ee5f317369a4a1e814c253a42c66fe52006cf5798a1fa9"} Oct 11 09:35:12 crc kubenswrapper[5016]: I1011 09:35:12.559554 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wf8hv" podStartSLOduration=3.158295448 podStartE2EDuration="4.559529875s" podCreationTimestamp="2025-10-11 09:35:08 +0000 UTC" firstStartedPulling="2025-10-11 09:35:10.509271812 +0000 UTC m=+6898.409727758" lastFinishedPulling="2025-10-11 09:35:11.910506249 +0000 UTC m=+6899.810962185" observedRunningTime="2025-10-11 09:35:12.557362697 +0000 UTC m=+6900.457818683" watchObservedRunningTime="2025-10-11 09:35:12.559529875 +0000 UTC m=+6900.459985851" Oct 11 09:35:18 crc kubenswrapper[5016]: I1011 09:35:18.794507 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wf8hv" Oct 11 09:35:18 crc kubenswrapper[5016]: I1011 09:35:18.795401 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wf8hv" Oct 11 09:35:18 crc kubenswrapper[5016]: I1011 09:35:18.849705 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wf8hv" Oct 11 09:35:19 crc kubenswrapper[5016]: I1011 09:35:19.652720 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wf8hv" Oct 11 09:35:19 crc kubenswrapper[5016]: I1011 09:35:19.711218 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wf8hv"] Oct 11 09:35:21 crc kubenswrapper[5016]: I1011 09:35:21.632538 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wf8hv" podUID="152ecae0-5ddb-4767-9c59-cbac61c75e1e" containerName="registry-server" containerID="cri-o://720b88a672d7e81ec1ee5f317369a4a1e814c253a42c66fe52006cf5798a1fa9" gracePeriod=2 Oct 11 09:35:22 crc kubenswrapper[5016]: I1011 09:35:22.101546 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wf8hv" Oct 11 09:35:22 crc kubenswrapper[5016]: I1011 09:35:22.229695 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7q6x4\" (UniqueName: \"kubernetes.io/projected/152ecae0-5ddb-4767-9c59-cbac61c75e1e-kube-api-access-7q6x4\") pod \"152ecae0-5ddb-4767-9c59-cbac61c75e1e\" (UID: \"152ecae0-5ddb-4767-9c59-cbac61c75e1e\") " Oct 11 09:35:22 crc kubenswrapper[5016]: I1011 09:35:22.229762 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/152ecae0-5ddb-4767-9c59-cbac61c75e1e-utilities\") pod \"152ecae0-5ddb-4767-9c59-cbac61c75e1e\" (UID: \"152ecae0-5ddb-4767-9c59-cbac61c75e1e\") " Oct 11 09:35:22 crc kubenswrapper[5016]: I1011 09:35:22.229787 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/152ecae0-5ddb-4767-9c59-cbac61c75e1e-catalog-content\") pod \"152ecae0-5ddb-4767-9c59-cbac61c75e1e\" (UID: \"152ecae0-5ddb-4767-9c59-cbac61c75e1e\") " Oct 11 09:35:22 crc kubenswrapper[5016]: I1011 09:35:22.232961 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/152ecae0-5ddb-4767-9c59-cbac61c75e1e-utilities" (OuterVolumeSpecName: "utilities") pod "152ecae0-5ddb-4767-9c59-cbac61c75e1e" (UID: "152ecae0-5ddb-4767-9c59-cbac61c75e1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:35:22 crc kubenswrapper[5016]: I1011 09:35:22.240345 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/152ecae0-5ddb-4767-9c59-cbac61c75e1e-kube-api-access-7q6x4" (OuterVolumeSpecName: "kube-api-access-7q6x4") pod "152ecae0-5ddb-4767-9c59-cbac61c75e1e" (UID: "152ecae0-5ddb-4767-9c59-cbac61c75e1e"). InnerVolumeSpecName "kube-api-access-7q6x4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 09:35:22 crc kubenswrapper[5016]: I1011 09:35:22.251921 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/152ecae0-5ddb-4767-9c59-cbac61c75e1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "152ecae0-5ddb-4767-9c59-cbac61c75e1e" (UID: "152ecae0-5ddb-4767-9c59-cbac61c75e1e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:35:22 crc kubenswrapper[5016]: I1011 09:35:22.332796 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7q6x4\" (UniqueName: \"kubernetes.io/projected/152ecae0-5ddb-4767-9c59-cbac61c75e1e-kube-api-access-7q6x4\") on node \"crc\" DevicePath \"\"" Oct 11 09:35:22 crc kubenswrapper[5016]: I1011 09:35:22.332845 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/152ecae0-5ddb-4767-9c59-cbac61c75e1e-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 09:35:22 crc kubenswrapper[5016]: I1011 09:35:22.332855 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/152ecae0-5ddb-4767-9c59-cbac61c75e1e-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 09:35:22 crc kubenswrapper[5016]: I1011 09:35:22.645847 5016 generic.go:334] "Generic (PLEG): container finished" podID="152ecae0-5ddb-4767-9c59-cbac61c75e1e" containerID="720b88a672d7e81ec1ee5f317369a4a1e814c253a42c66fe52006cf5798a1fa9" exitCode=0 Oct 11 09:35:22 crc kubenswrapper[5016]: I1011 09:35:22.645905 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wf8hv" event={"ID":"152ecae0-5ddb-4767-9c59-cbac61c75e1e","Type":"ContainerDied","Data":"720b88a672d7e81ec1ee5f317369a4a1e814c253a42c66fe52006cf5798a1fa9"} Oct 11 09:35:22 crc kubenswrapper[5016]: I1011 09:35:22.645938 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wf8hv" event={"ID":"152ecae0-5ddb-4767-9c59-cbac61c75e1e","Type":"ContainerDied","Data":"43cc6b25a7e56d8a364eb9d56ff85a7bae6f68768aa6b0df3947f4a410a83fde"} Oct 11 09:35:22 crc kubenswrapper[5016]: I1011 09:35:22.646001 5016 scope.go:117] "RemoveContainer" containerID="720b88a672d7e81ec1ee5f317369a4a1e814c253a42c66fe52006cf5798a1fa9" Oct 11 09:35:22 crc kubenswrapper[5016]: I1011 09:35:22.646191 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wf8hv" Oct 11 09:35:22 crc kubenswrapper[5016]: I1011 09:35:22.698951 5016 scope.go:117] "RemoveContainer" containerID="3e3a11a83807b7161a59f0dba73ccaf756b9569b69a8739ecb316e0e7947b9aa" Oct 11 09:35:22 crc kubenswrapper[5016]: I1011 09:35:22.703774 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wf8hv"] Oct 11 09:35:22 crc kubenswrapper[5016]: I1011 09:35:22.717296 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wf8hv"] Oct 11 09:35:22 crc kubenswrapper[5016]: I1011 09:35:22.721101 5016 scope.go:117] "RemoveContainer" containerID="cde33e15b4dfd83fb0d2ce1b475f289662df8360d0e571721a610b2abc6e503a" Oct 11 09:35:22 crc kubenswrapper[5016]: I1011 09:35:22.775278 5016 scope.go:117] "RemoveContainer" containerID="720b88a672d7e81ec1ee5f317369a4a1e814c253a42c66fe52006cf5798a1fa9" Oct 11 09:35:22 crc kubenswrapper[5016]: E1011 09:35:22.776229 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"720b88a672d7e81ec1ee5f317369a4a1e814c253a42c66fe52006cf5798a1fa9\": container with ID starting with 720b88a672d7e81ec1ee5f317369a4a1e814c253a42c66fe52006cf5798a1fa9 not found: ID does not exist" containerID="720b88a672d7e81ec1ee5f317369a4a1e814c253a42c66fe52006cf5798a1fa9" Oct 11 09:35:22 crc kubenswrapper[5016]: I1011 09:35:22.776276 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"720b88a672d7e81ec1ee5f317369a4a1e814c253a42c66fe52006cf5798a1fa9"} err="failed to get container status \"720b88a672d7e81ec1ee5f317369a4a1e814c253a42c66fe52006cf5798a1fa9\": rpc error: code = NotFound desc = could not find container \"720b88a672d7e81ec1ee5f317369a4a1e814c253a42c66fe52006cf5798a1fa9\": container with ID starting with 720b88a672d7e81ec1ee5f317369a4a1e814c253a42c66fe52006cf5798a1fa9 not found: ID does not exist" Oct 11 09:35:22 crc kubenswrapper[5016]: I1011 09:35:22.776305 5016 scope.go:117] "RemoveContainer" containerID="3e3a11a83807b7161a59f0dba73ccaf756b9569b69a8739ecb316e0e7947b9aa" Oct 11 09:35:22 crc kubenswrapper[5016]: E1011 09:35:22.776980 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e3a11a83807b7161a59f0dba73ccaf756b9569b69a8739ecb316e0e7947b9aa\": container with ID starting with 3e3a11a83807b7161a59f0dba73ccaf756b9569b69a8739ecb316e0e7947b9aa not found: ID does not exist" containerID="3e3a11a83807b7161a59f0dba73ccaf756b9569b69a8739ecb316e0e7947b9aa" Oct 11 09:35:22 crc kubenswrapper[5016]: I1011 09:35:22.777274 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e3a11a83807b7161a59f0dba73ccaf756b9569b69a8739ecb316e0e7947b9aa"} err="failed to get container status \"3e3a11a83807b7161a59f0dba73ccaf756b9569b69a8739ecb316e0e7947b9aa\": rpc error: code = NotFound desc = could not find container \"3e3a11a83807b7161a59f0dba73ccaf756b9569b69a8739ecb316e0e7947b9aa\": container with ID starting with 3e3a11a83807b7161a59f0dba73ccaf756b9569b69a8739ecb316e0e7947b9aa not found: ID does not exist" Oct 11 09:35:22 crc kubenswrapper[5016]: I1011 09:35:22.777327 5016 scope.go:117] "RemoveContainer" containerID="cde33e15b4dfd83fb0d2ce1b475f289662df8360d0e571721a610b2abc6e503a" Oct 11 09:35:22 crc kubenswrapper[5016]: E1011 09:35:22.777847 5016 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"cde33e15b4dfd83fb0d2ce1b475f289662df8360d0e571721a610b2abc6e503a\": container with ID starting with cde33e15b4dfd83fb0d2ce1b475f289662df8360d0e571721a610b2abc6e503a not found: ID does not exist" containerID="cde33e15b4dfd83fb0d2ce1b475f289662df8360d0e571721a610b2abc6e503a" Oct 11 09:35:22 crc kubenswrapper[5016]: I1011 09:35:22.777887 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cde33e15b4dfd83fb0d2ce1b475f289662df8360d0e571721a610b2abc6e503a"} err="failed to get container status \"cde33e15b4dfd83fb0d2ce1b475f289662df8360d0e571721a610b2abc6e503a\": rpc error: code = NotFound desc = could not find container \"cde33e15b4dfd83fb0d2ce1b475f289662df8360d0e571721a610b2abc6e503a\": container with ID starting with cde33e15b4dfd83fb0d2ce1b475f289662df8360d0e571721a610b2abc6e503a not found: ID does not exist" Oct 11 09:35:23 crc kubenswrapper[5016]: I1011 09:35:23.147410 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="152ecae0-5ddb-4767-9c59-cbac61c75e1e" path="/var/lib/kubelet/pods/152ecae0-5ddb-4767-9c59-cbac61c75e1e/volumes" Oct 11 09:36:37 crc kubenswrapper[5016]: I1011 09:36:37.122850 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 09:36:37 crc kubenswrapper[5016]: I1011 09:36:37.123786 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 09:37:07 crc kubenswrapper[5016]: I1011 09:37:07.122563 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 09:37:07 crc kubenswrapper[5016]: I1011 09:37:07.123387 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 09:37:37 crc kubenswrapper[5016]: I1011 09:37:37.122487 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 09:37:37 crc kubenswrapper[5016]: I1011 09:37:37.123180 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 09:37:37 crc kubenswrapper[5016]: I1011 09:37:37.123227 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 09:37:37 crc kubenswrapper[5016]: I1011 09:37:37.123983 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae"} pod="openshift-machine-config-operator/machine-config-daemon-49bvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 11 09:37:37 crc kubenswrapper[5016]: I1011 09:37:37.124037 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" containerID="cri-o://82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae" gracePeriod=600 Oct 11 09:37:37 crc kubenswrapper[5016]: E1011 09:37:37.247095 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:37:38 crc kubenswrapper[5016]: I1011 09:37:38.026700 5016 generic.go:334] "Generic (PLEG): container finished" podID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerID="82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae" exitCode=0 Oct 11 09:37:38 crc kubenswrapper[5016]: I1011 09:37:38.026743 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerDied","Data":"82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae"} Oct 11 09:37:38 crc kubenswrapper[5016]: I1011 09:37:38.026778 5016 scope.go:117] "RemoveContainer" containerID="05b2883651c982a6a04c187acf3e457a66e58f38612201a9ddc672a141007ce1" Oct 11 09:37:38 crc kubenswrapper[5016]: I1011 09:37:38.027441 5016 scope.go:117] "RemoveContainer" containerID="82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae" Oct 11 09:37:38 crc kubenswrapper[5016]: E1011 09:37:38.027823 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:37:53 crc kubenswrapper[5016]: I1011 09:37:53.147755 5016 scope.go:117] "RemoveContainer" containerID="82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae" Oct 11 09:37:53 crc kubenswrapper[5016]: E1011 09:37:53.148722 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:38:07 crc 
kubenswrapper[5016]: I1011 09:38:07.133093 5016 scope.go:117] "RemoveContainer" containerID="82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae" Oct 11 09:38:07 crc kubenswrapper[5016]: E1011 09:38:07.133947 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:38:19 crc kubenswrapper[5016]: I1011 09:38:19.133195 5016 scope.go:117] "RemoveContainer" containerID="82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae" Oct 11 09:38:19 crc kubenswrapper[5016]: E1011 09:38:19.134186 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:38:33 crc kubenswrapper[5016]: I1011 09:38:33.138365 5016 scope.go:117] "RemoveContainer" containerID="82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae" Oct 11 09:38:33 crc kubenswrapper[5016]: E1011 09:38:33.139083 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:38:44 crc kubenswrapper[5016]: I1011 09:38:44.340820 5016 scope.go:117] "RemoveContainer" containerID="999eacb1c1d6d7e93665c34f3319deb287763809f18b4935634b5699f03b38dc" Oct 11 09:38:44 crc kubenswrapper[5016]: I1011 09:38:44.386033 5016 scope.go:117] "RemoveContainer" containerID="a5fe194c05d2178d1b1dd481453edafab7a04164c7a4485ca9076a0de3f5b768" Oct 11 09:38:44 crc kubenswrapper[5016]: I1011 09:38:44.424857 5016 scope.go:117] "RemoveContainer" containerID="0a8a0ca24d50a081d2354c146b0d81916984f6b4d1cdd48d3b000c82e633e692" Oct 11 09:38:44 crc kubenswrapper[5016]: I1011 09:38:44.464056 5016 scope.go:117] "RemoveContainer" containerID="3c15b1ba225f8b180b7836258bd66b69bebe57cf2e2334de76888b066066727c" Oct 11 09:38:44 crc kubenswrapper[5016]: I1011 09:38:44.510811 5016 scope.go:117] "RemoveContainer" containerID="a3b2ebc43bfa56f9555b643baed34bdf2ebe5931422028433f79acb9ba7e9ca2" Oct 11 09:38:44 crc kubenswrapper[5016]: I1011 09:38:44.554548 5016 scope.go:117] "RemoveContainer" containerID="b9e11337b9c5a4474abf2978e1a884c5047dae8dba78b7addfb4c4640d24b576" Oct 11 09:38:47 crc kubenswrapper[5016]: I1011 09:38:47.134010 5016 scope.go:117] "RemoveContainer" containerID="82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae" Oct 11 09:38:47 crc kubenswrapper[5016]: E1011 09:38:47.135220 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:39:01 crc kubenswrapper[5016]: I1011 09:39:01.133425 5016 scope.go:117] "RemoveContainer" containerID="82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae" Oct 11 09:39:01 crc kubenswrapper[5016]: E1011 09:39:01.134276 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:39:12 crc kubenswrapper[5016]: I1011 09:39:12.134353 5016 scope.go:117] "RemoveContainer" containerID="82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae" Oct 11 09:39:12 crc kubenswrapper[5016]: E1011 09:39:12.135929 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:39:26 crc kubenswrapper[5016]: I1011 09:39:26.134489 5016 scope.go:117] "RemoveContainer" containerID="82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae" Oct 11 09:39:26 crc kubenswrapper[5016]: E1011 09:39:26.135770 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:39:37 crc kubenswrapper[5016]: I1011 09:39:37.134348 5016 scope.go:117] "RemoveContainer" containerID="82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae" Oct 11 09:39:37 crc kubenswrapper[5016]: E1011 09:39:37.135797 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:39:50 crc kubenswrapper[5016]: I1011 09:39:50.133474 5016 scope.go:117] "RemoveContainer" containerID="82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae" Oct 11 09:39:50 crc kubenswrapper[5016]: E1011 09:39:50.134367 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" 
podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:40:02 crc kubenswrapper[5016]: I1011 09:40:02.133966 5016 scope.go:117] "RemoveContainer" containerID="82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae" Oct 11 09:40:02 crc kubenswrapper[5016]: E1011 09:40:02.134881 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:40:15 crc kubenswrapper[5016]: I1011 09:40:15.133388 5016 scope.go:117] "RemoveContainer" containerID="82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae" Oct 11 09:40:15 crc kubenswrapper[5016]: E1011 09:40:15.134425 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:40:26 crc kubenswrapper[5016]: I1011 09:40:26.133797 5016 scope.go:117] "RemoveContainer" containerID="82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae" Oct 11 09:40:26 crc kubenswrapper[5016]: E1011 09:40:26.134762 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:40:41 crc kubenswrapper[5016]: I1011 09:40:41.173006 5016 scope.go:117] "RemoveContainer" containerID="82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae" Oct 11 09:40:41 crc kubenswrapper[5016]: E1011 09:40:41.175221 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:40:54 crc kubenswrapper[5016]: I1011 09:40:54.133589 5016 scope.go:117] "RemoveContainer" containerID="82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae" Oct 11 09:40:54 crc kubenswrapper[5016]: E1011 09:40:54.134587 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:41:07 crc kubenswrapper[5016]: I1011 09:41:07.135886 5016 scope.go:117] "RemoveContainer" 
containerID="82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae" Oct 11 09:41:07 crc kubenswrapper[5016]: E1011 09:41:07.137071 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:41:21 crc kubenswrapper[5016]: I1011 09:41:21.133572 5016 scope.go:117] "RemoveContainer" containerID="82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae" Oct 11 09:41:21 crc kubenswrapper[5016]: E1011 09:41:21.134486 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:41:33 crc kubenswrapper[5016]: I1011 09:41:33.139771 5016 scope.go:117] "RemoveContainer" containerID="82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae" Oct 11 09:41:33 crc kubenswrapper[5016]: E1011 09:41:33.140457 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:41:47 crc kubenswrapper[5016]: I1011 09:41:47.135399 5016 scope.go:117] "RemoveContainer" containerID="82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae" Oct 11 09:41:47 crc kubenswrapper[5016]: E1011 09:41:47.136564 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:42:00 crc kubenswrapper[5016]: I1011 09:42:00.134074 5016 scope.go:117] "RemoveContainer" containerID="82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae" Oct 11 09:42:00 crc kubenswrapper[5016]: E1011 09:42:00.135355 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:42:15 crc kubenswrapper[5016]: I1011 09:42:15.134281 5016 scope.go:117] "RemoveContainer" containerID="82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae" Oct 11 09:42:15 crc kubenswrapper[5016]: E1011 09:42:15.136052 5016 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:42:29 crc kubenswrapper[5016]: I1011 09:42:29.133086 5016 scope.go:117] "RemoveContainer" containerID="82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae" Oct 11 09:42:29 crc kubenswrapper[5016]: E1011 09:42:29.133993 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:42:41 crc kubenswrapper[5016]: I1011 09:42:41.133420 5016 scope.go:117] "RemoveContainer" containerID="82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae" Oct 11 09:42:42 crc kubenswrapper[5016]: I1011 09:42:42.201644 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"f48a1030019a6b6a35bcdf3b180215d9d8e0d5b3e7e072fed5d760a39b504042"} Oct 11 09:43:10 crc kubenswrapper[5016]: I1011 09:43:10.336053 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f28xc"] Oct 11 09:43:10 crc kubenswrapper[5016]: E1011 09:43:10.337058 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="152ecae0-5ddb-4767-9c59-cbac61c75e1e" containerName="extract-utilities" Oct 11 09:43:10 crc kubenswrapper[5016]: I1011 09:43:10.337072 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="152ecae0-5ddb-4767-9c59-cbac61c75e1e" containerName="extract-utilities" Oct 11 09:43:10 crc kubenswrapper[5016]: E1011 09:43:10.337086 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="152ecae0-5ddb-4767-9c59-cbac61c75e1e" containerName="registry-server" Oct 11 09:43:10 crc kubenswrapper[5016]: I1011 09:43:10.337092 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="152ecae0-5ddb-4767-9c59-cbac61c75e1e" containerName="registry-server" Oct 11 09:43:10 crc kubenswrapper[5016]: E1011 09:43:10.337109 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="152ecae0-5ddb-4767-9c59-cbac61c75e1e" containerName="extract-content" Oct 11 09:43:10 crc kubenswrapper[5016]: I1011 09:43:10.337117 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="152ecae0-5ddb-4767-9c59-cbac61c75e1e" containerName="extract-content" Oct 11 09:43:10 crc kubenswrapper[5016]: I1011 09:43:10.337288 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="152ecae0-5ddb-4767-9c59-cbac61c75e1e" containerName="registry-server" Oct 11 09:43:10 crc kubenswrapper[5016]: I1011 09:43:10.338631 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f28xc" Oct 11 09:43:10 crc kubenswrapper[5016]: I1011 09:43:10.373059 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f28xc"] Oct 11 09:43:10 crc kubenswrapper[5016]: I1011 09:43:10.442688 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e664472f-6f37-4ace-84ca-7999b97b0a2e-utilities\") pod \"community-operators-f28xc\" (UID: \"e664472f-6f37-4ace-84ca-7999b97b0a2e\") " pod="openshift-marketplace/community-operators-f28xc" Oct 11 09:43:10 crc kubenswrapper[5016]: I1011 09:43:10.442861 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmdtw\" (UniqueName: \"kubernetes.io/projected/e664472f-6f37-4ace-84ca-7999b97b0a2e-kube-api-access-bmdtw\") pod \"community-operators-f28xc\" (UID: \"e664472f-6f37-4ace-84ca-7999b97b0a2e\") " pod="openshift-marketplace/community-operators-f28xc" Oct 11 09:43:10 crc kubenswrapper[5016]: I1011 09:43:10.444135 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e664472f-6f37-4ace-84ca-7999b97b0a2e-catalog-content\") pod \"community-operators-f28xc\" (UID: \"e664472f-6f37-4ace-84ca-7999b97b0a2e\") " pod="openshift-marketplace/community-operators-f28xc" Oct 11 09:43:10 crc kubenswrapper[5016]: I1011 09:43:10.546468 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmdtw\" (UniqueName: \"kubernetes.io/projected/e664472f-6f37-4ace-84ca-7999b97b0a2e-kube-api-access-bmdtw\") pod \"community-operators-f28xc\" (UID: \"e664472f-6f37-4ace-84ca-7999b97b0a2e\") " pod="openshift-marketplace/community-operators-f28xc" Oct 11 09:43:10 crc kubenswrapper[5016]: I1011 09:43:10.546675 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e664472f-6f37-4ace-84ca-7999b97b0a2e-catalog-content\") pod \"community-operators-f28xc\" (UID: \"e664472f-6f37-4ace-84ca-7999b97b0a2e\") " pod="openshift-marketplace/community-operators-f28xc" Oct 11 09:43:10 crc kubenswrapper[5016]: I1011 09:43:10.546713 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e664472f-6f37-4ace-84ca-7999b97b0a2e-utilities\") pod \"community-operators-f28xc\" (UID: \"e664472f-6f37-4ace-84ca-7999b97b0a2e\") " pod="openshift-marketplace/community-operators-f28xc" Oct 11 09:43:10 crc kubenswrapper[5016]: I1011 09:43:10.547124 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e664472f-6f37-4ace-84ca-7999b97b0a2e-catalog-content\") pod \"community-operators-f28xc\" (UID: \"e664472f-6f37-4ace-84ca-7999b97b0a2e\") " pod="openshift-marketplace/community-operators-f28xc" Oct 11 09:43:10 crc kubenswrapper[5016]: I1011 09:43:10.547206 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e664472f-6f37-4ace-84ca-7999b97b0a2e-utilities\") pod \"community-operators-f28xc\" (UID: \"e664472f-6f37-4ace-84ca-7999b97b0a2e\") " pod="openshift-marketplace/community-operators-f28xc" Oct 11 09:43:10 crc kubenswrapper[5016]: I1011 09:43:10.572547 5016 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bmdtw\" (UniqueName: \"kubernetes.io/projected/e664472f-6f37-4ace-84ca-7999b97b0a2e-kube-api-access-bmdtw\") pod \"community-operators-f28xc\" (UID: \"e664472f-6f37-4ace-84ca-7999b97b0a2e\") " pod="openshift-marketplace/community-operators-f28xc" Oct 11 09:43:10 crc kubenswrapper[5016]: I1011 09:43:10.656791 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f28xc" Oct 11 09:43:11 crc kubenswrapper[5016]: I1011 09:43:11.230580 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f28xc"] Oct 11 09:43:11 crc kubenswrapper[5016]: I1011 09:43:11.538056 5016 generic.go:334] "Generic (PLEG): container finished" podID="e664472f-6f37-4ace-84ca-7999b97b0a2e" containerID="8cbe8b62415459fbb3ac0dd6d8747df50323d825821e15fdd2cb90c697e61cd4" exitCode=0 Oct 11 09:43:11 crc kubenswrapper[5016]: I1011 09:43:11.538151 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f28xc" event={"ID":"e664472f-6f37-4ace-84ca-7999b97b0a2e","Type":"ContainerDied","Data":"8cbe8b62415459fbb3ac0dd6d8747df50323d825821e15fdd2cb90c697e61cd4"} Oct 11 09:43:11 crc kubenswrapper[5016]: I1011 09:43:11.538520 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f28xc" event={"ID":"e664472f-6f37-4ace-84ca-7999b97b0a2e","Type":"ContainerStarted","Data":"7b63c771c3d89149f5e54277c08313f445706b1149f075a354b119ca0c90bd86"} Oct 11 09:43:11 crc kubenswrapper[5016]: I1011 09:43:11.540403 5016 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 09:43:12 crc kubenswrapper[5016]: I1011 09:43:12.550480 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f28xc" event={"ID":"e664472f-6f37-4ace-84ca-7999b97b0a2e","Type":"ContainerStarted","Data":"a41c95f761a852191dec03c64a659375eed2b4885a5442d76126d4b0539deebb"} Oct 11 09:43:12 crc kubenswrapper[5016]: I1011 09:43:12.739119 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hjzvb"] Oct 11 09:43:12 crc kubenswrapper[5016]: I1011 09:43:12.742043 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hjzvb" Oct 11 09:43:12 crc kubenswrapper[5016]: I1011 09:43:12.759437 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hjzvb"] Oct 11 09:43:12 crc kubenswrapper[5016]: I1011 09:43:12.892625 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2hjg\" (UniqueName: \"kubernetes.io/projected/2ea2f498-04ef-4fb2-9bbd-ea467c49ef66-kube-api-access-v2hjg\") pod \"certified-operators-hjzvb\" (UID: \"2ea2f498-04ef-4fb2-9bbd-ea467c49ef66\") " pod="openshift-marketplace/certified-operators-hjzvb" Oct 11 09:43:12 crc kubenswrapper[5016]: I1011 09:43:12.893010 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ea2f498-04ef-4fb2-9bbd-ea467c49ef66-catalog-content\") pod \"certified-operators-hjzvb\" (UID: \"2ea2f498-04ef-4fb2-9bbd-ea467c49ef66\") " pod="openshift-marketplace/certified-operators-hjzvb" Oct 11 09:43:12 crc kubenswrapper[5016]: I1011 09:43:12.893650 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ea2f498-04ef-4fb2-9bbd-ea467c49ef66-utilities\") pod \"certified-operators-hjzvb\" (UID: \"2ea2f498-04ef-4fb2-9bbd-ea467c49ef66\") " pod="openshift-marketplace/certified-operators-hjzvb" Oct 11 09:43:12 crc kubenswrapper[5016]: I1011 09:43:12.995572 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2hjg\" (UniqueName: \"kubernetes.io/projected/2ea2f498-04ef-4fb2-9bbd-ea467c49ef66-kube-api-access-v2hjg\") pod \"certified-operators-hjzvb\" (UID: \"2ea2f498-04ef-4fb2-9bbd-ea467c49ef66\") " pod="openshift-marketplace/certified-operators-hjzvb" Oct 11 09:43:12 crc kubenswrapper[5016]: I1011 09:43:12.995746 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ea2f498-04ef-4fb2-9bbd-ea467c49ef66-catalog-content\") pod \"certified-operators-hjzvb\" (UID: \"2ea2f498-04ef-4fb2-9bbd-ea467c49ef66\") " pod="openshift-marketplace/certified-operators-hjzvb" Oct 11 09:43:12 crc kubenswrapper[5016]: I1011 09:43:12.995881 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ea2f498-04ef-4fb2-9bbd-ea467c49ef66-utilities\") pod \"certified-operators-hjzvb\" (UID: \"2ea2f498-04ef-4fb2-9bbd-ea467c49ef66\") " pod="openshift-marketplace/certified-operators-hjzvb" Oct 11 09:43:12 crc kubenswrapper[5016]: I1011 09:43:12.996424 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ea2f498-04ef-4fb2-9bbd-ea467c49ef66-catalog-content\") pod \"certified-operators-hjzvb\" (UID: \"2ea2f498-04ef-4fb2-9bbd-ea467c49ef66\") " pod="openshift-marketplace/certified-operators-hjzvb" Oct 11 09:43:12 crc kubenswrapper[5016]: I1011 09:43:12.996450 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ea2f498-04ef-4fb2-9bbd-ea467c49ef66-utilities\") pod \"certified-operators-hjzvb\" (UID: \"2ea2f498-04ef-4fb2-9bbd-ea467c49ef66\") " pod="openshift-marketplace/certified-operators-hjzvb" Oct 11 09:43:13 crc kubenswrapper[5016]: I1011 09:43:13.029775 5016 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-v2hjg\" (UniqueName: \"kubernetes.io/projected/2ea2f498-04ef-4fb2-9bbd-ea467c49ef66-kube-api-access-v2hjg\") pod \"certified-operators-hjzvb\" (UID: \"2ea2f498-04ef-4fb2-9bbd-ea467c49ef66\") " pod="openshift-marketplace/certified-operators-hjzvb" Oct 11 09:43:13 crc kubenswrapper[5016]: I1011 09:43:13.061799 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hjzvb" Oct 11 09:43:13 crc kubenswrapper[5016]: I1011 09:43:13.563370 5016 generic.go:334] "Generic (PLEG): container finished" podID="e664472f-6f37-4ace-84ca-7999b97b0a2e" containerID="a41c95f761a852191dec03c64a659375eed2b4885a5442d76126d4b0539deebb" exitCode=0 Oct 11 09:43:13 crc kubenswrapper[5016]: I1011 09:43:13.563431 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f28xc" event={"ID":"e664472f-6f37-4ace-84ca-7999b97b0a2e","Type":"ContainerDied","Data":"a41c95f761a852191dec03c64a659375eed2b4885a5442d76126d4b0539deebb"} Oct 11 09:43:13 crc kubenswrapper[5016]: I1011 09:43:13.625290 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hjzvb"] Oct 11 09:43:14 crc kubenswrapper[5016]: I1011 09:43:14.578610 5016 generic.go:334] "Generic (PLEG): container finished" podID="2ea2f498-04ef-4fb2-9bbd-ea467c49ef66" containerID="cb13d9b089d9b1136923ce2c80c8f8beb5c067bc3914e65abbd4fff1febacef5" exitCode=0 Oct 11 09:43:14 crc kubenswrapper[5016]: I1011 09:43:14.578707 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hjzvb" event={"ID":"2ea2f498-04ef-4fb2-9bbd-ea467c49ef66","Type":"ContainerDied","Data":"cb13d9b089d9b1136923ce2c80c8f8beb5c067bc3914e65abbd4fff1febacef5"} Oct 11 09:43:14 crc kubenswrapper[5016]: I1011 09:43:14.579534 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hjzvb" event={"ID":"2ea2f498-04ef-4fb2-9bbd-ea467c49ef66","Type":"ContainerStarted","Data":"6e9779923aa609eab89027a62b9853a5a99b42d120d3ba8b101c3f1cde11a7f5"} Oct 11 09:43:14 crc kubenswrapper[5016]: I1011 09:43:14.582938 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f28xc" event={"ID":"e664472f-6f37-4ace-84ca-7999b97b0a2e","Type":"ContainerStarted","Data":"179dab9d94769e9ef583b942831d6f86c595e6f15d942d402e1ea50c8b6c7961"} Oct 11 09:43:16 crc kubenswrapper[5016]: I1011 09:43:16.611321 5016 generic.go:334] "Generic (PLEG): container finished" podID="2ea2f498-04ef-4fb2-9bbd-ea467c49ef66" containerID="2bc22ef34171ed552f2600053c96fd07589de2fa2b3f9dca10168e5283361c8a" exitCode=0 Oct 11 09:43:16 crc kubenswrapper[5016]: I1011 09:43:16.611616 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hjzvb" event={"ID":"2ea2f498-04ef-4fb2-9bbd-ea467c49ef66","Type":"ContainerDied","Data":"2bc22ef34171ed552f2600053c96fd07589de2fa2b3f9dca10168e5283361c8a"} Oct 11 09:43:16 crc kubenswrapper[5016]: I1011 09:43:16.639456 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f28xc" podStartSLOduration=4.234219499 podStartE2EDuration="6.639429115s" podCreationTimestamp="2025-10-11 09:43:10 +0000 UTC" firstStartedPulling="2025-10-11 09:43:11.540157099 +0000 UTC m=+7379.440613055" lastFinishedPulling="2025-10-11 09:43:13.945366685 +0000 UTC m=+7381.845822671" observedRunningTime="2025-10-11 
09:43:14.632211584 +0000 UTC m=+7382.532667540" watchObservedRunningTime="2025-10-11 09:43:16.639429115 +0000 UTC m=+7384.539885071" Oct 11 09:43:17 crc kubenswrapper[5016]: I1011 09:43:17.631997 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hjzvb" event={"ID":"2ea2f498-04ef-4fb2-9bbd-ea467c49ef66","Type":"ContainerStarted","Data":"06a25474e86db1007c5e095c5e3e7b9c558c134f967c8cb5cf7e758d842f4afa"} Oct 11 09:43:17 crc kubenswrapper[5016]: I1011 09:43:17.653474 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hjzvb" podStartSLOduration=3.180943659 podStartE2EDuration="5.653432841s" podCreationTimestamp="2025-10-11 09:43:12 +0000 UTC" firstStartedPulling="2025-10-11 09:43:14.580616865 +0000 UTC m=+7382.481072811" lastFinishedPulling="2025-10-11 09:43:17.053106037 +0000 UTC m=+7384.953561993" observedRunningTime="2025-10-11 09:43:17.648602852 +0000 UTC m=+7385.549058818" watchObservedRunningTime="2025-10-11 09:43:17.653432841 +0000 UTC m=+7385.553888787" Oct 11 09:43:20 crc kubenswrapper[5016]: I1011 09:43:20.657988 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f28xc" Oct 11 09:43:20 crc kubenswrapper[5016]: I1011 09:43:20.658365 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f28xc" Oct 11 09:43:20 crc kubenswrapper[5016]: I1011 09:43:20.729338 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f28xc" Oct 11 09:43:21 crc kubenswrapper[5016]: I1011 09:43:21.775772 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f28xc" Oct 11 09:43:21 crc kubenswrapper[5016]: I1011 09:43:21.934709 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f28xc"] Oct 11 09:43:23 crc kubenswrapper[5016]: I1011 09:43:23.062756 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hjzvb" Oct 11 09:43:23 crc kubenswrapper[5016]: I1011 09:43:23.062945 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hjzvb" Oct 11 09:43:23 crc kubenswrapper[5016]: I1011 09:43:23.159334 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hjzvb" Oct 11 09:43:23 crc kubenswrapper[5016]: I1011 09:43:23.713301 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f28xc" podUID="e664472f-6f37-4ace-84ca-7999b97b0a2e" containerName="registry-server" containerID="cri-o://179dab9d94769e9ef583b942831d6f86c595e6f15d942d402e1ea50c8b6c7961" gracePeriod=2 Oct 11 09:43:23 crc kubenswrapper[5016]: I1011 09:43:23.776854 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hjzvb" Oct 11 09:43:24 crc kubenswrapper[5016]: I1011 09:43:24.340161 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hjzvb"] Oct 11 09:43:24 crc kubenswrapper[5016]: I1011 09:43:24.478309 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f28xc" Oct 11 09:43:24 crc kubenswrapper[5016]: I1011 09:43:24.569460 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmdtw\" (UniqueName: \"kubernetes.io/projected/e664472f-6f37-4ace-84ca-7999b97b0a2e-kube-api-access-bmdtw\") pod \"e664472f-6f37-4ace-84ca-7999b97b0a2e\" (UID: \"e664472f-6f37-4ace-84ca-7999b97b0a2e\") " Oct 11 09:43:24 crc kubenswrapper[5016]: I1011 09:43:24.569854 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e664472f-6f37-4ace-84ca-7999b97b0a2e-utilities\") pod \"e664472f-6f37-4ace-84ca-7999b97b0a2e\" (UID: \"e664472f-6f37-4ace-84ca-7999b97b0a2e\") " Oct 11 09:43:24 crc kubenswrapper[5016]: I1011 09:43:24.570282 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e664472f-6f37-4ace-84ca-7999b97b0a2e-catalog-content\") pod \"e664472f-6f37-4ace-84ca-7999b97b0a2e\" (UID: \"e664472f-6f37-4ace-84ca-7999b97b0a2e\") " Oct 11 09:43:24 crc kubenswrapper[5016]: I1011 09:43:24.571608 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e664472f-6f37-4ace-84ca-7999b97b0a2e-utilities" (OuterVolumeSpecName: "utilities") pod "e664472f-6f37-4ace-84ca-7999b97b0a2e" (UID: "e664472f-6f37-4ace-84ca-7999b97b0a2e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:43:24 crc kubenswrapper[5016]: I1011 09:43:24.581052 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e664472f-6f37-4ace-84ca-7999b97b0a2e-kube-api-access-bmdtw" (OuterVolumeSpecName: "kube-api-access-bmdtw") pod "e664472f-6f37-4ace-84ca-7999b97b0a2e" (UID: "e664472f-6f37-4ace-84ca-7999b97b0a2e"). InnerVolumeSpecName "kube-api-access-bmdtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 09:43:24 crc kubenswrapper[5016]: I1011 09:43:24.640401 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e664472f-6f37-4ace-84ca-7999b97b0a2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e664472f-6f37-4ace-84ca-7999b97b0a2e" (UID: "e664472f-6f37-4ace-84ca-7999b97b0a2e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:43:24 crc kubenswrapper[5016]: I1011 09:43:24.673595 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e664472f-6f37-4ace-84ca-7999b97b0a2e-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 09:43:24 crc kubenswrapper[5016]: I1011 09:43:24.673683 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e664472f-6f37-4ace-84ca-7999b97b0a2e-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 09:43:24 crc kubenswrapper[5016]: I1011 09:43:24.673709 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmdtw\" (UniqueName: \"kubernetes.io/projected/e664472f-6f37-4ace-84ca-7999b97b0a2e-kube-api-access-bmdtw\") on node \"crc\" DevicePath \"\"" Oct 11 09:43:24 crc kubenswrapper[5016]: I1011 09:43:24.739232 5016 generic.go:334] "Generic (PLEG): container finished" podID="e664472f-6f37-4ace-84ca-7999b97b0a2e" containerID="179dab9d94769e9ef583b942831d6f86c595e6f15d942d402e1ea50c8b6c7961" exitCode=0 Oct 11 09:43:24 crc kubenswrapper[5016]: I1011 09:43:24.739929 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f28xc" Oct 11 09:43:24 crc kubenswrapper[5016]: I1011 09:43:24.739949 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f28xc" event={"ID":"e664472f-6f37-4ace-84ca-7999b97b0a2e","Type":"ContainerDied","Data":"179dab9d94769e9ef583b942831d6f86c595e6f15d942d402e1ea50c8b6c7961"} Oct 11 09:43:24 crc kubenswrapper[5016]: I1011 09:43:24.740102 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f28xc" event={"ID":"e664472f-6f37-4ace-84ca-7999b97b0a2e","Type":"ContainerDied","Data":"7b63c771c3d89149f5e54277c08313f445706b1149f075a354b119ca0c90bd86"} Oct 11 09:43:24 crc kubenswrapper[5016]: I1011 09:43:24.740130 5016 scope.go:117] "RemoveContainer" containerID="179dab9d94769e9ef583b942831d6f86c595e6f15d942d402e1ea50c8b6c7961" Oct 11 09:43:24 crc kubenswrapper[5016]: I1011 09:43:24.783946 5016 scope.go:117] "RemoveContainer" containerID="a41c95f761a852191dec03c64a659375eed2b4885a5442d76126d4b0539deebb" Oct 11 09:43:24 crc kubenswrapper[5016]: I1011 09:43:24.818358 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f28xc"] Oct 11 09:43:24 crc kubenswrapper[5016]: I1011 09:43:24.821989 5016 scope.go:117] "RemoveContainer" containerID="8cbe8b62415459fbb3ac0dd6d8747df50323d825821e15fdd2cb90c697e61cd4" Oct 11 09:43:24 crc kubenswrapper[5016]: I1011 09:43:24.834703 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f28xc"] Oct 11 09:43:24 crc kubenswrapper[5016]: I1011 09:43:24.864221 5016 scope.go:117] "RemoveContainer" containerID="179dab9d94769e9ef583b942831d6f86c595e6f15d942d402e1ea50c8b6c7961" Oct 11 09:43:24 crc kubenswrapper[5016]: E1011 09:43:24.864974 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"179dab9d94769e9ef583b942831d6f86c595e6f15d942d402e1ea50c8b6c7961\": container with ID starting with 179dab9d94769e9ef583b942831d6f86c595e6f15d942d402e1ea50c8b6c7961 not found: ID does not exist" containerID="179dab9d94769e9ef583b942831d6f86c595e6f15d942d402e1ea50c8b6c7961" Oct 11 09:43:24 crc kubenswrapper[5016]: I1011 09:43:24.865077 
5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"179dab9d94769e9ef583b942831d6f86c595e6f15d942d402e1ea50c8b6c7961"} err="failed to get container status \"179dab9d94769e9ef583b942831d6f86c595e6f15d942d402e1ea50c8b6c7961\": rpc error: code = NotFound desc = could not find container \"179dab9d94769e9ef583b942831d6f86c595e6f15d942d402e1ea50c8b6c7961\": container with ID starting with 179dab9d94769e9ef583b942831d6f86c595e6f15d942d402e1ea50c8b6c7961 not found: ID does not exist" Oct 11 09:43:24 crc kubenswrapper[5016]: I1011 09:43:24.865133 5016 scope.go:117] "RemoveContainer" containerID="a41c95f761a852191dec03c64a659375eed2b4885a5442d76126d4b0539deebb" Oct 11 09:43:24 crc kubenswrapper[5016]: E1011 09:43:24.865773 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a41c95f761a852191dec03c64a659375eed2b4885a5442d76126d4b0539deebb\": container with ID starting with a41c95f761a852191dec03c64a659375eed2b4885a5442d76126d4b0539deebb not found: ID does not exist" containerID="a41c95f761a852191dec03c64a659375eed2b4885a5442d76126d4b0539deebb" Oct 11 09:43:24 crc kubenswrapper[5016]: I1011 09:43:24.865819 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a41c95f761a852191dec03c64a659375eed2b4885a5442d76126d4b0539deebb"} err="failed to get container status \"a41c95f761a852191dec03c64a659375eed2b4885a5442d76126d4b0539deebb\": rpc error: code = NotFound desc = could not find container \"a41c95f761a852191dec03c64a659375eed2b4885a5442d76126d4b0539deebb\": container with ID starting with a41c95f761a852191dec03c64a659375eed2b4885a5442d76126d4b0539deebb not found: ID does not exist" Oct 11 09:43:24 crc kubenswrapper[5016]: I1011 09:43:24.865847 5016 scope.go:117] "RemoveContainer" containerID="8cbe8b62415459fbb3ac0dd6d8747df50323d825821e15fdd2cb90c697e61cd4" Oct 11 09:43:24 crc kubenswrapper[5016]: E1011 09:43:24.866452 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cbe8b62415459fbb3ac0dd6d8747df50323d825821e15fdd2cb90c697e61cd4\": container with ID starting with 8cbe8b62415459fbb3ac0dd6d8747df50323d825821e15fdd2cb90c697e61cd4 not found: ID does not exist" containerID="8cbe8b62415459fbb3ac0dd6d8747df50323d825821e15fdd2cb90c697e61cd4" Oct 11 09:43:24 crc kubenswrapper[5016]: I1011 09:43:24.866525 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cbe8b62415459fbb3ac0dd6d8747df50323d825821e15fdd2cb90c697e61cd4"} err="failed to get container status \"8cbe8b62415459fbb3ac0dd6d8747df50323d825821e15fdd2cb90c697e61cd4\": rpc error: code = NotFound desc = could not find container \"8cbe8b62415459fbb3ac0dd6d8747df50323d825821e15fdd2cb90c697e61cd4\": container with ID starting with 8cbe8b62415459fbb3ac0dd6d8747df50323d825821e15fdd2cb90c697e61cd4 not found: ID does not exist" Oct 11 09:43:25 crc kubenswrapper[5016]: I1011 09:43:25.145097 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e664472f-6f37-4ace-84ca-7999b97b0a2e" path="/var/lib/kubelet/pods/e664472f-6f37-4ace-84ca-7999b97b0a2e/volumes" Oct 11 09:43:25 crc kubenswrapper[5016]: I1011 09:43:25.755250 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hjzvb" podUID="2ea2f498-04ef-4fb2-9bbd-ea467c49ef66" containerName="registry-server" 
containerID="cri-o://06a25474e86db1007c5e095c5e3e7b9c558c134f967c8cb5cf7e758d842f4afa" gracePeriod=2 Oct 11 09:43:26 crc kubenswrapper[5016]: I1011 09:43:26.335412 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hjzvb" Oct 11 09:43:26 crc kubenswrapper[5016]: I1011 09:43:26.533065 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ea2f498-04ef-4fb2-9bbd-ea467c49ef66-utilities\") pod \"2ea2f498-04ef-4fb2-9bbd-ea467c49ef66\" (UID: \"2ea2f498-04ef-4fb2-9bbd-ea467c49ef66\") " Oct 11 09:43:26 crc kubenswrapper[5016]: I1011 09:43:26.533247 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2hjg\" (UniqueName: \"kubernetes.io/projected/2ea2f498-04ef-4fb2-9bbd-ea467c49ef66-kube-api-access-v2hjg\") pod \"2ea2f498-04ef-4fb2-9bbd-ea467c49ef66\" (UID: \"2ea2f498-04ef-4fb2-9bbd-ea467c49ef66\") " Oct 11 09:43:26 crc kubenswrapper[5016]: I1011 09:43:26.533333 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ea2f498-04ef-4fb2-9bbd-ea467c49ef66-catalog-content\") pod \"2ea2f498-04ef-4fb2-9bbd-ea467c49ef66\" (UID: \"2ea2f498-04ef-4fb2-9bbd-ea467c49ef66\") " Oct 11 09:43:26 crc kubenswrapper[5016]: I1011 09:43:26.535462 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ea2f498-04ef-4fb2-9bbd-ea467c49ef66-utilities" (OuterVolumeSpecName: "utilities") pod "2ea2f498-04ef-4fb2-9bbd-ea467c49ef66" (UID: "2ea2f498-04ef-4fb2-9bbd-ea467c49ef66"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:43:26 crc kubenswrapper[5016]: I1011 09:43:26.543728 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ea2f498-04ef-4fb2-9bbd-ea467c49ef66-kube-api-access-v2hjg" (OuterVolumeSpecName: "kube-api-access-v2hjg") pod "2ea2f498-04ef-4fb2-9bbd-ea467c49ef66" (UID: "2ea2f498-04ef-4fb2-9bbd-ea467c49ef66"). InnerVolumeSpecName "kube-api-access-v2hjg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 09:43:26 crc kubenswrapper[5016]: I1011 09:43:26.636755 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ea2f498-04ef-4fb2-9bbd-ea467c49ef66-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 09:43:26 crc kubenswrapper[5016]: I1011 09:43:26.636807 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2hjg\" (UniqueName: \"kubernetes.io/projected/2ea2f498-04ef-4fb2-9bbd-ea467c49ef66-kube-api-access-v2hjg\") on node \"crc\" DevicePath \"\"" Oct 11 09:43:26 crc kubenswrapper[5016]: I1011 09:43:26.768740 5016 generic.go:334] "Generic (PLEG): container finished" podID="2ea2f498-04ef-4fb2-9bbd-ea467c49ef66" containerID="06a25474e86db1007c5e095c5e3e7b9c558c134f967c8cb5cf7e758d842f4afa" exitCode=0 Oct 11 09:43:26 crc kubenswrapper[5016]: I1011 09:43:26.768809 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hjzvb" event={"ID":"2ea2f498-04ef-4fb2-9bbd-ea467c49ef66","Type":"ContainerDied","Data":"06a25474e86db1007c5e095c5e3e7b9c558c134f967c8cb5cf7e758d842f4afa"} Oct 11 09:43:26 crc kubenswrapper[5016]: I1011 09:43:26.768864 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hjzvb" event={"ID":"2ea2f498-04ef-4fb2-9bbd-ea467c49ef66","Type":"ContainerDied","Data":"6e9779923aa609eab89027a62b9853a5a99b42d120d3ba8b101c3f1cde11a7f5"} Oct 11 09:43:26 crc kubenswrapper[5016]: I1011 09:43:26.768897 5016 scope.go:117] "RemoveContainer" containerID="06a25474e86db1007c5e095c5e3e7b9c558c134f967c8cb5cf7e758d842f4afa" Oct 11 09:43:26 crc kubenswrapper[5016]: I1011 09:43:26.768891 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hjzvb" Oct 11 09:43:26 crc kubenswrapper[5016]: I1011 09:43:26.800892 5016 scope.go:117] "RemoveContainer" containerID="2bc22ef34171ed552f2600053c96fd07589de2fa2b3f9dca10168e5283361c8a" Oct 11 09:43:26 crc kubenswrapper[5016]: I1011 09:43:26.836041 5016 scope.go:117] "RemoveContainer" containerID="cb13d9b089d9b1136923ce2c80c8f8beb5c067bc3914e65abbd4fff1febacef5" Oct 11 09:43:26 crc kubenswrapper[5016]: I1011 09:43:26.908127 5016 scope.go:117] "RemoveContainer" containerID="06a25474e86db1007c5e095c5e3e7b9c558c134f967c8cb5cf7e758d842f4afa" Oct 11 09:43:26 crc kubenswrapper[5016]: E1011 09:43:26.914113 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06a25474e86db1007c5e095c5e3e7b9c558c134f967c8cb5cf7e758d842f4afa\": container with ID starting with 06a25474e86db1007c5e095c5e3e7b9c558c134f967c8cb5cf7e758d842f4afa not found: ID does not exist" containerID="06a25474e86db1007c5e095c5e3e7b9c558c134f967c8cb5cf7e758d842f4afa" Oct 11 09:43:26 crc kubenswrapper[5016]: I1011 09:43:26.914164 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06a25474e86db1007c5e095c5e3e7b9c558c134f967c8cb5cf7e758d842f4afa"} err="failed to get container status \"06a25474e86db1007c5e095c5e3e7b9c558c134f967c8cb5cf7e758d842f4afa\": rpc error: code = NotFound desc = could not find container \"06a25474e86db1007c5e095c5e3e7b9c558c134f967c8cb5cf7e758d842f4afa\": container with ID starting with 06a25474e86db1007c5e095c5e3e7b9c558c134f967c8cb5cf7e758d842f4afa not found: ID does not exist" Oct 11 09:43:26 crc kubenswrapper[5016]: I1011 09:43:26.914198 5016 scope.go:117] "RemoveContainer" containerID="2bc22ef34171ed552f2600053c96fd07589de2fa2b3f9dca10168e5283361c8a" Oct 11 09:43:26 crc kubenswrapper[5016]: E1011 09:43:26.915392 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bc22ef34171ed552f2600053c96fd07589de2fa2b3f9dca10168e5283361c8a\": container with ID starting with 2bc22ef34171ed552f2600053c96fd07589de2fa2b3f9dca10168e5283361c8a not found: ID does not exist" containerID="2bc22ef34171ed552f2600053c96fd07589de2fa2b3f9dca10168e5283361c8a" Oct 11 09:43:26 crc kubenswrapper[5016]: I1011 09:43:26.915423 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bc22ef34171ed552f2600053c96fd07589de2fa2b3f9dca10168e5283361c8a"} err="failed to get container status \"2bc22ef34171ed552f2600053c96fd07589de2fa2b3f9dca10168e5283361c8a\": rpc error: code = NotFound desc = could not find container \"2bc22ef34171ed552f2600053c96fd07589de2fa2b3f9dca10168e5283361c8a\": container with ID starting with 2bc22ef34171ed552f2600053c96fd07589de2fa2b3f9dca10168e5283361c8a not found: ID does not exist" Oct 11 09:43:26 crc kubenswrapper[5016]: I1011 09:43:26.915443 5016 scope.go:117] "RemoveContainer" containerID="cb13d9b089d9b1136923ce2c80c8f8beb5c067bc3914e65abbd4fff1febacef5" Oct 11 09:43:26 crc kubenswrapper[5016]: E1011 09:43:26.915987 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb13d9b089d9b1136923ce2c80c8f8beb5c067bc3914e65abbd4fff1febacef5\": container with ID starting with cb13d9b089d9b1136923ce2c80c8f8beb5c067bc3914e65abbd4fff1febacef5 not found: ID does not exist" containerID="cb13d9b089d9b1136923ce2c80c8f8beb5c067bc3914e65abbd4fff1febacef5" 
Oct 11 09:43:26 crc kubenswrapper[5016]: I1011 09:43:26.916072 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb13d9b089d9b1136923ce2c80c8f8beb5c067bc3914e65abbd4fff1febacef5"} err="failed to get container status \"cb13d9b089d9b1136923ce2c80c8f8beb5c067bc3914e65abbd4fff1febacef5\": rpc error: code = NotFound desc = could not find container \"cb13d9b089d9b1136923ce2c80c8f8beb5c067bc3914e65abbd4fff1febacef5\": container with ID starting with cb13d9b089d9b1136923ce2c80c8f8beb5c067bc3914e65abbd4fff1febacef5 not found: ID does not exist"
Oct 11 09:43:26 crc kubenswrapper[5016]: I1011 09:43:26.929784 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ea2f498-04ef-4fb2-9bbd-ea467c49ef66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ea2f498-04ef-4fb2-9bbd-ea467c49ef66" (UID: "2ea2f498-04ef-4fb2-9bbd-ea467c49ef66"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 09:43:26 crc kubenswrapper[5016]: I1011 09:43:26.946378 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ea2f498-04ef-4fb2-9bbd-ea467c49ef66-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 11 09:43:27 crc kubenswrapper[5016]: I1011 09:43:27.123158 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hjzvb"]
Oct 11 09:43:27 crc kubenswrapper[5016]: I1011 09:43:27.151403 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hjzvb"]
Oct 11 09:43:29 crc kubenswrapper[5016]: I1011 09:43:29.155520 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ea2f498-04ef-4fb2-9bbd-ea467c49ef66" path="/var/lib/kubelet/pods/2ea2f498-04ef-4fb2-9bbd-ea467c49ef66/volumes"
Oct 11 09:45:00 crc kubenswrapper[5016]: I1011 09:45:00.161804 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336265-5l765"]
Oct 11 09:45:00 crc kubenswrapper[5016]: E1011 09:45:00.163955 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea2f498-04ef-4fb2-9bbd-ea467c49ef66" containerName="extract-utilities"
Oct 11 09:45:00 crc kubenswrapper[5016]: I1011 09:45:00.164016 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea2f498-04ef-4fb2-9bbd-ea467c49ef66" containerName="extract-utilities"
Oct 11 09:45:00 crc kubenswrapper[5016]: E1011 09:45:00.164044 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e664472f-6f37-4ace-84ca-7999b97b0a2e" containerName="extract-utilities"
Oct 11 09:45:00 crc kubenswrapper[5016]: I1011 09:45:00.164055 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="e664472f-6f37-4ace-84ca-7999b97b0a2e" containerName="extract-utilities"
Oct 11 09:45:00 crc kubenswrapper[5016]: E1011 09:45:00.164075 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea2f498-04ef-4fb2-9bbd-ea467c49ef66" containerName="extract-content"
Oct 11 09:45:00 crc kubenswrapper[5016]: I1011 09:45:00.164083 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea2f498-04ef-4fb2-9bbd-ea467c49ef66" containerName="extract-content"
Oct 11 09:45:00 crc kubenswrapper[5016]: E1011 09:45:00.164103 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e664472f-6f37-4ace-84ca-7999b97b0a2e" containerName="extract-content"
Oct 11 09:45:00 crc kubenswrapper[5016]: I1011 09:45:00.164111 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="e664472f-6f37-4ace-84ca-7999b97b0a2e" containerName="extract-content"
Oct 11 09:45:00 crc kubenswrapper[5016]: E1011 09:45:00.164128 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e664472f-6f37-4ace-84ca-7999b97b0a2e" containerName="registry-server"
Oct 11 09:45:00 crc kubenswrapper[5016]: I1011 09:45:00.164136 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="e664472f-6f37-4ace-84ca-7999b97b0a2e" containerName="registry-server"
Oct 11 09:45:00 crc kubenswrapper[5016]: E1011 09:45:00.164156 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ea2f498-04ef-4fb2-9bbd-ea467c49ef66" containerName="registry-server"
Oct 11 09:45:00 crc kubenswrapper[5016]: I1011 09:45:00.164166 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ea2f498-04ef-4fb2-9bbd-ea467c49ef66" containerName="registry-server"
Oct 11 09:45:00 crc kubenswrapper[5016]: I1011 09:45:00.164399 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="e664472f-6f37-4ace-84ca-7999b97b0a2e" containerName="registry-server"
Oct 11 09:45:00 crc kubenswrapper[5016]: I1011 09:45:00.164427 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ea2f498-04ef-4fb2-9bbd-ea467c49ef66" containerName="registry-server"
Oct 11 09:45:00 crc kubenswrapper[5016]: I1011 09:45:00.165521 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336265-5l765"
Oct 11 09:45:00 crc kubenswrapper[5016]: I1011 09:45:00.168740 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Oct 11 09:45:00 crc kubenswrapper[5016]: I1011 09:45:00.168929 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Oct 11 09:45:00 crc kubenswrapper[5016]: I1011 09:45:00.172798 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336265-5l765"]
Oct 11 09:45:00 crc kubenswrapper[5016]: I1011 09:45:00.326124 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe88db2a-8975-488d-8c19-6b8cc266dfa6-config-volume\") pod \"collect-profiles-29336265-5l765\" (UID: \"fe88db2a-8975-488d-8c19-6b8cc266dfa6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336265-5l765"
Oct 11 09:45:00 crc kubenswrapper[5016]: I1011 09:45:00.327986 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe88db2a-8975-488d-8c19-6b8cc266dfa6-secret-volume\") pod \"collect-profiles-29336265-5l765\" (UID: \"fe88db2a-8975-488d-8c19-6b8cc266dfa6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336265-5l765"
Oct 11 09:45:00 crc kubenswrapper[5016]: I1011 09:45:00.328270 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wkxr\" (UniqueName: \"kubernetes.io/projected/fe88db2a-8975-488d-8c19-6b8cc266dfa6-kube-api-access-2wkxr\") pod \"collect-profiles-29336265-5l765\" (UID: \"fe88db2a-8975-488d-8c19-6b8cc266dfa6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336265-5l765"
Oct 11 09:45:00 crc kubenswrapper[5016]: I1011 09:45:00.433964 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe88db2a-8975-488d-8c19-6b8cc266dfa6-config-volume\") pod \"collect-profiles-29336265-5l765\" (UID: \"fe88db2a-8975-488d-8c19-6b8cc266dfa6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336265-5l765"
Oct 11 09:45:00 crc kubenswrapper[5016]: I1011 09:45:00.434129 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe88db2a-8975-488d-8c19-6b8cc266dfa6-secret-volume\") pod \"collect-profiles-29336265-5l765\" (UID: \"fe88db2a-8975-488d-8c19-6b8cc266dfa6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336265-5l765"
Oct 11 09:45:00 crc kubenswrapper[5016]: I1011 09:45:00.434223 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wkxr\" (UniqueName: \"kubernetes.io/projected/fe88db2a-8975-488d-8c19-6b8cc266dfa6-kube-api-access-2wkxr\") pod \"collect-profiles-29336265-5l765\" (UID: \"fe88db2a-8975-488d-8c19-6b8cc266dfa6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336265-5l765"
Oct 11 09:45:00 crc kubenswrapper[5016]: I1011 09:45:00.435243 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe88db2a-8975-488d-8c19-6b8cc266dfa6-config-volume\") pod \"collect-profiles-29336265-5l765\" (UID: \"fe88db2a-8975-488d-8c19-6b8cc266dfa6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336265-5l765"
Oct 11 09:45:00 crc kubenswrapper[5016]: I1011 09:45:00.447797 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe88db2a-8975-488d-8c19-6b8cc266dfa6-secret-volume\") pod \"collect-profiles-29336265-5l765\" (UID: \"fe88db2a-8975-488d-8c19-6b8cc266dfa6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336265-5l765"
Oct 11 09:45:00 crc kubenswrapper[5016]: I1011 09:45:00.455992 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wkxr\" (UniqueName: \"kubernetes.io/projected/fe88db2a-8975-488d-8c19-6b8cc266dfa6-kube-api-access-2wkxr\") pod \"collect-profiles-29336265-5l765\" (UID: \"fe88db2a-8975-488d-8c19-6b8cc266dfa6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336265-5l765"
Oct 11 09:45:00 crc kubenswrapper[5016]: I1011 09:45:00.534412 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336265-5l765"
Oct 11 09:45:01 crc kubenswrapper[5016]: I1011 09:45:01.000377 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336265-5l765"]
Oct 11 09:45:01 crc kubenswrapper[5016]: W1011 09:45:01.001450 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe88db2a_8975_488d_8c19_6b8cc266dfa6.slice/crio-b307550a7ff94b5b6e0d207bf8ed783b6d620f7da3916efe16715ca03b806a72 WatchSource:0}: Error finding container b307550a7ff94b5b6e0d207bf8ed783b6d620f7da3916efe16715ca03b806a72: Status 404 returned error can't find the container with id b307550a7ff94b5b6e0d207bf8ed783b6d620f7da3916efe16715ca03b806a72
Oct 11 09:45:01 crc kubenswrapper[5016]: I1011 09:45:01.846934 5016 generic.go:334] "Generic (PLEG): container finished" podID="fe88db2a-8975-488d-8c19-6b8cc266dfa6" containerID="27d2fd75c6a722acf83a06d8e6578004c0652e64f2daafa112c2fefd4eb03a8a" exitCode=0
Oct 11 09:45:01 crc kubenswrapper[5016]: I1011 09:45:01.847077 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336265-5l765" event={"ID":"fe88db2a-8975-488d-8c19-6b8cc266dfa6","Type":"ContainerDied","Data":"27d2fd75c6a722acf83a06d8e6578004c0652e64f2daafa112c2fefd4eb03a8a"}
Oct 11 09:45:01 crc kubenswrapper[5016]: I1011 09:45:01.847288 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336265-5l765" event={"ID":"fe88db2a-8975-488d-8c19-6b8cc266dfa6","Type":"ContainerStarted","Data":"b307550a7ff94b5b6e0d207bf8ed783b6d620f7da3916efe16715ca03b806a72"}
Oct 11 09:45:03 crc kubenswrapper[5016]: I1011 09:45:03.245366 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336265-5l765"
Oct 11 09:45:03 crc kubenswrapper[5016]: I1011 09:45:03.421906 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wkxr\" (UniqueName: \"kubernetes.io/projected/fe88db2a-8975-488d-8c19-6b8cc266dfa6-kube-api-access-2wkxr\") pod \"fe88db2a-8975-488d-8c19-6b8cc266dfa6\" (UID: \"fe88db2a-8975-488d-8c19-6b8cc266dfa6\") "
Oct 11 09:45:03 crc kubenswrapper[5016]: I1011 09:45:03.422124 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe88db2a-8975-488d-8c19-6b8cc266dfa6-secret-volume\") pod \"fe88db2a-8975-488d-8c19-6b8cc266dfa6\" (UID: \"fe88db2a-8975-488d-8c19-6b8cc266dfa6\") "
Oct 11 09:45:03 crc kubenswrapper[5016]: I1011 09:45:03.423157 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe88db2a-8975-488d-8c19-6b8cc266dfa6-config-volume\") pod \"fe88db2a-8975-488d-8c19-6b8cc266dfa6\" (UID: \"fe88db2a-8975-488d-8c19-6b8cc266dfa6\") "
Oct 11 09:45:03 crc kubenswrapper[5016]: I1011 09:45:03.423847 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe88db2a-8975-488d-8c19-6b8cc266dfa6-config-volume" (OuterVolumeSpecName: "config-volume") pod "fe88db2a-8975-488d-8c19-6b8cc266dfa6" (UID: "fe88db2a-8975-488d-8c19-6b8cc266dfa6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 09:45:03 crc kubenswrapper[5016]: I1011 09:45:03.424591 5016 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe88db2a-8975-488d-8c19-6b8cc266dfa6-config-volume\") on node \"crc\" DevicePath \"\""
Oct 11 09:45:03 crc kubenswrapper[5016]: I1011 09:45:03.436927 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe88db2a-8975-488d-8c19-6b8cc266dfa6-kube-api-access-2wkxr" (OuterVolumeSpecName: "kube-api-access-2wkxr") pod "fe88db2a-8975-488d-8c19-6b8cc266dfa6" (UID: "fe88db2a-8975-488d-8c19-6b8cc266dfa6"). InnerVolumeSpecName "kube-api-access-2wkxr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 09:45:03 crc kubenswrapper[5016]: I1011 09:45:03.436951 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe88db2a-8975-488d-8c19-6b8cc266dfa6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fe88db2a-8975-488d-8c19-6b8cc266dfa6" (UID: "fe88db2a-8975-488d-8c19-6b8cc266dfa6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 09:45:03 crc kubenswrapper[5016]: I1011 09:45:03.526555 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wkxr\" (UniqueName: \"kubernetes.io/projected/fe88db2a-8975-488d-8c19-6b8cc266dfa6-kube-api-access-2wkxr\") on node \"crc\" DevicePath \"\""
Oct 11 09:45:03 crc kubenswrapper[5016]: I1011 09:45:03.526600 5016 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe88db2a-8975-488d-8c19-6b8cc266dfa6-secret-volume\") on node \"crc\" DevicePath \"\""
Oct 11 09:45:03 crc kubenswrapper[5016]: I1011 09:45:03.877021 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336265-5l765" event={"ID":"fe88db2a-8975-488d-8c19-6b8cc266dfa6","Type":"ContainerDied","Data":"b307550a7ff94b5b6e0d207bf8ed783b6d620f7da3916efe16715ca03b806a72"}
Oct 11 09:45:03 crc kubenswrapper[5016]: I1011 09:45:03.877071 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b307550a7ff94b5b6e0d207bf8ed783b6d620f7da3916efe16715ca03b806a72"
Oct 11 09:45:03 crc kubenswrapper[5016]: I1011 09:45:03.877177 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336265-5l765"
Oct 11 09:45:04 crc kubenswrapper[5016]: I1011 09:45:04.336329 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336220-l2frk"]
Oct 11 09:45:04 crc kubenswrapper[5016]: I1011 09:45:04.344806 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336220-l2frk"]
Oct 11 09:45:05 crc kubenswrapper[5016]: I1011 09:45:05.174844 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dd8f950-c385-47f7-8162-cea724b383e9" path="/var/lib/kubelet/pods/6dd8f950-c385-47f7-8162-cea724b383e9/volumes"
Oct 11 09:45:07 crc kubenswrapper[5016]: I1011 09:45:07.123488 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 11 09:45:07 crc kubenswrapper[5016]: I1011 09:45:07.124180 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 11 09:45:37 crc kubenswrapper[5016]: I1011 09:45:37.123903 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 11 09:45:37 crc kubenswrapper[5016]: I1011 09:45:37.125058 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 11 09:45:44 crc kubenswrapper[5016]: I1011 09:45:44.770548 5016 scope.go:117] "RemoveContainer" containerID="6c9c4d616942010cdaf0a56c6d4dbe14518af8689ddfd18cf6cae54eff1d476b"
Oct 11 09:46:00 crc kubenswrapper[5016]: I1011 09:46:00.773036 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-npwpf"]
Oct 11 09:46:00 crc kubenswrapper[5016]: E1011 09:46:00.775219 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe88db2a-8975-488d-8c19-6b8cc266dfa6" containerName="collect-profiles"
Oct 11 09:46:00 crc kubenswrapper[5016]: I1011 09:46:00.775311 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe88db2a-8975-488d-8c19-6b8cc266dfa6" containerName="collect-profiles"
Oct 11 09:46:00 crc kubenswrapper[5016]: I1011 09:46:00.775595 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe88db2a-8975-488d-8c19-6b8cc266dfa6" containerName="collect-profiles"
Oct 11 09:46:00 crc kubenswrapper[5016]: I1011 09:46:00.777717 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npwpf"
Oct 11 09:46:00 crc kubenswrapper[5016]: I1011 09:46:00.803283 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-npwpf"]
Oct 11 09:46:00 crc kubenswrapper[5016]: I1011 09:46:00.911694 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76139cab-ef38-4c3b-8ead-b368fda2d41e-catalog-content\") pod \"redhat-marketplace-npwpf\" (UID: \"76139cab-ef38-4c3b-8ead-b368fda2d41e\") " pod="openshift-marketplace/redhat-marketplace-npwpf"
Oct 11 09:46:00 crc kubenswrapper[5016]: I1011 09:46:00.911841 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76139cab-ef38-4c3b-8ead-b368fda2d41e-utilities\") pod \"redhat-marketplace-npwpf\" (UID: \"76139cab-ef38-4c3b-8ead-b368fda2d41e\") " pod="openshift-marketplace/redhat-marketplace-npwpf"
Oct 11 09:46:00 crc kubenswrapper[5016]: I1011 09:46:00.911988 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqmnx\" (UniqueName: \"kubernetes.io/projected/76139cab-ef38-4c3b-8ead-b368fda2d41e-kube-api-access-sqmnx\") pod \"redhat-marketplace-npwpf\" (UID: \"76139cab-ef38-4c3b-8ead-b368fda2d41e\") " pod="openshift-marketplace/redhat-marketplace-npwpf"
Oct 11 09:46:01 crc kubenswrapper[5016]: I1011 09:46:01.015057 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76139cab-ef38-4c3b-8ead-b368fda2d41e-catalog-content\") pod \"redhat-marketplace-npwpf\" (UID: \"76139cab-ef38-4c3b-8ead-b368fda2d41e\") " pod="openshift-marketplace/redhat-marketplace-npwpf"
Oct 11 09:46:01 crc kubenswrapper[5016]: I1011 09:46:01.015137 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76139cab-ef38-4c3b-8ead-b368fda2d41e-utilities\") pod \"redhat-marketplace-npwpf\" (UID: \"76139cab-ef38-4c3b-8ead-b368fda2d41e\") " pod="openshift-marketplace/redhat-marketplace-npwpf"
Oct 11 09:46:01 crc kubenswrapper[5016]: I1011 09:46:01.015218 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqmnx\" (UniqueName: \"kubernetes.io/projected/76139cab-ef38-4c3b-8ead-b368fda2d41e-kube-api-access-sqmnx\") pod \"redhat-marketplace-npwpf\" (UID: \"76139cab-ef38-4c3b-8ead-b368fda2d41e\") " pod="openshift-marketplace/redhat-marketplace-npwpf"
Oct 11 09:46:01 crc kubenswrapper[5016]: I1011 09:46:01.016315 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76139cab-ef38-4c3b-8ead-b368fda2d41e-catalog-content\") pod \"redhat-marketplace-npwpf\" (UID: \"76139cab-ef38-4c3b-8ead-b368fda2d41e\") " pod="openshift-marketplace/redhat-marketplace-npwpf"
Oct 11 09:46:01 crc kubenswrapper[5016]: I1011 09:46:01.016352 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76139cab-ef38-4c3b-8ead-b368fda2d41e-utilities\") pod \"redhat-marketplace-npwpf\" (UID: \"76139cab-ef38-4c3b-8ead-b368fda2d41e\") " pod="openshift-marketplace/redhat-marketplace-npwpf"
Oct 11 09:46:01 crc kubenswrapper[5016]: I1011 09:46:01.042932 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqmnx\" (UniqueName: \"kubernetes.io/projected/76139cab-ef38-4c3b-8ead-b368fda2d41e-kube-api-access-sqmnx\") pod \"redhat-marketplace-npwpf\" (UID: \"76139cab-ef38-4c3b-8ead-b368fda2d41e\") " pod="openshift-marketplace/redhat-marketplace-npwpf"
Oct 11 09:46:01 crc kubenswrapper[5016]: I1011 09:46:01.108985 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npwpf"
Oct 11 09:46:01 crc kubenswrapper[5016]: I1011 09:46:01.719792 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-npwpf"]
Oct 11 09:46:02 crc kubenswrapper[5016]: I1011 09:46:02.640439 5016 generic.go:334] "Generic (PLEG): container finished" podID="76139cab-ef38-4c3b-8ead-b368fda2d41e" containerID="2d2016784ba65cb1c00df93f9a343182915b815498eb2e321dc20efc57f5fa93" exitCode=0
Oct 11 09:46:02 crc kubenswrapper[5016]: I1011 09:46:02.640541 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npwpf" event={"ID":"76139cab-ef38-4c3b-8ead-b368fda2d41e","Type":"ContainerDied","Data":"2d2016784ba65cb1c00df93f9a343182915b815498eb2e321dc20efc57f5fa93"}
Oct 11 09:46:02 crc kubenswrapper[5016]: I1011 09:46:02.640911 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npwpf" event={"ID":"76139cab-ef38-4c3b-8ead-b368fda2d41e","Type":"ContainerStarted","Data":"8137c45d0f858f38bc7373d10cb23c8d957281ae56b88df35400ba61c90c473f"}
Oct 11 09:46:03 crc kubenswrapper[5016]: I1011 09:46:03.655259 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npwpf" event={"ID":"76139cab-ef38-4c3b-8ead-b368fda2d41e","Type":"ContainerStarted","Data":"da0ccd0859ef8e414af8cd2ca69fe67167c8459c4008f9b225b6ccb63fe8c9d0"}
Oct 11 09:46:04 crc kubenswrapper[5016]: I1011 09:46:04.669867 5016 generic.go:334] "Generic (PLEG): container finished" podID="76139cab-ef38-4c3b-8ead-b368fda2d41e" containerID="da0ccd0859ef8e414af8cd2ca69fe67167c8459c4008f9b225b6ccb63fe8c9d0" exitCode=0
Oct 11 09:46:04 crc kubenswrapper[5016]: I1011 09:46:04.669944 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npwpf" event={"ID":"76139cab-ef38-4c3b-8ead-b368fda2d41e","Type":"ContainerDied","Data":"da0ccd0859ef8e414af8cd2ca69fe67167c8459c4008f9b225b6ccb63fe8c9d0"}
Oct 11 09:46:06 crc kubenswrapper[5016]: I1011 09:46:06.709441 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npwpf" event={"ID":"76139cab-ef38-4c3b-8ead-b368fda2d41e","Type":"ContainerStarted","Data":"538ec3a51aab91c60f55671f77bbde6be88056d489c50b56a69e91d9361cc90d"}
Oct 11 09:46:06 crc kubenswrapper[5016]: I1011 09:46:06.740356 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-npwpf" podStartSLOduration=3.914696525 podStartE2EDuration="6.740314574s" podCreationTimestamp="2025-10-11 09:46:00 +0000 UTC" firstStartedPulling="2025-10-11 09:46:02.647292567 +0000 UTC m=+7550.547748553" lastFinishedPulling="2025-10-11 09:46:05.472910636 +0000 UTC m=+7553.373366602" observedRunningTime="2025-10-11 09:46:06.725390067 +0000 UTC m=+7554.625846013" watchObservedRunningTime="2025-10-11 09:46:06.740314574 +0000 UTC m=+7554.640770520"
Oct 11 09:46:07 crc kubenswrapper[5016]: I1011 09:46:07.123999 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 11 09:46:07 crc kubenswrapper[5016]: I1011 09:46:07.124105 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 11 09:46:07 crc kubenswrapper[5016]: I1011 09:46:07.124178 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-49bvc"
Oct 11 09:46:07 crc kubenswrapper[5016]: I1011 09:46:07.125593 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f48a1030019a6b6a35bcdf3b180215d9d8e0d5b3e7e072fed5d760a39b504042"} pod="openshift-machine-config-operator/machine-config-daemon-49bvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 11 09:46:07 crc kubenswrapper[5016]: I1011 09:46:07.125737 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" containerID="cri-o://f48a1030019a6b6a35bcdf3b180215d9d8e0d5b3e7e072fed5d760a39b504042" gracePeriod=600
Oct 11 09:46:07 crc kubenswrapper[5016]: I1011 09:46:07.728592 5016 generic.go:334] "Generic (PLEG): container finished" podID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerID="f48a1030019a6b6a35bcdf3b180215d9d8e0d5b3e7e072fed5d760a39b504042" exitCode=0
Oct 11 09:46:07 crc kubenswrapper[5016]: I1011 09:46:07.729636 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerDied","Data":"f48a1030019a6b6a35bcdf3b180215d9d8e0d5b3e7e072fed5d760a39b504042"}
Oct 11 09:46:07 crc kubenswrapper[5016]: I1011 09:46:07.729974 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce"}
Oct 11 09:46:07 crc kubenswrapper[5016]: I1011 09:46:07.729997 5016 scope.go:117] "RemoveContainer" containerID="82804cf5928ba4fb08dc0d3df03801489476f3a9b6325cfc8efcb14fdfe88dae"
Oct 11 09:46:11 crc kubenswrapper[5016]: I1011 09:46:11.109275 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-npwpf"
Oct 11 09:46:11 crc kubenswrapper[5016]: I1011 09:46:11.110520 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-npwpf"
Oct 11 09:46:11 crc kubenswrapper[5016]: I1011 09:46:11.201809 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-npwpf"
Oct 11 09:46:11 crc kubenswrapper[5016]: I1011 09:46:11.851957 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-npwpf"
Oct 11 09:46:11 crc kubenswrapper[5016]:
I1011 09:46:11.910649 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-npwpf"] Oct 11 09:46:13 crc kubenswrapper[5016]: I1011 09:46:13.804536 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-npwpf" podUID="76139cab-ef38-4c3b-8ead-b368fda2d41e" containerName="registry-server" containerID="cri-o://538ec3a51aab91c60f55671f77bbde6be88056d489c50b56a69e91d9361cc90d" gracePeriod=2 Oct 11 09:46:14 crc kubenswrapper[5016]: I1011 09:46:14.372840 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npwpf" Oct 11 09:46:14 crc kubenswrapper[5016]: I1011 09:46:14.521554 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqmnx\" (UniqueName: \"kubernetes.io/projected/76139cab-ef38-4c3b-8ead-b368fda2d41e-kube-api-access-sqmnx\") pod \"76139cab-ef38-4c3b-8ead-b368fda2d41e\" (UID: \"76139cab-ef38-4c3b-8ead-b368fda2d41e\") " Oct 11 09:46:14 crc kubenswrapper[5016]: I1011 09:46:14.521924 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76139cab-ef38-4c3b-8ead-b368fda2d41e-catalog-content\") pod \"76139cab-ef38-4c3b-8ead-b368fda2d41e\" (UID: \"76139cab-ef38-4c3b-8ead-b368fda2d41e\") " Oct 11 09:46:14 crc kubenswrapper[5016]: I1011 09:46:14.523912 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76139cab-ef38-4c3b-8ead-b368fda2d41e-utilities\") pod \"76139cab-ef38-4c3b-8ead-b368fda2d41e\" (UID: \"76139cab-ef38-4c3b-8ead-b368fda2d41e\") " Oct 11 09:46:14 crc kubenswrapper[5016]: I1011 09:46:14.526010 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76139cab-ef38-4c3b-8ead-b368fda2d41e-utilities" (OuterVolumeSpecName: "utilities") pod "76139cab-ef38-4c3b-8ead-b368fda2d41e" (UID: "76139cab-ef38-4c3b-8ead-b368fda2d41e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:46:14 crc kubenswrapper[5016]: I1011 09:46:14.534023 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76139cab-ef38-4c3b-8ead-b368fda2d41e-kube-api-access-sqmnx" (OuterVolumeSpecName: "kube-api-access-sqmnx") pod "76139cab-ef38-4c3b-8ead-b368fda2d41e" (UID: "76139cab-ef38-4c3b-8ead-b368fda2d41e"). InnerVolumeSpecName "kube-api-access-sqmnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 09:46:14 crc kubenswrapper[5016]: I1011 09:46:14.547552 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76139cab-ef38-4c3b-8ead-b368fda2d41e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76139cab-ef38-4c3b-8ead-b368fda2d41e" (UID: "76139cab-ef38-4c3b-8ead-b368fda2d41e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:46:14 crc kubenswrapper[5016]: I1011 09:46:14.628512 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76139cab-ef38-4c3b-8ead-b368fda2d41e-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 09:46:14 crc kubenswrapper[5016]: I1011 09:46:14.628565 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76139cab-ef38-4c3b-8ead-b368fda2d41e-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 09:46:14 crc kubenswrapper[5016]: I1011 09:46:14.628581 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqmnx\" (UniqueName: \"kubernetes.io/projected/76139cab-ef38-4c3b-8ead-b368fda2d41e-kube-api-access-sqmnx\") on node \"crc\" DevicePath \"\"" Oct 11 09:46:14 crc kubenswrapper[5016]: I1011 09:46:14.824267 5016 generic.go:334] "Generic (PLEG): container finished" podID="76139cab-ef38-4c3b-8ead-b368fda2d41e" containerID="538ec3a51aab91c60f55671f77bbde6be88056d489c50b56a69e91d9361cc90d" exitCode=0 Oct 11 09:46:14 crc kubenswrapper[5016]: I1011 09:46:14.824336 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npwpf" event={"ID":"76139cab-ef38-4c3b-8ead-b368fda2d41e","Type":"ContainerDied","Data":"538ec3a51aab91c60f55671f77bbde6be88056d489c50b56a69e91d9361cc90d"} Oct 11 09:46:14 crc kubenswrapper[5016]: I1011 09:46:14.824382 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npwpf" event={"ID":"76139cab-ef38-4c3b-8ead-b368fda2d41e","Type":"ContainerDied","Data":"8137c45d0f858f38bc7373d10cb23c8d957281ae56b88df35400ba61c90c473f"} Oct 11 09:46:14 crc kubenswrapper[5016]: I1011 09:46:14.824414 5016 scope.go:117] "RemoveContainer" containerID="538ec3a51aab91c60f55671f77bbde6be88056d489c50b56a69e91d9361cc90d" Oct 11 09:46:14 crc kubenswrapper[5016]: I1011 09:46:14.824446 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npwpf" Oct 11 09:46:14 crc kubenswrapper[5016]: I1011 09:46:14.856451 5016 scope.go:117] "RemoveContainer" containerID="da0ccd0859ef8e414af8cd2ca69fe67167c8459c4008f9b225b6ccb63fe8c9d0" Oct 11 09:46:14 crc kubenswrapper[5016]: I1011 09:46:14.899157 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-npwpf"] Oct 11 09:46:14 crc kubenswrapper[5016]: I1011 09:46:14.910761 5016 scope.go:117] "RemoveContainer" containerID="2d2016784ba65cb1c00df93f9a343182915b815498eb2e321dc20efc57f5fa93" Oct 11 09:46:14 crc kubenswrapper[5016]: I1011 09:46:14.912076 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-npwpf"] Oct 11 09:46:14 crc kubenswrapper[5016]: I1011 09:46:14.961067 5016 scope.go:117] "RemoveContainer" containerID="538ec3a51aab91c60f55671f77bbde6be88056d489c50b56a69e91d9361cc90d" Oct 11 09:46:14 crc kubenswrapper[5016]: E1011 09:46:14.962142 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"538ec3a51aab91c60f55671f77bbde6be88056d489c50b56a69e91d9361cc90d\": container with ID starting with 538ec3a51aab91c60f55671f77bbde6be88056d489c50b56a69e91d9361cc90d not found: ID does not exist" containerID="538ec3a51aab91c60f55671f77bbde6be88056d489c50b56a69e91d9361cc90d" Oct 11 09:46:14 crc kubenswrapper[5016]: I1011 09:46:14.962250 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"538ec3a51aab91c60f55671f77bbde6be88056d489c50b56a69e91d9361cc90d"} err="failed to get container status \"538ec3a51aab91c60f55671f77bbde6be88056d489c50b56a69e91d9361cc90d\": rpc error: code = NotFound desc = could not find container \"538ec3a51aab91c60f55671f77bbde6be88056d489c50b56a69e91d9361cc90d\": container with ID starting with 538ec3a51aab91c60f55671f77bbde6be88056d489c50b56a69e91d9361cc90d not found: ID does not exist" Oct 11 09:46:14 crc kubenswrapper[5016]: I1011 09:46:14.962328 5016 scope.go:117] "RemoveContainer" containerID="da0ccd0859ef8e414af8cd2ca69fe67167c8459c4008f9b225b6ccb63fe8c9d0" Oct 11 09:46:14 crc kubenswrapper[5016]: E1011 09:46:14.963091 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da0ccd0859ef8e414af8cd2ca69fe67167c8459c4008f9b225b6ccb63fe8c9d0\": container with ID starting with da0ccd0859ef8e414af8cd2ca69fe67167c8459c4008f9b225b6ccb63fe8c9d0 not found: ID does not exist" containerID="da0ccd0859ef8e414af8cd2ca69fe67167c8459c4008f9b225b6ccb63fe8c9d0" Oct 11 09:46:14 crc kubenswrapper[5016]: I1011 09:46:14.963160 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da0ccd0859ef8e414af8cd2ca69fe67167c8459c4008f9b225b6ccb63fe8c9d0"} err="failed to get container status \"da0ccd0859ef8e414af8cd2ca69fe67167c8459c4008f9b225b6ccb63fe8c9d0\": rpc error: code = NotFound desc = could not find container \"da0ccd0859ef8e414af8cd2ca69fe67167c8459c4008f9b225b6ccb63fe8c9d0\": container with ID starting with da0ccd0859ef8e414af8cd2ca69fe67167c8459c4008f9b225b6ccb63fe8c9d0 not found: ID does not exist" Oct 11 09:46:14 crc kubenswrapper[5016]: I1011 09:46:14.963201 5016 scope.go:117] "RemoveContainer" containerID="2d2016784ba65cb1c00df93f9a343182915b815498eb2e321dc20efc57f5fa93" Oct 11 09:46:14 crc kubenswrapper[5016]: E1011 09:46:14.963886 5016 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2d2016784ba65cb1c00df93f9a343182915b815498eb2e321dc20efc57f5fa93\": container with ID starting with 2d2016784ba65cb1c00df93f9a343182915b815498eb2e321dc20efc57f5fa93 not found: ID does not exist" containerID="2d2016784ba65cb1c00df93f9a343182915b815498eb2e321dc20efc57f5fa93" Oct 11 09:46:14 crc kubenswrapper[5016]: I1011 09:46:14.963919 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d2016784ba65cb1c00df93f9a343182915b815498eb2e321dc20efc57f5fa93"} err="failed to get container status \"2d2016784ba65cb1c00df93f9a343182915b815498eb2e321dc20efc57f5fa93\": rpc error: code = NotFound desc = could not find container \"2d2016784ba65cb1c00df93f9a343182915b815498eb2e321dc20efc57f5fa93\": container with ID starting with 2d2016784ba65cb1c00df93f9a343182915b815498eb2e321dc20efc57f5fa93 not found: ID does not exist" Oct 11 09:46:15 crc kubenswrapper[5016]: I1011 09:46:15.183908 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76139cab-ef38-4c3b-8ead-b368fda2d41e" path="/var/lib/kubelet/pods/76139cab-ef38-4c3b-8ead-b368fda2d41e/volumes" Oct 11 09:46:32 crc kubenswrapper[5016]: I1011 09:46:32.056721 5016 generic.go:334] "Generic (PLEG): container finished" podID="19930010-7a7e-4c76-a81e-85e049ff1da4" containerID="07a0f3da5771f39f1842974593d044bc792163238e72657d8f45094d41ace8af" exitCode=1 Oct 11 09:46:32 crc kubenswrapper[5016]: I1011 09:46:32.056782 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"19930010-7a7e-4c76-a81e-85e049ff1da4","Type":"ContainerDied","Data":"07a0f3da5771f39f1842974593d044bc792163238e72657d8f45094d41ace8af"} Oct 11 09:46:33 crc kubenswrapper[5016]: I1011 09:46:33.909139 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Oct 11 09:46:33 crc kubenswrapper[5016]: I1011 09:46:33.986018 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s01-single-test"] Oct 11 09:46:33 crc kubenswrapper[5016]: E1011 09:46:33.986474 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76139cab-ef38-4c3b-8ead-b368fda2d41e" containerName="registry-server" Oct 11 09:46:33 crc kubenswrapper[5016]: I1011 09:46:33.986495 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="76139cab-ef38-4c3b-8ead-b368fda2d41e" containerName="registry-server" Oct 11 09:46:33 crc kubenswrapper[5016]: E1011 09:46:33.986522 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76139cab-ef38-4c3b-8ead-b368fda2d41e" containerName="extract-utilities" Oct 11 09:46:33 crc kubenswrapper[5016]: I1011 09:46:33.986530 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="76139cab-ef38-4c3b-8ead-b368fda2d41e" containerName="extract-utilities" Oct 11 09:46:33 crc kubenswrapper[5016]: E1011 09:46:33.986550 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76139cab-ef38-4c3b-8ead-b368fda2d41e" containerName="extract-content" Oct 11 09:46:33 crc kubenswrapper[5016]: I1011 09:46:33.986557 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="76139cab-ef38-4c3b-8ead-b368fda2d41e" containerName="extract-content" Oct 11 09:46:33 crc kubenswrapper[5016]: E1011 09:46:33.986568 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19930010-7a7e-4c76-a81e-85e049ff1da4" containerName="tempest-tests-tempest-tests-runner" Oct 11 09:46:33 crc kubenswrapper[5016]: I1011 09:46:33.986577 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="19930010-7a7e-4c76-a81e-85e049ff1da4" containerName="tempest-tests-tempest-tests-runner" Oct 11 09:46:33 crc kubenswrapper[5016]: I1011 09:46:33.986867 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="19930010-7a7e-4c76-a81e-85e049ff1da4" containerName="tempest-tests-tempest-tests-runner" Oct 11 09:46:33 crc kubenswrapper[5016]: I1011 09:46:33.986908 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="76139cab-ef38-4c3b-8ead-b368fda2d41e" containerName="registry-server" Oct 11 09:46:33 crc kubenswrapper[5016]: I1011 09:46:33.988141 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:33 crc kubenswrapper[5016]: I1011 09:46:33.991052 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s1" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.001149 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-test"] Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.001505 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s1" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.046525 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/19930010-7a7e-4c76-a81e-85e049ff1da4-test-operator-ephemeral-workdir\") pod \"19930010-7a7e-4c76-a81e-85e049ff1da4\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.046593 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"19930010-7a7e-4c76-a81e-85e049ff1da4\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.046637 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/19930010-7a7e-4c76-a81e-85e049ff1da4-openstack-config-secret\") pod \"19930010-7a7e-4c76-a81e-85e049ff1da4\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.046718 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19930010-7a7e-4c76-a81e-85e049ff1da4-config-data\") pod \"19930010-7a7e-4c76-a81e-85e049ff1da4\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.046778 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/19930010-7a7e-4c76-a81e-85e049ff1da4-ssh-key\") pod \"19930010-7a7e-4c76-a81e-85e049ff1da4\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.046826 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/19930010-7a7e-4c76-a81e-85e049ff1da4-test-operator-ephemeral-temporary\") pod \"19930010-7a7e-4c76-a81e-85e049ff1da4\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.046869 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/19930010-7a7e-4c76-a81e-85e049ff1da4-ca-certs\") pod \"19930010-7a7e-4c76-a81e-85e049ff1da4\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.046927 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqvf8\" (UniqueName: \"kubernetes.io/projected/19930010-7a7e-4c76-a81e-85e049ff1da4-kube-api-access-gqvf8\") pod \"19930010-7a7e-4c76-a81e-85e049ff1da4\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.046964 5016 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/19930010-7a7e-4c76-a81e-85e049ff1da4-openstack-config\") pod \"19930010-7a7e-4c76-a81e-85e049ff1da4\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.047066 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/19930010-7a7e-4c76-a81e-85e049ff1da4-ceph\") pod \"19930010-7a7e-4c76-a81e-85e049ff1da4\" (UID: \"19930010-7a7e-4c76-a81e-85e049ff1da4\") " Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.048443 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19930010-7a7e-4c76-a81e-85e049ff1da4-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "19930010-7a7e-4c76-a81e-85e049ff1da4" (UID: "19930010-7a7e-4c76-a81e-85e049ff1da4"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.048741 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19930010-7a7e-4c76-a81e-85e049ff1da4-config-data" (OuterVolumeSpecName: "config-data") pod "19930010-7a7e-4c76-a81e-85e049ff1da4" (UID: "19930010-7a7e-4c76-a81e-85e049ff1da4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.055287 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19930010-7a7e-4c76-a81e-85e049ff1da4-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "19930010-7a7e-4c76-a81e-85e049ff1da4" (UID: "19930010-7a7e-4c76-a81e-85e049ff1da4"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.055566 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19930010-7a7e-4c76-a81e-85e049ff1da4-kube-api-access-gqvf8" (OuterVolumeSpecName: "kube-api-access-gqvf8") pod "19930010-7a7e-4c76-a81e-85e049ff1da4" (UID: "19930010-7a7e-4c76-a81e-85e049ff1da4"). InnerVolumeSpecName "kube-api-access-gqvf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.055933 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19930010-7a7e-4c76-a81e-85e049ff1da4-ceph" (OuterVolumeSpecName: "ceph") pod "19930010-7a7e-4c76-a81e-85e049ff1da4" (UID: "19930010-7a7e-4c76-a81e-85e049ff1da4"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.057312 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "19930010-7a7e-4c76-a81e-85e049ff1da4" (UID: "19930010-7a7e-4c76-a81e-85e049ff1da4"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.085468 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"19930010-7a7e-4c76-a81e-85e049ff1da4","Type":"ContainerDied","Data":"d784512daa6d309669323858354c833f4d25ce4f8c096eff33a2693b6b37a175"} Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.085543 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d784512daa6d309669323858354c833f4d25ce4f8c096eff33a2693b6b37a175" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.085564 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.086206 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19930010-7a7e-4c76-a81e-85e049ff1da4-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "19930010-7a7e-4c76-a81e-85e049ff1da4" (UID: "19930010-7a7e-4c76-a81e-85e049ff1da4"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.086401 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19930010-7a7e-4c76-a81e-85e049ff1da4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "19930010-7a7e-4c76-a81e-85e049ff1da4" (UID: "19930010-7a7e-4c76-a81e-85e049ff1da4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.097608 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19930010-7a7e-4c76-a81e-85e049ff1da4-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "19930010-7a7e-4c76-a81e-85e049ff1da4" (UID: "19930010-7a7e-4c76-a81e-85e049ff1da4"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.128565 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19930010-7a7e-4c76-a81e-85e049ff1da4-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "19930010-7a7e-4c76-a81e-85e049ff1da4" (UID: "19930010-7a7e-4c76-a81e-85e049ff1da4"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.149203 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fea8725f-5064-485b-8c4a-7992b2800394-ca-certs\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.149286 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fea8725f-5064-485b-8c4a-7992b2800394-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.149375 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.149404 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fea8725f-5064-485b-8c4a-7992b2800394-ceph\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.149428 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fea8725f-5064-485b-8c4a-7992b2800394-config-data\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.149451 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fea8725f-5064-485b-8c4a-7992b2800394-openstack-config\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.149526 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgcz4\" (UniqueName: \"kubernetes.io/projected/fea8725f-5064-485b-8c4a-7992b2800394-kube-api-access-xgcz4\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.149562 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fea8725f-5064-485b-8c4a-7992b2800394-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.149608 5016 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fea8725f-5064-485b-8c4a-7992b2800394-ssh-key\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.149634 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fea8725f-5064-485b-8c4a-7992b2800394-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.149699 5016 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/19930010-7a7e-4c76-a81e-85e049ff1da4-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.149716 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19930010-7a7e-4c76-a81e-85e049ff1da4-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.149725 5016 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/19930010-7a7e-4c76-a81e-85e049ff1da4-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.149737 5016 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/19930010-7a7e-4c76-a81e-85e049ff1da4-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.149749 5016 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/19930010-7a7e-4c76-a81e-85e049ff1da4-ca-certs\") on node \"crc\" DevicePath \"\"" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.149885 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqvf8\" (UniqueName: \"kubernetes.io/projected/19930010-7a7e-4c76-a81e-85e049ff1da4-kube-api-access-gqvf8\") on node \"crc\" DevicePath \"\"" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.149918 5016 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/19930010-7a7e-4c76-a81e-85e049ff1da4-openstack-config\") on node \"crc\" DevicePath \"\"" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.149930 5016 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/19930010-7a7e-4c76-a81e-85e049ff1da4-ceph\") on node \"crc\" DevicePath \"\"" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.149944 5016 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/19930010-7a7e-4c76-a81e-85e049ff1da4-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.178052 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") 
" pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.251505 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fea8725f-5064-485b-8c4a-7992b2800394-ssh-key\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.251919 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fea8725f-5064-485b-8c4a-7992b2800394-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.252103 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fea8725f-5064-485b-8c4a-7992b2800394-ca-certs\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.252264 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fea8725f-5064-485b-8c4a-7992b2800394-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.252545 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fea8725f-5064-485b-8c4a-7992b2800394-ceph\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.253236 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fea8725f-5064-485b-8c4a-7992b2800394-config-data\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.252904 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fea8725f-5064-485b-8c4a-7992b2800394-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.252751 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fea8725f-5064-485b-8c4a-7992b2800394-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.253400 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" 
(UniqueName: \"kubernetes.io/configmap/fea8725f-5064-485b-8c4a-7992b2800394-openstack-config\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.253793 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgcz4\" (UniqueName: \"kubernetes.io/projected/fea8725f-5064-485b-8c4a-7992b2800394-kube-api-access-xgcz4\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.254294 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fea8725f-5064-485b-8c4a-7992b2800394-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.254315 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fea8725f-5064-485b-8c4a-7992b2800394-openstack-config\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.254539 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fea8725f-5064-485b-8c4a-7992b2800394-config-data\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.255317 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fea8725f-5064-485b-8c4a-7992b2800394-ssh-key\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.256311 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fea8725f-5064-485b-8c4a-7992b2800394-ca-certs\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.257365 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fea8725f-5064-485b-8c4a-7992b2800394-ceph\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.258450 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fea8725f-5064-485b-8c4a-7992b2800394-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.272885 5016 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xgcz4\" (UniqueName: \"kubernetes.io/projected/fea8725f-5064-485b-8c4a-7992b2800394-kube-api-access-xgcz4\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.305472 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:46:34 crc kubenswrapper[5016]: I1011 09:46:34.897073 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-test"] Oct 11 09:46:35 crc kubenswrapper[5016]: I1011 09:46:35.099953 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"fea8725f-5064-485b-8c4a-7992b2800394","Type":"ContainerStarted","Data":"dcbc5ec339add0b362f3f025d0c8b915ecaafd1678f32f6680a0cc3ff26106b9"} Oct 11 09:46:37 crc kubenswrapper[5016]: I1011 09:46:37.127864 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"fea8725f-5064-485b-8c4a-7992b2800394","Type":"ContainerStarted","Data":"3e8f72df019f2cebead275395fab94a22252d8c288bb265cf59707440be69e5b"} Oct 11 09:48:07 crc kubenswrapper[5016]: I1011 09:48:07.122106 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 09:48:07 crc kubenswrapper[5016]: I1011 09:48:07.122866 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 09:48:37 crc kubenswrapper[5016]: I1011 09:48:37.122673 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 09:48:37 crc kubenswrapper[5016]: I1011 09:48:37.123586 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 09:49:07 crc kubenswrapper[5016]: I1011 09:49:07.122696 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 09:49:07 crc kubenswrapper[5016]: I1011 09:49:07.124413 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Oct 11 09:49:07 crc kubenswrapper[5016]: I1011 09:49:07.124535 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 09:49:07 crc kubenswrapper[5016]: I1011 09:49:07.125643 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce"} pod="openshift-machine-config-operator/machine-config-daemon-49bvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 11 09:49:07 crc kubenswrapper[5016]: I1011 09:49:07.125834 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" containerID="cri-o://8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce" gracePeriod=600 Oct 11 09:49:07 crc kubenswrapper[5016]: E1011 09:49:07.262552 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:49:07 crc kubenswrapper[5016]: I1011 09:49:07.904823 5016 generic.go:334] "Generic (PLEG): container finished" podID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerID="8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce" exitCode=0 Oct 11 09:49:07 crc kubenswrapper[5016]: I1011 09:49:07.904922 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerDied","Data":"8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce"} Oct 11 09:49:07 crc kubenswrapper[5016]: I1011 09:49:07.905341 5016 scope.go:117] "RemoveContainer" containerID="f48a1030019a6b6a35bcdf3b180215d9d8e0d5b3e7e072fed5d760a39b504042" Oct 11 09:49:07 crc kubenswrapper[5016]: I1011 09:49:07.906724 5016 scope.go:117] "RemoveContainer" containerID="8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce" Oct 11 09:49:07 crc kubenswrapper[5016]: E1011 09:49:07.907124 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:49:07 crc kubenswrapper[5016]: I1011 09:49:07.942850 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s01-single-test" podStartSLOduration=154.942826018 podStartE2EDuration="2m34.942826018s" podCreationTimestamp="2025-10-11 09:46:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 09:46:37.161198059 +0000 UTC m=+7585.061654025" watchObservedRunningTime="2025-10-11 09:49:07.942826018 +0000 UTC 
m=+7735.843281974" Oct 11 09:49:18 crc kubenswrapper[5016]: I1011 09:49:18.134200 5016 scope.go:117] "RemoveContainer" containerID="8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce" Oct 11 09:49:18 crc kubenswrapper[5016]: E1011 09:49:18.135335 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:49:29 crc kubenswrapper[5016]: I1011 09:49:29.134135 5016 scope.go:117] "RemoveContainer" containerID="8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce" Oct 11 09:49:29 crc kubenswrapper[5016]: E1011 09:49:29.135448 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:49:41 crc kubenswrapper[5016]: I1011 09:49:41.133968 5016 scope.go:117] "RemoveContainer" containerID="8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce" Oct 11 09:49:41 crc kubenswrapper[5016]: E1011 09:49:41.134680 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:49:52 crc kubenswrapper[5016]: I1011 09:49:52.133546 5016 scope.go:117] "RemoveContainer" containerID="8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce" Oct 11 09:49:52 crc kubenswrapper[5016]: E1011 09:49:52.135644 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:50:04 crc kubenswrapper[5016]: I1011 09:50:04.133804 5016 scope.go:117] "RemoveContainer" containerID="8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce" Oct 11 09:50:04 crc kubenswrapper[5016]: E1011 09:50:04.135889 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:50:15 crc kubenswrapper[5016]: I1011 09:50:15.134794 5016 scope.go:117] "RemoveContainer" 
containerID="8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce" Oct 11 09:50:15 crc kubenswrapper[5016]: E1011 09:50:15.138146 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:50:29 crc kubenswrapper[5016]: I1011 09:50:29.133756 5016 scope.go:117] "RemoveContainer" containerID="8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce" Oct 11 09:50:29 crc kubenswrapper[5016]: E1011 09:50:29.134808 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:50:43 crc kubenswrapper[5016]: I1011 09:50:43.142986 5016 scope.go:117] "RemoveContainer" containerID="8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce" Oct 11 09:50:43 crc kubenswrapper[5016]: E1011 09:50:43.143971 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:50:58 crc kubenswrapper[5016]: I1011 09:50:58.134346 5016 scope.go:117] "RemoveContainer" containerID="8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce" Oct 11 09:50:58 crc kubenswrapper[5016]: E1011 09:50:58.135428 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:50:58 crc kubenswrapper[5016]: I1011 09:50:58.185082 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9mzvs"] Oct 11 09:50:58 crc kubenswrapper[5016]: I1011 09:50:58.187723 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9mzvs" Oct 11 09:50:58 crc kubenswrapper[5016]: I1011 09:50:58.206684 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9mzvs"] Oct 11 09:50:58 crc kubenswrapper[5016]: I1011 09:50:58.336590 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c548788e-a0de-4113-b361-a80f1e34c528-catalog-content\") pod \"redhat-operators-9mzvs\" (UID: \"c548788e-a0de-4113-b361-a80f1e34c528\") " pod="openshift-marketplace/redhat-operators-9mzvs" Oct 11 09:50:58 crc kubenswrapper[5016]: I1011 09:50:58.337293 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l8kn\" (UniqueName: \"kubernetes.io/projected/c548788e-a0de-4113-b361-a80f1e34c528-kube-api-access-9l8kn\") pod \"redhat-operators-9mzvs\" (UID: \"c548788e-a0de-4113-b361-a80f1e34c528\") " pod="openshift-marketplace/redhat-operators-9mzvs" Oct 11 09:50:58 crc kubenswrapper[5016]: I1011 09:50:58.337445 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c548788e-a0de-4113-b361-a80f1e34c528-utilities\") pod \"redhat-operators-9mzvs\" (UID: \"c548788e-a0de-4113-b361-a80f1e34c528\") " pod="openshift-marketplace/redhat-operators-9mzvs" Oct 11 09:50:58 crc kubenswrapper[5016]: I1011 09:50:58.438743 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l8kn\" (UniqueName: \"kubernetes.io/projected/c548788e-a0de-4113-b361-a80f1e34c528-kube-api-access-9l8kn\") pod \"redhat-operators-9mzvs\" (UID: \"c548788e-a0de-4113-b361-a80f1e34c528\") " pod="openshift-marketplace/redhat-operators-9mzvs" Oct 11 09:50:58 crc kubenswrapper[5016]: I1011 09:50:58.438842 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c548788e-a0de-4113-b361-a80f1e34c528-utilities\") pod \"redhat-operators-9mzvs\" (UID: \"c548788e-a0de-4113-b361-a80f1e34c528\") " pod="openshift-marketplace/redhat-operators-9mzvs" Oct 11 09:50:58 crc kubenswrapper[5016]: I1011 09:50:58.438943 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c548788e-a0de-4113-b361-a80f1e34c528-catalog-content\") pod \"redhat-operators-9mzvs\" (UID: \"c548788e-a0de-4113-b361-a80f1e34c528\") " pod="openshift-marketplace/redhat-operators-9mzvs" Oct 11 09:50:58 crc kubenswrapper[5016]: I1011 09:50:58.439498 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c548788e-a0de-4113-b361-a80f1e34c528-catalog-content\") pod \"redhat-operators-9mzvs\" (UID: \"c548788e-a0de-4113-b361-a80f1e34c528\") " pod="openshift-marketplace/redhat-operators-9mzvs" Oct 11 09:50:58 crc kubenswrapper[5016]: I1011 09:50:58.439646 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c548788e-a0de-4113-b361-a80f1e34c528-utilities\") pod \"redhat-operators-9mzvs\" (UID: \"c548788e-a0de-4113-b361-a80f1e34c528\") " pod="openshift-marketplace/redhat-operators-9mzvs" Oct 11 09:50:58 crc kubenswrapper[5016]: I1011 09:50:58.467976 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9l8kn\" (UniqueName: \"kubernetes.io/projected/c548788e-a0de-4113-b361-a80f1e34c528-kube-api-access-9l8kn\") pod \"redhat-operators-9mzvs\" (UID: \"c548788e-a0de-4113-b361-a80f1e34c528\") " pod="openshift-marketplace/redhat-operators-9mzvs" Oct 11 09:50:58 crc kubenswrapper[5016]: I1011 09:50:58.546238 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9mzvs" Oct 11 09:50:59 crc kubenswrapper[5016]: I1011 09:50:59.074346 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9mzvs"] Oct 11 09:50:59 crc kubenswrapper[5016]: I1011 09:50:59.104823 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9mzvs" event={"ID":"c548788e-a0de-4113-b361-a80f1e34c528","Type":"ContainerStarted","Data":"d190edbde7bd3c6af7db7f829f62932407889d2d4edc48c0ff3e693a1aa61f44"} Oct 11 09:51:00 crc kubenswrapper[5016]: I1011 09:51:00.115194 5016 generic.go:334] "Generic (PLEG): container finished" podID="c548788e-a0de-4113-b361-a80f1e34c528" containerID="bc560cc59a7a01a5223411200ba0b7ac2a6e8823c06f43a530e143a19eed481f" exitCode=0 Oct 11 09:51:00 crc kubenswrapper[5016]: I1011 09:51:00.115286 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9mzvs" event={"ID":"c548788e-a0de-4113-b361-a80f1e34c528","Type":"ContainerDied","Data":"bc560cc59a7a01a5223411200ba0b7ac2a6e8823c06f43a530e143a19eed481f"} Oct 11 09:51:00 crc kubenswrapper[5016]: I1011 09:51:00.117994 5016 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 09:51:01 crc kubenswrapper[5016]: I1011 09:51:01.130732 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9mzvs" event={"ID":"c548788e-a0de-4113-b361-a80f1e34c528","Type":"ContainerStarted","Data":"b9e1a0d48386da3006fc61971a3a22bcbe375152d1b0ca9adc705252dcad330d"} Oct 11 09:51:02 crc kubenswrapper[5016]: I1011 09:51:02.143571 5016 generic.go:334] "Generic (PLEG): container finished" podID="c548788e-a0de-4113-b361-a80f1e34c528" containerID="b9e1a0d48386da3006fc61971a3a22bcbe375152d1b0ca9adc705252dcad330d" exitCode=0 Oct 11 09:51:02 crc kubenswrapper[5016]: I1011 09:51:02.143647 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9mzvs" event={"ID":"c548788e-a0de-4113-b361-a80f1e34c528","Type":"ContainerDied","Data":"b9e1a0d48386da3006fc61971a3a22bcbe375152d1b0ca9adc705252dcad330d"} Oct 11 09:51:03 crc kubenswrapper[5016]: I1011 09:51:03.155345 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9mzvs" event={"ID":"c548788e-a0de-4113-b361-a80f1e34c528","Type":"ContainerStarted","Data":"3fb6d7577ab820bf5907f056a0c4f33062ff162c2520661f333e29009a3fac70"} Oct 11 09:51:03 crc kubenswrapper[5016]: I1011 09:51:03.183309 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9mzvs" podStartSLOduration=2.649354015 podStartE2EDuration="5.183271179s" podCreationTimestamp="2025-10-11 09:50:58 +0000 UTC" firstStartedPulling="2025-10-11 09:51:00.117723027 +0000 UTC m=+7848.018178973" lastFinishedPulling="2025-10-11 09:51:02.651640201 +0000 UTC m=+7850.552096137" observedRunningTime="2025-10-11 09:51:03.172034011 +0000 UTC m=+7851.072489967" watchObservedRunningTime="2025-10-11 09:51:03.183271179 +0000 UTC m=+7851.083727195" Oct 11 09:51:08 crc 
Oct 11 09:51:08 crc kubenswrapper[5016]: I1011 09:51:08.547328 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9mzvs"
Oct 11 09:51:08 crc kubenswrapper[5016]: I1011 09:51:08.547808 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9mzvs"
Oct 11 09:51:08 crc kubenswrapper[5016]: I1011 09:51:08.631381 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9mzvs"
Oct 11 09:51:09 crc kubenswrapper[5016]: I1011 09:51:09.134512 5016 scope.go:117] "RemoveContainer" containerID="8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce"
Oct 11 09:51:09 crc kubenswrapper[5016]: E1011 09:51:09.134897 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff"
Oct 11 09:51:09 crc kubenswrapper[5016]: I1011 09:51:09.294007 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9mzvs"
Oct 11 09:51:09 crc kubenswrapper[5016]: I1011 09:51:09.361166 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9mzvs"]
Oct 11 09:51:11 crc kubenswrapper[5016]: I1011 09:51:11.243328 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9mzvs" podUID="c548788e-a0de-4113-b361-a80f1e34c528" containerName="registry-server" containerID="cri-o://3fb6d7577ab820bf5907f056a0c4f33062ff162c2520661f333e29009a3fac70" gracePeriod=2
Oct 11 09:51:12 crc kubenswrapper[5016]: I1011 09:51:12.260627 5016 generic.go:334] "Generic (PLEG): container finished" podID="c548788e-a0de-4113-b361-a80f1e34c528" containerID="3fb6d7577ab820bf5907f056a0c4f33062ff162c2520661f333e29009a3fac70" exitCode=0
Oct 11 09:51:12 crc kubenswrapper[5016]: I1011 09:51:12.260734 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9mzvs" event={"ID":"c548788e-a0de-4113-b361-a80f1e34c528","Type":"ContainerDied","Data":"3fb6d7577ab820bf5907f056a0c4f33062ff162c2520661f333e29009a3fac70"}
Oct 11 09:51:12 crc kubenswrapper[5016]: I1011 09:51:12.881274 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9mzvs"
Oct 11 09:51:13 crc kubenswrapper[5016]: I1011 09:51:13.042913 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c548788e-a0de-4113-b361-a80f1e34c528-catalog-content\") pod \"c548788e-a0de-4113-b361-a80f1e34c528\" (UID: \"c548788e-a0de-4113-b361-a80f1e34c528\") "
Oct 11 09:51:13 crc kubenswrapper[5016]: I1011 09:51:13.043239 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l8kn\" (UniqueName: \"kubernetes.io/projected/c548788e-a0de-4113-b361-a80f1e34c528-kube-api-access-9l8kn\") pod \"c548788e-a0de-4113-b361-a80f1e34c528\" (UID: \"c548788e-a0de-4113-b361-a80f1e34c528\") "
Oct 11 09:51:13 crc kubenswrapper[5016]: I1011 09:51:13.043338 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c548788e-a0de-4113-b361-a80f1e34c528-utilities\") pod \"c548788e-a0de-4113-b361-a80f1e34c528\" (UID: \"c548788e-a0de-4113-b361-a80f1e34c528\") "
Oct 11 09:51:13 crc kubenswrapper[5016]: I1011 09:51:13.044180 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c548788e-a0de-4113-b361-a80f1e34c528-utilities" (OuterVolumeSpecName: "utilities") pod "c548788e-a0de-4113-b361-a80f1e34c528" (UID: "c548788e-a0de-4113-b361-a80f1e34c528"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 09:51:13 crc kubenswrapper[5016]: I1011 09:51:13.059605 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c548788e-a0de-4113-b361-a80f1e34c528-kube-api-access-9l8kn" (OuterVolumeSpecName: "kube-api-access-9l8kn") pod "c548788e-a0de-4113-b361-a80f1e34c528" (UID: "c548788e-a0de-4113-b361-a80f1e34c528"). InnerVolumeSpecName "kube-api-access-9l8kn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 09:51:13 crc kubenswrapper[5016]: I1011 09:51:13.146181 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l8kn\" (UniqueName: \"kubernetes.io/projected/c548788e-a0de-4113-b361-a80f1e34c528-kube-api-access-9l8kn\") on node \"crc\" DevicePath \"\""
Oct 11 09:51:13 crc kubenswrapper[5016]: I1011 09:51:13.146229 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c548788e-a0de-4113-b361-a80f1e34c528-utilities\") on node \"crc\" DevicePath \"\""
Oct 11 09:51:13 crc kubenswrapper[5016]: I1011 09:51:13.153460 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c548788e-a0de-4113-b361-a80f1e34c528-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c548788e-a0de-4113-b361-a80f1e34c528" (UID: "c548788e-a0de-4113-b361-a80f1e34c528"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 09:51:13 crc kubenswrapper[5016]: I1011 09:51:13.248479 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c548788e-a0de-4113-b361-a80f1e34c528-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 11 09:51:13 crc kubenswrapper[5016]: I1011 09:51:13.275052 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9mzvs" event={"ID":"c548788e-a0de-4113-b361-a80f1e34c528","Type":"ContainerDied","Data":"d190edbde7bd3c6af7db7f829f62932407889d2d4edc48c0ff3e693a1aa61f44"}
Oct 11 09:51:13 crc kubenswrapper[5016]: I1011 09:51:13.276573 5016 scope.go:117] "RemoveContainer" containerID="3fb6d7577ab820bf5907f056a0c4f33062ff162c2520661f333e29009a3fac70"
Oct 11 09:51:13 crc kubenswrapper[5016]: I1011 09:51:13.275335 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9mzvs"
Oct 11 09:51:13 crc kubenswrapper[5016]: I1011 09:51:13.339496 5016 scope.go:117] "RemoveContainer" containerID="b9e1a0d48386da3006fc61971a3a22bcbe375152d1b0ca9adc705252dcad330d"
Oct 11 09:51:13 crc kubenswrapper[5016]: I1011 09:51:13.352276 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9mzvs"]
Oct 11 09:51:13 crc kubenswrapper[5016]: I1011 09:51:13.369622 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9mzvs"]
Oct 11 09:51:13 crc kubenswrapper[5016]: I1011 09:51:13.380885 5016 scope.go:117] "RemoveContainer" containerID="bc560cc59a7a01a5223411200ba0b7ac2a6e8823c06f43a530e143a19eed481f"
Oct 11 09:51:15 crc kubenswrapper[5016]: I1011 09:51:15.147020 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c548788e-a0de-4113-b361-a80f1e34c528" path="/var/lib/kubelet/pods/c548788e-a0de-4113-b361-a80f1e34c528/volumes"
Oct 11 09:51:15 crc kubenswrapper[5016]: I1011 09:51:15.301644 5016 generic.go:334] "Generic (PLEG): container finished" podID="fea8725f-5064-485b-8c4a-7992b2800394" containerID="3e8f72df019f2cebead275395fab94a22252d8c288bb265cf59707440be69e5b" exitCode=0
Oct 11 09:51:15 crc kubenswrapper[5016]: I1011 09:51:15.301720 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"fea8725f-5064-485b-8c4a-7992b2800394","Type":"ContainerDied","Data":"3e8f72df019f2cebead275395fab94a22252d8c288bb265cf59707440be69e5b"}
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:51:16 crc kubenswrapper[5016]: I1011 09:51:16.944127 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"fea8725f-5064-485b-8c4a-7992b2800394\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " Oct 11 09:51:16 crc kubenswrapper[5016]: I1011 09:51:16.944319 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fea8725f-5064-485b-8c4a-7992b2800394-openstack-config\") pod \"fea8725f-5064-485b-8c4a-7992b2800394\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " Oct 11 09:51:16 crc kubenswrapper[5016]: I1011 09:51:16.944386 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fea8725f-5064-485b-8c4a-7992b2800394-test-operator-ephemeral-workdir\") pod \"fea8725f-5064-485b-8c4a-7992b2800394\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " Oct 11 09:51:16 crc kubenswrapper[5016]: I1011 09:51:16.944453 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fea8725f-5064-485b-8c4a-7992b2800394-test-operator-ephemeral-temporary\") pod \"fea8725f-5064-485b-8c4a-7992b2800394\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " Oct 11 09:51:16 crc kubenswrapper[5016]: I1011 09:51:16.944716 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fea8725f-5064-485b-8c4a-7992b2800394-config-data\") pod \"fea8725f-5064-485b-8c4a-7992b2800394\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " Oct 11 09:51:16 crc kubenswrapper[5016]: I1011 09:51:16.944766 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fea8725f-5064-485b-8c4a-7992b2800394-ssh-key\") pod \"fea8725f-5064-485b-8c4a-7992b2800394\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " Oct 11 09:51:16 crc kubenswrapper[5016]: I1011 09:51:16.944827 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fea8725f-5064-485b-8c4a-7992b2800394-ca-certs\") pod \"fea8725f-5064-485b-8c4a-7992b2800394\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " Oct 11 09:51:16 crc kubenswrapper[5016]: I1011 09:51:16.944885 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fea8725f-5064-485b-8c4a-7992b2800394-openstack-config-secret\") pod \"fea8725f-5064-485b-8c4a-7992b2800394\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " Oct 11 09:51:16 crc kubenswrapper[5016]: I1011 09:51:16.945920 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fea8725f-5064-485b-8c4a-7992b2800394-config-data" (OuterVolumeSpecName: "config-data") pod "fea8725f-5064-485b-8c4a-7992b2800394" (UID: "fea8725f-5064-485b-8c4a-7992b2800394"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 09:51:16 crc kubenswrapper[5016]: I1011 09:51:16.946048 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgcz4\" (UniqueName: \"kubernetes.io/projected/fea8725f-5064-485b-8c4a-7992b2800394-kube-api-access-xgcz4\") pod \"fea8725f-5064-485b-8c4a-7992b2800394\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " Oct 11 09:51:16 crc kubenswrapper[5016]: I1011 09:51:16.946121 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fea8725f-5064-485b-8c4a-7992b2800394-ceph\") pod \"fea8725f-5064-485b-8c4a-7992b2800394\" (UID: \"fea8725f-5064-485b-8c4a-7992b2800394\") " Oct 11 09:51:16 crc kubenswrapper[5016]: I1011 09:51:16.946804 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fea8725f-5064-485b-8c4a-7992b2800394-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "fea8725f-5064-485b-8c4a-7992b2800394" (UID: "fea8725f-5064-485b-8c4a-7992b2800394"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:51:16 crc kubenswrapper[5016]: I1011 09:51:16.947090 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fea8725f-5064-485b-8c4a-7992b2800394-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 09:51:16 crc kubenswrapper[5016]: I1011 09:51:16.947118 5016 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fea8725f-5064-485b-8c4a-7992b2800394-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Oct 11 09:51:16 crc kubenswrapper[5016]: I1011 09:51:16.952834 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fea8725f-5064-485b-8c4a-7992b2800394-ceph" (OuterVolumeSpecName: "ceph") pod "fea8725f-5064-485b-8c4a-7992b2800394" (UID: "fea8725f-5064-485b-8c4a-7992b2800394"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 09:51:16 crc kubenswrapper[5016]: I1011 09:51:16.953096 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fea8725f-5064-485b-8c4a-7992b2800394-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "fea8725f-5064-485b-8c4a-7992b2800394" (UID: "fea8725f-5064-485b-8c4a-7992b2800394"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:51:16 crc kubenswrapper[5016]: I1011 09:51:16.953873 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fea8725f-5064-485b-8c4a-7992b2800394-kube-api-access-xgcz4" (OuterVolumeSpecName: "kube-api-access-xgcz4") pod "fea8725f-5064-485b-8c4a-7992b2800394" (UID: "fea8725f-5064-485b-8c4a-7992b2800394"). InnerVolumeSpecName "kube-api-access-xgcz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 09:51:16 crc kubenswrapper[5016]: I1011 09:51:16.956478 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "fea8725f-5064-485b-8c4a-7992b2800394" (UID: "fea8725f-5064-485b-8c4a-7992b2800394"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 11 09:51:16 crc kubenswrapper[5016]: I1011 09:51:16.983965 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fea8725f-5064-485b-8c4a-7992b2800394-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fea8725f-5064-485b-8c4a-7992b2800394" (UID: "fea8725f-5064-485b-8c4a-7992b2800394"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 09:51:16 crc kubenswrapper[5016]: I1011 09:51:16.990631 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fea8725f-5064-485b-8c4a-7992b2800394-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "fea8725f-5064-485b-8c4a-7992b2800394" (UID: "fea8725f-5064-485b-8c4a-7992b2800394"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 09:51:17 crc kubenswrapper[5016]: I1011 09:51:17.018944 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fea8725f-5064-485b-8c4a-7992b2800394-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "fea8725f-5064-485b-8c4a-7992b2800394" (UID: "fea8725f-5064-485b-8c4a-7992b2800394"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 09:51:17 crc kubenswrapper[5016]: I1011 09:51:17.030094 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fea8725f-5064-485b-8c4a-7992b2800394-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "fea8725f-5064-485b-8c4a-7992b2800394" (UID: "fea8725f-5064-485b-8c4a-7992b2800394"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 09:51:17 crc kubenswrapper[5016]: I1011 09:51:17.049321 5016 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fea8725f-5064-485b-8c4a-7992b2800394-ca-certs\") on node \"crc\" DevicePath \"\"" Oct 11 09:51:17 crc kubenswrapper[5016]: I1011 09:51:17.049384 5016 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fea8725f-5064-485b-8c4a-7992b2800394-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 11 09:51:17 crc kubenswrapper[5016]: I1011 09:51:17.049403 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgcz4\" (UniqueName: \"kubernetes.io/projected/fea8725f-5064-485b-8c4a-7992b2800394-kube-api-access-xgcz4\") on node \"crc\" DevicePath \"\"" Oct 11 09:51:17 crc kubenswrapper[5016]: I1011 09:51:17.049423 5016 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fea8725f-5064-485b-8c4a-7992b2800394-ceph\") on node \"crc\" DevicePath \"\"" Oct 11 09:51:17 crc kubenswrapper[5016]: I1011 09:51:17.049487 5016 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Oct 11 09:51:17 crc kubenswrapper[5016]: I1011 09:51:17.049506 5016 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fea8725f-5064-485b-8c4a-7992b2800394-openstack-config\") on node \"crc\" DevicePath \"\"" Oct 11 09:51:17 crc kubenswrapper[5016]: I1011 09:51:17.049523 5016 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" 
(UniqueName: \"kubernetes.io/empty-dir/fea8725f-5064-485b-8c4a-7992b2800394-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Oct 11 09:51:17 crc kubenswrapper[5016]: I1011 09:51:17.049536 5016 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fea8725f-5064-485b-8c4a-7992b2800394-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 11 09:51:17 crc kubenswrapper[5016]: I1011 09:51:17.082469 5016 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Oct 11 09:51:17 crc kubenswrapper[5016]: I1011 09:51:17.151422 5016 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Oct 11 09:51:17 crc kubenswrapper[5016]: I1011 09:51:17.328522 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"fea8725f-5064-485b-8c4a-7992b2800394","Type":"ContainerDied","Data":"dcbc5ec339add0b362f3f025d0c8b915ecaafd1678f32f6680a0cc3ff26106b9"} Oct 11 09:51:17 crc kubenswrapper[5016]: I1011 09:51:17.328597 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcbc5ec339add0b362f3f025d0c8b915ecaafd1678f32f6680a0cc3ff26106b9" Oct 11 09:51:17 crc kubenswrapper[5016]: I1011 09:51:17.328701 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Oct 11 09:51:24 crc kubenswrapper[5016]: I1011 09:51:24.134235 5016 scope.go:117] "RemoveContainer" containerID="8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce" Oct 11 09:51:24 crc kubenswrapper[5016]: E1011 09:51:24.135323 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:51:26 crc kubenswrapper[5016]: I1011 09:51:26.037342 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 11 09:51:26 crc kubenswrapper[5016]: E1011 09:51:26.038104 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c548788e-a0de-4113-b361-a80f1e34c528" containerName="registry-server" Oct 11 09:51:26 crc kubenswrapper[5016]: I1011 09:51:26.038117 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="c548788e-a0de-4113-b361-a80f1e34c528" containerName="registry-server" Oct 11 09:51:26 crc kubenswrapper[5016]: E1011 09:51:26.038136 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c548788e-a0de-4113-b361-a80f1e34c528" containerName="extract-content" Oct 11 09:51:26 crc kubenswrapper[5016]: I1011 09:51:26.038142 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="c548788e-a0de-4113-b361-a80f1e34c528" containerName="extract-content" Oct 11 09:51:26 crc kubenswrapper[5016]: E1011 09:51:26.038169 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fea8725f-5064-485b-8c4a-7992b2800394" containerName="tempest-tests-tempest-tests-runner" Oct 11 09:51:26 crc kubenswrapper[5016]: I1011 09:51:26.038178 5016 
state_mem.go:107] "Deleted CPUSet assignment" podUID="fea8725f-5064-485b-8c4a-7992b2800394" containerName="tempest-tests-tempest-tests-runner" Oct 11 09:51:26 crc kubenswrapper[5016]: E1011 09:51:26.038199 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c548788e-a0de-4113-b361-a80f1e34c528" containerName="extract-utilities" Oct 11 09:51:26 crc kubenswrapper[5016]: I1011 09:51:26.038205 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="c548788e-a0de-4113-b361-a80f1e34c528" containerName="extract-utilities" Oct 11 09:51:26 crc kubenswrapper[5016]: I1011 09:51:26.038395 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="fea8725f-5064-485b-8c4a-7992b2800394" containerName="tempest-tests-tempest-tests-runner" Oct 11 09:51:26 crc kubenswrapper[5016]: I1011 09:51:26.038421 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="c548788e-a0de-4113-b361-a80f1e34c528" containerName="registry-server" Oct 11 09:51:26 crc kubenswrapper[5016]: I1011 09:51:26.039177 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 11 09:51:26 crc kubenswrapper[5016]: I1011 09:51:26.045813 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-kgfkw" Oct 11 09:51:26 crc kubenswrapper[5016]: I1011 09:51:26.051093 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 11 09:51:26 crc kubenswrapper[5016]: I1011 09:51:26.193365 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp8tc\" (UniqueName: \"kubernetes.io/projected/413f979d-3cc5-4ecf-bbe0-6464cd03ecde-kube-api-access-sp8tc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"413f979d-3cc5-4ecf-bbe0-6464cd03ecde\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 11 09:51:26 crc kubenswrapper[5016]: I1011 09:51:26.194160 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"413f979d-3cc5-4ecf-bbe0-6464cd03ecde\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 11 09:51:26 crc kubenswrapper[5016]: I1011 09:51:26.297470 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp8tc\" (UniqueName: \"kubernetes.io/projected/413f979d-3cc5-4ecf-bbe0-6464cd03ecde-kube-api-access-sp8tc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"413f979d-3cc5-4ecf-bbe0-6464cd03ecde\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 11 09:51:26 crc kubenswrapper[5016]: I1011 09:51:26.297531 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"413f979d-3cc5-4ecf-bbe0-6464cd03ecde\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 11 09:51:26 crc kubenswrapper[5016]: I1011 09:51:26.298256 5016 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod 
\"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"413f979d-3cc5-4ecf-bbe0-6464cd03ecde\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 11 09:51:26 crc kubenswrapper[5016]: I1011 09:51:26.320790 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp8tc\" (UniqueName: \"kubernetes.io/projected/413f979d-3cc5-4ecf-bbe0-6464cd03ecde-kube-api-access-sp8tc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"413f979d-3cc5-4ecf-bbe0-6464cd03ecde\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 11 09:51:26 crc kubenswrapper[5016]: I1011 09:51:26.322725 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"413f979d-3cc5-4ecf-bbe0-6464cd03ecde\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 11 09:51:26 crc kubenswrapper[5016]: I1011 09:51:26.379386 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 11 09:51:26 crc kubenswrapper[5016]: I1011 09:51:26.944602 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 11 09:51:27 crc kubenswrapper[5016]: I1011 09:51:27.476030 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"413f979d-3cc5-4ecf-bbe0-6464cd03ecde","Type":"ContainerStarted","Data":"6b06cfb19853ba3f8720a3b7ee120e514ef6f3f5f7fe39ebdf3dca98fc9424eb"} Oct 11 09:51:28 crc kubenswrapper[5016]: I1011 09:51:28.492959 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"413f979d-3cc5-4ecf-bbe0-6464cd03ecde","Type":"ContainerStarted","Data":"ba2ec411673a2e8a6632f174e2c7dbdbe96cdfc5f8c52381a4b27f04af7254aa"} Oct 11 09:51:28 crc kubenswrapper[5016]: I1011 09:51:28.515319 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.632380048 podStartE2EDuration="2.515291579s" podCreationTimestamp="2025-10-11 09:51:26 +0000 UTC" firstStartedPulling="2025-10-11 09:51:26.938534635 +0000 UTC m=+7874.838990591" lastFinishedPulling="2025-10-11 09:51:27.821446176 +0000 UTC m=+7875.721902122" observedRunningTime="2025-10-11 09:51:28.510327897 +0000 UTC m=+7876.410783843" watchObservedRunningTime="2025-10-11 09:51:28.515291579 +0000 UTC m=+7876.415747525" Oct 11 09:51:38 crc kubenswrapper[5016]: I1011 09:51:38.136138 5016 scope.go:117] "RemoveContainer" containerID="8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce" Oct 11 09:51:38 crc kubenswrapper[5016]: E1011 09:51:38.137855 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.133453 5016 scope.go:117] 
"RemoveContainer" containerID="8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce" Oct 11 09:51:50 crc kubenswrapper[5016]: E1011 09:51:50.134311 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.438913 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tobiko-tests-tobiko-s00-podified-functional"] Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.440842 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.443904 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"test-operator-clouds-config" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.444440 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tobiko-tests-tobikotobiko-config" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.444562 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tobiko-tests-tobikotobiko-public-key" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.450316 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"tobiko-secret" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.450353 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tobiko-tests-tobikotobiko-private-key" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.458974 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tobiko-tests-tobiko-s00-podified-functional"] Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.516438 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.516521 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-tobiko-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.516743 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-openstack-config-secret\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.618973 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-xtsp7\" (UniqueName: \"kubernetes.io/projected/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-kube-api-access-xtsp7\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.619036 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.619177 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-ca-certs\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.619236 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-tobiko-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.619442 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-ceph\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.619617 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-kubeconfig\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.619797 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-tobiko-private-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.619839 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-tobiko-public-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.619947 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-openstack-config-secret\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.620124 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.620136 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-tobiko-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.620202 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.620301 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.620333 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.627116 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-openstack-config-secret\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.722986 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.723039 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.723119 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtsp7\" (UniqueName: \"kubernetes.io/projected/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-kube-api-access-xtsp7\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.723167 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-ca-certs\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.723211 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-ceph\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.723257 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-kubeconfig\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.723309 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-tobiko-private-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.723334 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-tobiko-public-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.723415 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.723712 5016 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") device mount path \"/mnt/openstack/pv08\"" 
pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.723956 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.724157 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-tobiko-private-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.724229 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-tobiko-public-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.724580 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.728815 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-ceph\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.729427 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-ca-certs\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.729945 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-kubeconfig\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.744404 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtsp7\" (UniqueName: \"kubernetes.io/projected/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-kube-api-access-xtsp7\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.764313 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:50 crc kubenswrapper[5016]: I1011 09:51:50.783239 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:51:51 crc kubenswrapper[5016]: I1011 09:51:51.334483 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tobiko-tests-tobiko-s00-podified-functional"] Oct 11 09:51:51 crc kubenswrapper[5016]: I1011 09:51:51.791012 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" event={"ID":"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff","Type":"ContainerStarted","Data":"8e8efb7da910eb29ae2f086fdcd9c2a990fc631a880df189e2eb2575af692f74"} Oct 11 09:52:04 crc kubenswrapper[5016]: I1011 09:52:04.134712 5016 scope.go:117] "RemoveContainer" containerID="8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce" Oct 11 09:52:04 crc kubenswrapper[5016]: E1011 09:52:04.135860 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:52:08 crc kubenswrapper[5016]: E1011 09:52:08.014417 5016 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tobiko:current-podified" Oct 11 09:52:08 crc kubenswrapper[5016]: E1011 09:52:08.015585 5016 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tobiko-tests-tobiko,Image:quay.io/podified-antelope-centos9/openstack-tobiko:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TOBIKO_DEBUG_MODE,Value:false,ValueFrom:nil,},EnvVar{Name:TOBIKO_KEYS_FOLDER,Value:/etc/test_operator,ValueFrom:nil,},EnvVar{Name:TOBIKO_LOGS_DIR_NAME,Value:tobiko-tests-tobiko-s00-podified-functional,ValueFrom:nil,},EnvVar{Name:TOBIKO_PYTEST_ADDOPTS,Value:,ValueFrom:nil,},EnvVar{Name:TOBIKO_TESTENV,Value:functional -- tobiko/tests/functional/podified/test_topology.py,ValueFrom:nil,},EnvVar{Name:TOBIKO_VERSION,Value:master,ValueFrom:nil,},EnvVar{Name:TOX_NUM_PROCESSES,Value:2,ValueFrom:nil,},EnvVar{Name:USE_EXTERNAL_FILES,Value:True,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{8 0} {} 8 DecimalSI},memory: {{8589934592 0} {} BinarySI},},Requests:ResourceList{cpu: {{4 0} {} 4 DecimalSI},memory: {{4294967296 0} {} 4Gi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tobiko,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tobiko/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-clouds-config,ReadOnly:true,MountPath:/var/lib/tobiko/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-clouds-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tobiko-config,ReadOnly:false,MountPath:/etc/tobiko/tobiko.conf,SubPath:tobiko.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ca-bundle.trust.crt,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tobiko-private-key,ReadOnly:true,MountPath:/etc/test_operator/id_ecdsa,SubPath:id_ecdsa,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tobiko-public-key,ReadOnly:true,MountPath:/etc/test_operator/id_ecdsa.pub,SubPath:id_ecdsa.pub,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/var/lib/tobiko/.kube/config,SubPath:config,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceph,ReadOnly:true,MountPath:/etc/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xtsp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN NET_RAW],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42495,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42495,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tobiko-tests-tobiko-s00-podified-functional_openstack(0048ddc1-30d3-4acd-8fb4-e84a2eeefcff): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 11 09:52:08 crc kubenswrapper[5016]: E1011 09:52:08.017630 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tobiko-tests-tobiko\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context 
canceled\"" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" podUID="0048ddc1-30d3-4acd-8fb4-e84a2eeefcff" Oct 11 09:52:08 crc kubenswrapper[5016]: E1011 09:52:08.964477 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tobiko-tests-tobiko\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tobiko:current-podified\\\"\"" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" podUID="0048ddc1-30d3-4acd-8fb4-e84a2eeefcff" Oct 11 09:52:15 crc kubenswrapper[5016]: I1011 09:52:15.133763 5016 scope.go:117] "RemoveContainer" containerID="8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce" Oct 11 09:52:15 crc kubenswrapper[5016]: E1011 09:52:15.135363 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:52:25 crc kubenswrapper[5016]: I1011 09:52:25.154189 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" event={"ID":"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff","Type":"ContainerStarted","Data":"2d797554214cf0c02681472fec9ccc089e31d21d24e0f7332a776991af0f903a"} Oct 11 09:52:25 crc kubenswrapper[5016]: I1011 09:52:25.195307 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" podStartSLOduration=3.848949064 podStartE2EDuration="36.195241793s" podCreationTimestamp="2025-10-11 09:51:49 +0000 UTC" firstStartedPulling="2025-10-11 09:51:51.339835346 +0000 UTC m=+7899.240291282" lastFinishedPulling="2025-10-11 09:52:23.686128045 +0000 UTC m=+7931.586584011" observedRunningTime="2025-10-11 09:52:25.187135348 +0000 UTC m=+7933.087591334" watchObservedRunningTime="2025-10-11 09:52:25.195241793 +0000 UTC m=+7933.095697789" Oct 11 09:52:29 crc kubenswrapper[5016]: I1011 09:52:29.135796 5016 scope.go:117] "RemoveContainer" containerID="8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce" Oct 11 09:52:29 crc kubenswrapper[5016]: E1011 09:52:29.137549 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:52:40 crc kubenswrapper[5016]: I1011 09:52:40.133844 5016 scope.go:117] "RemoveContainer" containerID="8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce" Oct 11 09:52:40 crc kubenswrapper[5016]: E1011 09:52:40.138138 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:52:51 crc 
kubenswrapper[5016]: I1011 09:52:51.133772 5016 scope.go:117] "RemoveContainer" containerID="8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce" Oct 11 09:52:51 crc kubenswrapper[5016]: E1011 09:52:51.134643 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:53:06 crc kubenswrapper[5016]: I1011 09:53:06.134231 5016 scope.go:117] "RemoveContainer" containerID="8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce" Oct 11 09:53:06 crc kubenswrapper[5016]: E1011 09:53:06.135587 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:53:17 crc kubenswrapper[5016]: I1011 09:53:17.141718 5016 scope.go:117] "RemoveContainer" containerID="8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce" Oct 11 09:53:17 crc kubenswrapper[5016]: E1011 09:53:17.143167 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:53:29 crc kubenswrapper[5016]: I1011 09:53:29.133913 5016 scope.go:117] "RemoveContainer" containerID="8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce" Oct 11 09:53:29 crc kubenswrapper[5016]: E1011 09:53:29.134818 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:53:41 crc kubenswrapper[5016]: I1011 09:53:41.163957 5016 generic.go:334] "Generic (PLEG): container finished" podID="0048ddc1-30d3-4acd-8fb4-e84a2eeefcff" containerID="2d797554214cf0c02681472fec9ccc089e31d21d24e0f7332a776991af0f903a" exitCode=0 Oct 11 09:53:41 crc kubenswrapper[5016]: I1011 09:53:41.164202 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" event={"ID":"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff","Type":"ContainerDied","Data":"2d797554214cf0c02681472fec9ccc089e31d21d24e0f7332a776991af0f903a"} Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.761057 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.837152 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tobiko-tests-tobiko-s01-sanity"] Oct 11 09:53:42 crc kubenswrapper[5016]: E1011 09:53:42.838128 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0048ddc1-30d3-4acd-8fb4-e84a2eeefcff" containerName="tobiko-tests-tobiko" Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.838164 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="0048ddc1-30d3-4acd-8fb4-e84a2eeefcff" containerName="tobiko-tests-tobiko" Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.838538 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="0048ddc1-30d3-4acd-8fb4-e84a2eeefcff" containerName="tobiko-tests-tobiko" Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.839952 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.855751 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tobiko-tests-tobiko-s01-sanity"] Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.933993 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-tobiko-config\") pod \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.934052 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-tobiko-private-key\") pod \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.934209 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.934257 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-test-operator-ephemeral-workdir\") pod \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.934349 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-test-operator-clouds-config\") pod \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.934389 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtsp7\" (UniqueName: \"kubernetes.io/projected/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-kube-api-access-xtsp7\") pod \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.934454 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-tobiko-public-key\") pod \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.934507 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-test-operator-ephemeral-temporary\") pod \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.934580 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-openstack-config-secret\") pod \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.934626 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-kubeconfig\") pod \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.934681 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-ca-certs\") pod \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.934758 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-ceph\") pod \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\" (UID: \"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff\") " Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.935211 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/27f9f1e0-94f0-4652-a42a-26cfb348c583-ceph\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.935257 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/27f9f1e0-94f0-4652-a42a-26cfb348c583-kubeconfig\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.935311 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/27f9f1e0-94f0-4652-a42a-26cfb348c583-ca-certs\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.935346 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/27f9f1e0-94f0-4652-a42a-26cfb348c583-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: 
\"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.935371 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/27f9f1e0-94f0-4652-a42a-26cfb348c583-tobiko-private-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.935427 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/27f9f1e0-94f0-4652-a42a-26cfb348c583-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.935456 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/27f9f1e0-94f0-4652-a42a-26cfb348c583-openstack-config-secret\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.935554 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q978j\" (UniqueName: \"kubernetes.io/projected/27f9f1e0-94f0-4652-a42a-26cfb348c583-kube-api-access-q978j\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.935590 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/27f9f1e0-94f0-4652-a42a-26cfb348c583-tobiko-public-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.935621 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/27f9f1e0-94f0-4652-a42a-26cfb348c583-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.935667 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/27f9f1e0-94f0-4652-a42a-26cfb348c583-tobiko-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.937738 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "0048ddc1-30d3-4acd-8fb4-e84a2eeefcff" (UID: "0048ddc1-30d3-4acd-8fb4-e84a2eeefcff"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.942252 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-ceph" (OuterVolumeSpecName: "ceph") pod "0048ddc1-30d3-4acd-8fb4-e84a2eeefcff" (UID: "0048ddc1-30d3-4acd-8fb4-e84a2eeefcff"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.944792 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "test-operator-logs") pod "0048ddc1-30d3-4acd-8fb4-e84a2eeefcff" (UID: "0048ddc1-30d3-4acd-8fb4-e84a2eeefcff"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.947184 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-kube-api-access-xtsp7" (OuterVolumeSpecName: "kube-api-access-xtsp7") pod "0048ddc1-30d3-4acd-8fb4-e84a2eeefcff" (UID: "0048ddc1-30d3-4acd-8fb4-e84a2eeefcff"). InnerVolumeSpecName "kube-api-access-xtsp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.981211 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-kubeconfig" (OuterVolumeSpecName: "kubeconfig") pod "0048ddc1-30d3-4acd-8fb4-e84a2eeefcff" (UID: "0048ddc1-30d3-4acd-8fb4-e84a2eeefcff"). InnerVolumeSpecName "kubeconfig". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.981432 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "0048ddc1-30d3-4acd-8fb4-e84a2eeefcff" (UID: "0048ddc1-30d3-4acd-8fb4-e84a2eeefcff"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.981667 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-tobiko-public-key" (OuterVolumeSpecName: "tobiko-public-key") pod "0048ddc1-30d3-4acd-8fb4-e84a2eeefcff" (UID: "0048ddc1-30d3-4acd-8fb4-e84a2eeefcff"). InnerVolumeSpecName "tobiko-public-key". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.981643 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-tobiko-private-key" (OuterVolumeSpecName: "tobiko-private-key") pod "0048ddc1-30d3-4acd-8fb4-e84a2eeefcff" (UID: "0048ddc1-30d3-4acd-8fb4-e84a2eeefcff"). InnerVolumeSpecName "tobiko-private-key". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 09:53:42 crc kubenswrapper[5016]: I1011 09:53:42.998153 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-tobiko-config" (OuterVolumeSpecName: "tobiko-config") pod "0048ddc1-30d3-4acd-8fb4-e84a2eeefcff" (UID: "0048ddc1-30d3-4acd-8fb4-e84a2eeefcff"). InnerVolumeSpecName "tobiko-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.011467 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-test-operator-clouds-config" (OuterVolumeSpecName: "test-operator-clouds-config") pod "0048ddc1-30d3-4acd-8fb4-e84a2eeefcff" (UID: "0048ddc1-30d3-4acd-8fb4-e84a2eeefcff"). InnerVolumeSpecName "test-operator-clouds-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.017182 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "0048ddc1-30d3-4acd-8fb4-e84a2eeefcff" (UID: "0048ddc1-30d3-4acd-8fb4-e84a2eeefcff"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.038185 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/27f9f1e0-94f0-4652-a42a-26cfb348c583-ceph\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.038240 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/27f9f1e0-94f0-4652-a42a-26cfb348c583-kubeconfig\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.038270 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/27f9f1e0-94f0-4652-a42a-26cfb348c583-ca-certs\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.038297 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/27f9f1e0-94f0-4652-a42a-26cfb348c583-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.038316 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/27f9f1e0-94f0-4652-a42a-26cfb348c583-tobiko-private-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.038346 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.038382 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/27f9f1e0-94f0-4652-a42a-26cfb348c583-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.038413 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/27f9f1e0-94f0-4652-a42a-26cfb348c583-openstack-config-secret\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.038470 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q978j\" (UniqueName: \"kubernetes.io/projected/27f9f1e0-94f0-4652-a42a-26cfb348c583-kube-api-access-q978j\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.038505 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/27f9f1e0-94f0-4652-a42a-26cfb348c583-tobiko-public-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.038542 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/27f9f1e0-94f0-4652-a42a-26cfb348c583-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.038571 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/27f9f1e0-94f0-4652-a42a-26cfb348c583-tobiko-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.038639 5016 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-ceph\") on node \"crc\" DevicePath \"\"" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.038711 5016 reconciler_common.go:293] "Volume detached for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-tobiko-config\") on node \"crc\" DevicePath \"\"" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.038726 5016 reconciler_common.go:293] "Volume detached for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-tobiko-private-key\") on node \"crc\" DevicePath \"\"" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.038741 5016 reconciler_common.go:293] "Volume detached for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-test-operator-clouds-config\") on node \"crc\" DevicePath \"\"" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.038754 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtsp7\" (UniqueName: 
\"kubernetes.io/projected/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-kube-api-access-xtsp7\") on node \"crc\" DevicePath \"\"" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.038767 5016 reconciler_common.go:293] "Volume detached for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-tobiko-public-key\") on node \"crc\" DevicePath \"\"" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.038780 5016 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.038794 5016 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.038809 5016 reconciler_common.go:293] "Volume detached for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-kubeconfig\") on node \"crc\" DevicePath \"\"" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.038821 5016 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-ca-certs\") on node \"crc\" DevicePath \"\"" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.040288 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/27f9f1e0-94f0-4652-a42a-26cfb348c583-tobiko-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.041517 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/27f9f1e0-94f0-4652-a42a-26cfb348c583-tobiko-public-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.041596 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/27f9f1e0-94f0-4652-a42a-26cfb348c583-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.041944 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/27f9f1e0-94f0-4652-a42a-26cfb348c583-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.043231 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/27f9f1e0-94f0-4652-a42a-26cfb348c583-tobiko-private-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 
09:53:43.043648 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/27f9f1e0-94f0-4652-a42a-26cfb348c583-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.045988 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/27f9f1e0-94f0-4652-a42a-26cfb348c583-kubeconfig\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.046406 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/27f9f1e0-94f0-4652-a42a-26cfb348c583-openstack-config-secret\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.048425 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/27f9f1e0-94f0-4652-a42a-26cfb348c583-ca-certs\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.060022 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/27f9f1e0-94f0-4652-a42a-26cfb348c583-ceph\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.065447 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q978j\" (UniqueName: \"kubernetes.io/projected/27f9f1e0-94f0-4652-a42a-26cfb348c583-kube-api-access-q978j\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.073367 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.144523 5016 scope.go:117] "RemoveContainer" containerID="8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce" Oct 11 09:53:43 crc kubenswrapper[5016]: E1011 09:53:43.145217 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.175936 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.200806 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" event={"ID":"0048ddc1-30d3-4acd-8fb4-e84a2eeefcff","Type":"ContainerDied","Data":"8e8efb7da910eb29ae2f086fdcd9c2a990fc631a880df189e2eb2575af692f74"} Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.200875 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e8efb7da910eb29ae2f086fdcd9c2a990fc631a880df189e2eb2575af692f74" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.200940 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Oct 11 09:53:43 crc kubenswrapper[5016]: I1011 09:53:43.880124 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tobiko-tests-tobiko-s01-sanity"] Oct 11 09:53:44 crc kubenswrapper[5016]: I1011 09:53:44.211826 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s01-sanity" event={"ID":"27f9f1e0-94f0-4652-a42a-26cfb348c583","Type":"ContainerStarted","Data":"f0be08253012b50ed5c3226d3cdc7c7ad2db461596000942dce34ab684695a20"} Oct 11 09:53:44 crc kubenswrapper[5016]: I1011 09:53:44.650300 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "0048ddc1-30d3-4acd-8fb4-e84a2eeefcff" (UID: "0048ddc1-30d3-4acd-8fb4-e84a2eeefcff"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:53:44 crc kubenswrapper[5016]: I1011 09:53:44.689267 5016 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0048ddc1-30d3-4acd-8fb4-e84a2eeefcff-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Oct 11 09:53:45 crc kubenswrapper[5016]: I1011 09:53:45.228645 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s01-sanity" event={"ID":"27f9f1e0-94f0-4652-a42a-26cfb348c583","Type":"ContainerStarted","Data":"5247261bd2dcb9843b3770492c392eef7d37c786a21d5b79fccda29a36d616f9"} Oct 11 09:53:45 crc kubenswrapper[5016]: I1011 09:53:45.260027 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tobiko-tests-tobiko-s01-sanity" podStartSLOduration=3.260003334 podStartE2EDuration="3.260003334s" podCreationTimestamp="2025-10-11 09:53:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 09:53:45.255994237 +0000 UTC m=+8013.156450223" watchObservedRunningTime="2025-10-11 09:53:45.260003334 +0000 UTC m=+8013.160459290" Oct 11 09:53:45 crc kubenswrapper[5016]: I1011 09:53:45.644716 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c4jxx"] Oct 11 09:53:45 crc kubenswrapper[5016]: I1011 09:53:45.647342 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c4jxx" Oct 11 09:53:45 crc kubenswrapper[5016]: I1011 09:53:45.676763 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c4jxx"] Oct 11 09:53:45 crc kubenswrapper[5016]: I1011 09:53:45.817518 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51ccd81c-092f-4112-9237-38c3133c0073-utilities\") pod \"community-operators-c4jxx\" (UID: \"51ccd81c-092f-4112-9237-38c3133c0073\") " pod="openshift-marketplace/community-operators-c4jxx" Oct 11 09:53:45 crc kubenswrapper[5016]: I1011 09:53:45.818007 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51ccd81c-092f-4112-9237-38c3133c0073-catalog-content\") pod \"community-operators-c4jxx\" (UID: \"51ccd81c-092f-4112-9237-38c3133c0073\") " pod="openshift-marketplace/community-operators-c4jxx" Oct 11 09:53:45 crc kubenswrapper[5016]: I1011 09:53:45.818356 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qc8b\" (UniqueName: \"kubernetes.io/projected/51ccd81c-092f-4112-9237-38c3133c0073-kube-api-access-7qc8b\") pod \"community-operators-c4jxx\" (UID: \"51ccd81c-092f-4112-9237-38c3133c0073\") " pod="openshift-marketplace/community-operators-c4jxx" Oct 11 09:53:45 crc kubenswrapper[5016]: I1011 09:53:45.921383 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51ccd81c-092f-4112-9237-38c3133c0073-utilities\") pod \"community-operators-c4jxx\" (UID: \"51ccd81c-092f-4112-9237-38c3133c0073\") " pod="openshift-marketplace/community-operators-c4jxx" Oct 11 09:53:45 crc kubenswrapper[5016]: I1011 09:53:45.921589 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51ccd81c-092f-4112-9237-38c3133c0073-catalog-content\") pod \"community-operators-c4jxx\" (UID: \"51ccd81c-092f-4112-9237-38c3133c0073\") " pod="openshift-marketplace/community-operators-c4jxx" Oct 11 09:53:45 crc kubenswrapper[5016]: I1011 09:53:45.921787 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qc8b\" (UniqueName: \"kubernetes.io/projected/51ccd81c-092f-4112-9237-38c3133c0073-kube-api-access-7qc8b\") pod \"community-operators-c4jxx\" (UID: \"51ccd81c-092f-4112-9237-38c3133c0073\") " pod="openshift-marketplace/community-operators-c4jxx" Oct 11 09:53:45 crc kubenswrapper[5016]: I1011 09:53:45.922128 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51ccd81c-092f-4112-9237-38c3133c0073-utilities\") pod \"community-operators-c4jxx\" (UID: \"51ccd81c-092f-4112-9237-38c3133c0073\") " pod="openshift-marketplace/community-operators-c4jxx" Oct 11 09:53:45 crc kubenswrapper[5016]: I1011 09:53:45.922294 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51ccd81c-092f-4112-9237-38c3133c0073-catalog-content\") pod \"community-operators-c4jxx\" (UID: \"51ccd81c-092f-4112-9237-38c3133c0073\") " pod="openshift-marketplace/community-operators-c4jxx" Oct 11 09:53:45 crc kubenswrapper[5016]: I1011 09:53:45.947060 5016 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7qc8b\" (UniqueName: \"kubernetes.io/projected/51ccd81c-092f-4112-9237-38c3133c0073-kube-api-access-7qc8b\") pod \"community-operators-c4jxx\" (UID: \"51ccd81c-092f-4112-9237-38c3133c0073\") " pod="openshift-marketplace/community-operators-c4jxx" Oct 11 09:53:45 crc kubenswrapper[5016]: I1011 09:53:45.986549 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c4jxx" Oct 11 09:53:46 crc kubenswrapper[5016]: I1011 09:53:46.604170 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c4jxx"] Oct 11 09:53:46 crc kubenswrapper[5016]: W1011 09:53:46.608146 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51ccd81c_092f_4112_9237_38c3133c0073.slice/crio-ae04bda0cf8eaa3c518977a5088bca08b650860da7203b75b998c3a151fc4a8e WatchSource:0}: Error finding container ae04bda0cf8eaa3c518977a5088bca08b650860da7203b75b998c3a151fc4a8e: Status 404 returned error can't find the container with id ae04bda0cf8eaa3c518977a5088bca08b650860da7203b75b998c3a151fc4a8e Oct 11 09:53:47 crc kubenswrapper[5016]: I1011 09:53:47.274900 5016 generic.go:334] "Generic (PLEG): container finished" podID="51ccd81c-092f-4112-9237-38c3133c0073" containerID="18f7ea38000abd25c1a0d254c1aee976c495d7d68cc25a59f761002cbc4f8ff5" exitCode=0 Oct 11 09:53:47 crc kubenswrapper[5016]: I1011 09:53:47.275080 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4jxx" event={"ID":"51ccd81c-092f-4112-9237-38c3133c0073","Type":"ContainerDied","Data":"18f7ea38000abd25c1a0d254c1aee976c495d7d68cc25a59f761002cbc4f8ff5"} Oct 11 09:53:47 crc kubenswrapper[5016]: I1011 09:53:47.275461 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4jxx" event={"ID":"51ccd81c-092f-4112-9237-38c3133c0073","Type":"ContainerStarted","Data":"ae04bda0cf8eaa3c518977a5088bca08b650860da7203b75b998c3a151fc4a8e"} Oct 11 09:53:48 crc kubenswrapper[5016]: I1011 09:53:48.288701 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4jxx" event={"ID":"51ccd81c-092f-4112-9237-38c3133c0073","Type":"ContainerStarted","Data":"f6730ae2bad14ed503c6a0538c72ff347e69678af47daa838026ace44b094eda"} Oct 11 09:53:49 crc kubenswrapper[5016]: I1011 09:53:49.303731 5016 generic.go:334] "Generic (PLEG): container finished" podID="51ccd81c-092f-4112-9237-38c3133c0073" containerID="f6730ae2bad14ed503c6a0538c72ff347e69678af47daa838026ace44b094eda" exitCode=0 Oct 11 09:53:49 crc kubenswrapper[5016]: I1011 09:53:49.303798 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4jxx" event={"ID":"51ccd81c-092f-4112-9237-38c3133c0073","Type":"ContainerDied","Data":"f6730ae2bad14ed503c6a0538c72ff347e69678af47daa838026ace44b094eda"} Oct 11 09:53:50 crc kubenswrapper[5016]: I1011 09:53:50.323922 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4jxx" event={"ID":"51ccd81c-092f-4112-9237-38c3133c0073","Type":"ContainerStarted","Data":"71345096398f3e584bdc0556ec4ce2427c89ed8dcdecd69e372c990e289c1663"} Oct 11 09:53:50 crc kubenswrapper[5016]: I1011 09:53:50.367334 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c4jxx" 
podStartSLOduration=2.879500829 podStartE2EDuration="5.367297719s" podCreationTimestamp="2025-10-11 09:53:45 +0000 UTC" firstStartedPulling="2025-10-11 09:53:47.278933612 +0000 UTC m=+8015.179389558" lastFinishedPulling="2025-10-11 09:53:49.766730502 +0000 UTC m=+8017.667186448" observedRunningTime="2025-10-11 09:53:50.352478516 +0000 UTC m=+8018.252934472" watchObservedRunningTime="2025-10-11 09:53:50.367297719 +0000 UTC m=+8018.267753685" Oct 11 09:53:55 crc kubenswrapper[5016]: I1011 09:53:55.134283 5016 scope.go:117] "RemoveContainer" containerID="8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce" Oct 11 09:53:55 crc kubenswrapper[5016]: E1011 09:53:55.135473 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:53:55 crc kubenswrapper[5016]: I1011 09:53:55.987612 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c4jxx" Oct 11 09:53:55 crc kubenswrapper[5016]: I1011 09:53:55.990695 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c4jxx" Oct 11 09:53:56 crc kubenswrapper[5016]: I1011 09:53:56.065553 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c4jxx" Oct 11 09:53:56 crc kubenswrapper[5016]: I1011 09:53:56.488320 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c4jxx" Oct 11 09:53:56 crc kubenswrapper[5016]: I1011 09:53:56.564437 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c4jxx"] Oct 11 09:53:58 crc kubenswrapper[5016]: I1011 09:53:58.441777 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c4jxx" podUID="51ccd81c-092f-4112-9237-38c3133c0073" containerName="registry-server" containerID="cri-o://71345096398f3e584bdc0556ec4ce2427c89ed8dcdecd69e372c990e289c1663" gracePeriod=2 Oct 11 09:53:59 crc kubenswrapper[5016]: I1011 09:53:59.007759 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c4jxx" Oct 11 09:53:59 crc kubenswrapper[5016]: I1011 09:53:59.067619 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qc8b\" (UniqueName: \"kubernetes.io/projected/51ccd81c-092f-4112-9237-38c3133c0073-kube-api-access-7qc8b\") pod \"51ccd81c-092f-4112-9237-38c3133c0073\" (UID: \"51ccd81c-092f-4112-9237-38c3133c0073\") " Oct 11 09:53:59 crc kubenswrapper[5016]: I1011 09:53:59.067788 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51ccd81c-092f-4112-9237-38c3133c0073-utilities\") pod \"51ccd81c-092f-4112-9237-38c3133c0073\" (UID: \"51ccd81c-092f-4112-9237-38c3133c0073\") " Oct 11 09:53:59 crc kubenswrapper[5016]: I1011 09:53:59.067898 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51ccd81c-092f-4112-9237-38c3133c0073-catalog-content\") pod \"51ccd81c-092f-4112-9237-38c3133c0073\" (UID: \"51ccd81c-092f-4112-9237-38c3133c0073\") " Oct 11 09:53:59 crc kubenswrapper[5016]: I1011 09:53:59.069561 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51ccd81c-092f-4112-9237-38c3133c0073-utilities" (OuterVolumeSpecName: "utilities") pod "51ccd81c-092f-4112-9237-38c3133c0073" (UID: "51ccd81c-092f-4112-9237-38c3133c0073"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:53:59 crc kubenswrapper[5016]: I1011 09:53:59.077726 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51ccd81c-092f-4112-9237-38c3133c0073-kube-api-access-7qc8b" (OuterVolumeSpecName: "kube-api-access-7qc8b") pod "51ccd81c-092f-4112-9237-38c3133c0073" (UID: "51ccd81c-092f-4112-9237-38c3133c0073"). InnerVolumeSpecName "kube-api-access-7qc8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 09:53:59 crc kubenswrapper[5016]: I1011 09:53:59.139633 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51ccd81c-092f-4112-9237-38c3133c0073-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51ccd81c-092f-4112-9237-38c3133c0073" (UID: "51ccd81c-092f-4112-9237-38c3133c0073"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:53:59 crc kubenswrapper[5016]: I1011 09:53:59.171331 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qc8b\" (UniqueName: \"kubernetes.io/projected/51ccd81c-092f-4112-9237-38c3133c0073-kube-api-access-7qc8b\") on node \"crc\" DevicePath \"\"" Oct 11 09:53:59 crc kubenswrapper[5016]: I1011 09:53:59.171369 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51ccd81c-092f-4112-9237-38c3133c0073-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 09:53:59 crc kubenswrapper[5016]: I1011 09:53:59.171378 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51ccd81c-092f-4112-9237-38c3133c0073-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 09:53:59 crc kubenswrapper[5016]: I1011 09:53:59.455319 5016 generic.go:334] "Generic (PLEG): container finished" podID="51ccd81c-092f-4112-9237-38c3133c0073" containerID="71345096398f3e584bdc0556ec4ce2427c89ed8dcdecd69e372c990e289c1663" exitCode=0 Oct 11 09:53:59 crc kubenswrapper[5016]: I1011 09:53:59.455441 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c4jxx" Oct 11 09:53:59 crc kubenswrapper[5016]: I1011 09:53:59.455461 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4jxx" event={"ID":"51ccd81c-092f-4112-9237-38c3133c0073","Type":"ContainerDied","Data":"71345096398f3e584bdc0556ec4ce2427c89ed8dcdecd69e372c990e289c1663"} Oct 11 09:53:59 crc kubenswrapper[5016]: I1011 09:53:59.456074 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4jxx" event={"ID":"51ccd81c-092f-4112-9237-38c3133c0073","Type":"ContainerDied","Data":"ae04bda0cf8eaa3c518977a5088bca08b650860da7203b75b998c3a151fc4a8e"} Oct 11 09:53:59 crc kubenswrapper[5016]: I1011 09:53:59.456108 5016 scope.go:117] "RemoveContainer" containerID="71345096398f3e584bdc0556ec4ce2427c89ed8dcdecd69e372c990e289c1663" Oct 11 09:53:59 crc kubenswrapper[5016]: I1011 09:53:59.489449 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c4jxx"] Oct 11 09:53:59 crc kubenswrapper[5016]: I1011 09:53:59.495597 5016 scope.go:117] "RemoveContainer" containerID="f6730ae2bad14ed503c6a0538c72ff347e69678af47daa838026ace44b094eda" Oct 11 09:53:59 crc kubenswrapper[5016]: I1011 09:53:59.499320 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c4jxx"] Oct 11 09:53:59 crc kubenswrapper[5016]: I1011 09:53:59.534740 5016 scope.go:117] "RemoveContainer" containerID="18f7ea38000abd25c1a0d254c1aee976c495d7d68cc25a59f761002cbc4f8ff5" Oct 11 09:53:59 crc kubenswrapper[5016]: I1011 09:53:59.603417 5016 scope.go:117] "RemoveContainer" containerID="71345096398f3e584bdc0556ec4ce2427c89ed8dcdecd69e372c990e289c1663" Oct 11 09:53:59 crc kubenswrapper[5016]: E1011 09:53:59.604083 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71345096398f3e584bdc0556ec4ce2427c89ed8dcdecd69e372c990e289c1663\": container with ID starting with 71345096398f3e584bdc0556ec4ce2427c89ed8dcdecd69e372c990e289c1663 not found: ID does not exist" containerID="71345096398f3e584bdc0556ec4ce2427c89ed8dcdecd69e372c990e289c1663" Oct 11 09:53:59 crc kubenswrapper[5016]: I1011 09:53:59.604148 
5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71345096398f3e584bdc0556ec4ce2427c89ed8dcdecd69e372c990e289c1663"} err="failed to get container status \"71345096398f3e584bdc0556ec4ce2427c89ed8dcdecd69e372c990e289c1663\": rpc error: code = NotFound desc = could not find container \"71345096398f3e584bdc0556ec4ce2427c89ed8dcdecd69e372c990e289c1663\": container with ID starting with 71345096398f3e584bdc0556ec4ce2427c89ed8dcdecd69e372c990e289c1663 not found: ID does not exist" Oct 11 09:53:59 crc kubenswrapper[5016]: I1011 09:53:59.604197 5016 scope.go:117] "RemoveContainer" containerID="f6730ae2bad14ed503c6a0538c72ff347e69678af47daa838026ace44b094eda" Oct 11 09:53:59 crc kubenswrapper[5016]: E1011 09:53:59.604640 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6730ae2bad14ed503c6a0538c72ff347e69678af47daa838026ace44b094eda\": container with ID starting with f6730ae2bad14ed503c6a0538c72ff347e69678af47daa838026ace44b094eda not found: ID does not exist" containerID="f6730ae2bad14ed503c6a0538c72ff347e69678af47daa838026ace44b094eda" Oct 11 09:53:59 crc kubenswrapper[5016]: I1011 09:53:59.604681 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6730ae2bad14ed503c6a0538c72ff347e69678af47daa838026ace44b094eda"} err="failed to get container status \"f6730ae2bad14ed503c6a0538c72ff347e69678af47daa838026ace44b094eda\": rpc error: code = NotFound desc = could not find container \"f6730ae2bad14ed503c6a0538c72ff347e69678af47daa838026ace44b094eda\": container with ID starting with f6730ae2bad14ed503c6a0538c72ff347e69678af47daa838026ace44b094eda not found: ID does not exist" Oct 11 09:53:59 crc kubenswrapper[5016]: I1011 09:53:59.604700 5016 scope.go:117] "RemoveContainer" containerID="18f7ea38000abd25c1a0d254c1aee976c495d7d68cc25a59f761002cbc4f8ff5" Oct 11 09:53:59 crc kubenswrapper[5016]: E1011 09:53:59.604998 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18f7ea38000abd25c1a0d254c1aee976c495d7d68cc25a59f761002cbc4f8ff5\": container with ID starting with 18f7ea38000abd25c1a0d254c1aee976c495d7d68cc25a59f761002cbc4f8ff5 not found: ID does not exist" containerID="18f7ea38000abd25c1a0d254c1aee976c495d7d68cc25a59f761002cbc4f8ff5" Oct 11 09:53:59 crc kubenswrapper[5016]: I1011 09:53:59.605025 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18f7ea38000abd25c1a0d254c1aee976c495d7d68cc25a59f761002cbc4f8ff5"} err="failed to get container status \"18f7ea38000abd25c1a0d254c1aee976c495d7d68cc25a59f761002cbc4f8ff5\": rpc error: code = NotFound desc = could not find container \"18f7ea38000abd25c1a0d254c1aee976c495d7d68cc25a59f761002cbc4f8ff5\": container with ID starting with 18f7ea38000abd25c1a0d254c1aee976c495d7d68cc25a59f761002cbc4f8ff5 not found: ID does not exist" Oct 11 09:54:01 crc kubenswrapper[5016]: I1011 09:54:01.146605 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51ccd81c-092f-4112-9237-38c3133c0073" path="/var/lib/kubelet/pods/51ccd81c-092f-4112-9237-38c3133c0073/volumes" Oct 11 09:54:07 crc kubenswrapper[5016]: I1011 09:54:07.144032 5016 scope.go:117] "RemoveContainer" containerID="8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce" Oct 11 09:54:07 crc kubenswrapper[5016]: E1011 09:54:07.145327 5016 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 09:54:21 crc kubenswrapper[5016]: I1011 09:54:21.133149 5016 scope.go:117] "RemoveContainer" containerID="8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce" Oct 11 09:54:21 crc kubenswrapper[5016]: I1011 09:54:21.741311 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"9699157bc82b30e2fc03a2932d077fc463d754d44b6799fcd3045f3f2d912728"} Oct 11 09:55:26 crc kubenswrapper[5016]: I1011 09:55:26.511878 5016 generic.go:334] "Generic (PLEG): container finished" podID="27f9f1e0-94f0-4652-a42a-26cfb348c583" containerID="5247261bd2dcb9843b3770492c392eef7d37c786a21d5b79fccda29a36d616f9" exitCode=0 Oct 11 09:55:26 crc kubenswrapper[5016]: I1011 09:55:26.511966 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s01-sanity" event={"ID":"27f9f1e0-94f0-4652-a42a-26cfb348c583","Type":"ContainerDied","Data":"5247261bd2dcb9843b3770492c392eef7d37c786a21d5b79fccda29a36d616f9"} Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.125291 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.323967 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/27f9f1e0-94f0-4652-a42a-26cfb348c583-tobiko-public-key\") pod \"27f9f1e0-94f0-4652-a42a-26cfb348c583\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.324218 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q978j\" (UniqueName: \"kubernetes.io/projected/27f9f1e0-94f0-4652-a42a-26cfb348c583-kube-api-access-q978j\") pod \"27f9f1e0-94f0-4652-a42a-26cfb348c583\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.324263 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/27f9f1e0-94f0-4652-a42a-26cfb348c583-openstack-config-secret\") pod \"27f9f1e0-94f0-4652-a42a-26cfb348c583\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.324338 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/27f9f1e0-94f0-4652-a42a-26cfb348c583-test-operator-clouds-config\") pod \"27f9f1e0-94f0-4652-a42a-26cfb348c583\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.324418 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/27f9f1e0-94f0-4652-a42a-26cfb348c583-tobiko-config\") pod \"27f9f1e0-94f0-4652-a42a-26cfb348c583\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.324471 5016 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/27f9f1e0-94f0-4652-a42a-26cfb348c583-tobiko-private-key\") pod \"27f9f1e0-94f0-4652-a42a-26cfb348c583\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.324569 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/27f9f1e0-94f0-4652-a42a-26cfb348c583-test-operator-ephemeral-workdir\") pod \"27f9f1e0-94f0-4652-a42a-26cfb348c583\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.324720 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/27f9f1e0-94f0-4652-a42a-26cfb348c583-kubeconfig\") pod \"27f9f1e0-94f0-4652-a42a-26cfb348c583\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.324779 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/27f9f1e0-94f0-4652-a42a-26cfb348c583-test-operator-ephemeral-temporary\") pod \"27f9f1e0-94f0-4652-a42a-26cfb348c583\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.324820 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"27f9f1e0-94f0-4652-a42a-26cfb348c583\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.324871 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/27f9f1e0-94f0-4652-a42a-26cfb348c583-ceph\") pod \"27f9f1e0-94f0-4652-a42a-26cfb348c583\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.324923 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/27f9f1e0-94f0-4652-a42a-26cfb348c583-ca-certs\") pod \"27f9f1e0-94f0-4652-a42a-26cfb348c583\" (UID: \"27f9f1e0-94f0-4652-a42a-26cfb348c583\") " Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.326287 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27f9f1e0-94f0-4652-a42a-26cfb348c583-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "27f9f1e0-94f0-4652-a42a-26cfb348c583" (UID: "27f9f1e0-94f0-4652-a42a-26cfb348c583"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.332725 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "test-operator-logs") pod "27f9f1e0-94f0-4652-a42a-26cfb348c583" (UID: "27f9f1e0-94f0-4652-a42a-26cfb348c583"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.333924 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27f9f1e0-94f0-4652-a42a-26cfb348c583-ceph" (OuterVolumeSpecName: "ceph") pod "27f9f1e0-94f0-4652-a42a-26cfb348c583" (UID: "27f9f1e0-94f0-4652-a42a-26cfb348c583"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.337611 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27f9f1e0-94f0-4652-a42a-26cfb348c583-kube-api-access-q978j" (OuterVolumeSpecName: "kube-api-access-q978j") pod "27f9f1e0-94f0-4652-a42a-26cfb348c583" (UID: "27f9f1e0-94f0-4652-a42a-26cfb348c583"). InnerVolumeSpecName "kube-api-access-q978j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.362710 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27f9f1e0-94f0-4652-a42a-26cfb348c583-tobiko-public-key" (OuterVolumeSpecName: "tobiko-public-key") pod "27f9f1e0-94f0-4652-a42a-26cfb348c583" (UID: "27f9f1e0-94f0-4652-a42a-26cfb348c583"). InnerVolumeSpecName "tobiko-public-key". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.365091 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27f9f1e0-94f0-4652-a42a-26cfb348c583-tobiko-private-key" (OuterVolumeSpecName: "tobiko-private-key") pod "27f9f1e0-94f0-4652-a42a-26cfb348c583" (UID: "27f9f1e0-94f0-4652-a42a-26cfb348c583"). InnerVolumeSpecName "tobiko-private-key". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.367883 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27f9f1e0-94f0-4652-a42a-26cfb348c583-kubeconfig" (OuterVolumeSpecName: "kubeconfig") pod "27f9f1e0-94f0-4652-a42a-26cfb348c583" (UID: "27f9f1e0-94f0-4652-a42a-26cfb348c583"). InnerVolumeSpecName "kubeconfig". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.368463 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27f9f1e0-94f0-4652-a42a-26cfb348c583-tobiko-config" (OuterVolumeSpecName: "tobiko-config") pod "27f9f1e0-94f0-4652-a42a-26cfb348c583" (UID: "27f9f1e0-94f0-4652-a42a-26cfb348c583"). InnerVolumeSpecName "tobiko-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.383822 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27f9f1e0-94f0-4652-a42a-26cfb348c583-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "27f9f1e0-94f0-4652-a42a-26cfb348c583" (UID: "27f9f1e0-94f0-4652-a42a-26cfb348c583"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.407525 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27f9f1e0-94f0-4652-a42a-26cfb348c583-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "27f9f1e0-94f0-4652-a42a-26cfb348c583" (UID: "27f9f1e0-94f0-4652-a42a-26cfb348c583"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.414438 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27f9f1e0-94f0-4652-a42a-26cfb348c583-test-operator-clouds-config" (OuterVolumeSpecName: "test-operator-clouds-config") pod "27f9f1e0-94f0-4652-a42a-26cfb348c583" (UID: "27f9f1e0-94f0-4652-a42a-26cfb348c583"). InnerVolumeSpecName "test-operator-clouds-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.427931 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q978j\" (UniqueName: \"kubernetes.io/projected/27f9f1e0-94f0-4652-a42a-26cfb348c583-kube-api-access-q978j\") on node \"crc\" DevicePath \"\"" Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.427961 5016 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/27f9f1e0-94f0-4652-a42a-26cfb348c583-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.427975 5016 reconciler_common.go:293] "Volume detached for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/27f9f1e0-94f0-4652-a42a-26cfb348c583-test-operator-clouds-config\") on node \"crc\" DevicePath \"\"" Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.427989 5016 reconciler_common.go:293] "Volume detached for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/27f9f1e0-94f0-4652-a42a-26cfb348c583-tobiko-config\") on node \"crc\" DevicePath \"\"" Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.428003 5016 reconciler_common.go:293] "Volume detached for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/27f9f1e0-94f0-4652-a42a-26cfb348c583-tobiko-private-key\") on node \"crc\" DevicePath \"\"" Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.428016 5016 reconciler_common.go:293] "Volume detached for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/27f9f1e0-94f0-4652-a42a-26cfb348c583-kubeconfig\") on node \"crc\" DevicePath \"\"" Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.428028 5016 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/27f9f1e0-94f0-4652-a42a-26cfb348c583-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.428065 5016 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.428080 5016 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/27f9f1e0-94f0-4652-a42a-26cfb348c583-ceph\") on node \"crc\" DevicePath \"\"" Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.428092 5016 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/27f9f1e0-94f0-4652-a42a-26cfb348c583-ca-certs\") on node \"crc\" DevicePath \"\"" Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.428103 5016 reconciler_common.go:293] "Volume detached for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/27f9f1e0-94f0-4652-a42a-26cfb348c583-tobiko-public-key\") on node \"crc\" DevicePath \"\"" Oct 11 09:55:28 crc 
kubenswrapper[5016]: I1011 09:55:28.453775 5016 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.530004 5016 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.538078 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s01-sanity" event={"ID":"27f9f1e0-94f0-4652-a42a-26cfb348c583","Type":"ContainerDied","Data":"f0be08253012b50ed5c3226d3cdc7c7ad2db461596000942dce34ab684695a20"} Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.538251 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0be08253012b50ed5c3226d3cdc7c7ad2db461596000942dce34ab684695a20" Oct 11 09:55:28 crc kubenswrapper[5016]: I1011 09:55:28.538155 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s01-sanity" Oct 11 09:55:30 crc kubenswrapper[5016]: I1011 09:55:30.716166 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27f9f1e0-94f0-4652-a42a-26cfb348c583-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "27f9f1e0-94f0-4652-a42a-26cfb348c583" (UID: "27f9f1e0-94f0-4652-a42a-26cfb348c583"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:55:30 crc kubenswrapper[5016]: I1011 09:55:30.786757 5016 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/27f9f1e0-94f0-4652-a42a-26cfb348c583-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Oct 11 09:55:38 crc kubenswrapper[5016]: I1011 09:55:38.022647 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko"] Oct 11 09:55:38 crc kubenswrapper[5016]: E1011 09:55:38.024074 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27f9f1e0-94f0-4652-a42a-26cfb348c583" containerName="tobiko-tests-tobiko" Oct 11 09:55:38 crc kubenswrapper[5016]: I1011 09:55:38.024092 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="27f9f1e0-94f0-4652-a42a-26cfb348c583" containerName="tobiko-tests-tobiko" Oct 11 09:55:38 crc kubenswrapper[5016]: E1011 09:55:38.024119 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ccd81c-092f-4112-9237-38c3133c0073" containerName="extract-content" Oct 11 09:55:38 crc kubenswrapper[5016]: I1011 09:55:38.024126 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ccd81c-092f-4112-9237-38c3133c0073" containerName="extract-content" Oct 11 09:55:38 crc kubenswrapper[5016]: E1011 09:55:38.024163 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ccd81c-092f-4112-9237-38c3133c0073" containerName="extract-utilities" Oct 11 09:55:38 crc kubenswrapper[5016]: I1011 09:55:38.024170 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ccd81c-092f-4112-9237-38c3133c0073" containerName="extract-utilities" Oct 11 09:55:38 crc kubenswrapper[5016]: E1011 09:55:38.024189 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ccd81c-092f-4112-9237-38c3133c0073" 
containerName="registry-server" Oct 11 09:55:38 crc kubenswrapper[5016]: I1011 09:55:38.024196 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ccd81c-092f-4112-9237-38c3133c0073" containerName="registry-server" Oct 11 09:55:38 crc kubenswrapper[5016]: I1011 09:55:38.024399 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="27f9f1e0-94f0-4652-a42a-26cfb348c583" containerName="tobiko-tests-tobiko" Oct 11 09:55:38 crc kubenswrapper[5016]: I1011 09:55:38.024417 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="51ccd81c-092f-4112-9237-38c3133c0073" containerName="registry-server" Oct 11 09:55:38 crc kubenswrapper[5016]: I1011 09:55:38.025256 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Oct 11 09:55:38 crc kubenswrapper[5016]: I1011 09:55:38.036497 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko"] Oct 11 09:55:38 crc kubenswrapper[5016]: I1011 09:55:38.191598 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"c689123d-8a9f-44d5-836c-5ed1933c39de\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Oct 11 09:55:38 crc kubenswrapper[5016]: I1011 09:55:38.192236 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5grj\" (UniqueName: \"kubernetes.io/projected/c689123d-8a9f-44d5-836c-5ed1933c39de-kube-api-access-w5grj\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"c689123d-8a9f-44d5-836c-5ed1933c39de\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Oct 11 09:55:38 crc kubenswrapper[5016]: I1011 09:55:38.295832 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"c689123d-8a9f-44d5-836c-5ed1933c39de\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Oct 11 09:55:38 crc kubenswrapper[5016]: I1011 09:55:38.295890 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5grj\" (UniqueName: \"kubernetes.io/projected/c689123d-8a9f-44d5-836c-5ed1933c39de-kube-api-access-w5grj\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"c689123d-8a9f-44d5-836c-5ed1933c39de\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Oct 11 09:55:38 crc kubenswrapper[5016]: I1011 09:55:38.296967 5016 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"c689123d-8a9f-44d5-836c-5ed1933c39de\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Oct 11 09:55:38 crc kubenswrapper[5016]: I1011 09:55:38.324342 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5grj\" (UniqueName: \"kubernetes.io/projected/c689123d-8a9f-44d5-836c-5ed1933c39de-kube-api-access-w5grj\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: 
\"c689123d-8a9f-44d5-836c-5ed1933c39de\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Oct 11 09:55:38 crc kubenswrapper[5016]: I1011 09:55:38.329033 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"c689123d-8a9f-44d5-836c-5ed1933c39de\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Oct 11 09:55:38 crc kubenswrapper[5016]: I1011 09:55:38.361426 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Oct 11 09:55:38 crc kubenswrapper[5016]: I1011 09:55:38.871014 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko"] Oct 11 09:55:39 crc kubenswrapper[5016]: I1011 09:55:39.711460 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" event={"ID":"c689123d-8a9f-44d5-836c-5ed1933c39de","Type":"ContainerStarted","Data":"c3fa1b2a2e89359230d4c1d4c74126140a278bab85ad66afea447bc92bedd7fb"} Oct 11 09:55:39 crc kubenswrapper[5016]: I1011 09:55:39.711521 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" event={"ID":"c689123d-8a9f-44d5-836c-5ed1933c39de","Type":"ContainerStarted","Data":"6b3ceeadb33d3cc78fd6a1e9d6643cc271bdcb6aa7b42ba309fc3c59c5c1dc59"} Oct 11 09:55:39 crc kubenswrapper[5016]: I1011 09:55:39.740002 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" podStartSLOduration=2.245185461 podStartE2EDuration="2.739972411s" podCreationTimestamp="2025-10-11 09:55:37 +0000 UTC" firstStartedPulling="2025-10-11 09:55:38.875396527 +0000 UTC m=+8126.775852473" lastFinishedPulling="2025-10-11 09:55:39.370183477 +0000 UTC m=+8127.270639423" observedRunningTime="2025-10-11 09:55:39.728380693 +0000 UTC m=+8127.628836679" watchObservedRunningTime="2025-10-11 09:55:39.739972411 +0000 UTC m=+8127.640428397" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.728918 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ansibletest-ansibletest"] Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.731449 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.736635 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.740909 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.760236 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ansibletest-ansibletest"] Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.839236 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.839323 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-openstack-config\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.839366 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-compute-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.839399 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"workload-ssh-secret\" (UniqueName: \"kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-workload-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.839966 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-openstack-config-secret\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.840237 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-test-operator-ephemeral-temporary\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.840413 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4c5x\" (UniqueName: \"kubernetes.io/projected/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-kube-api-access-b4c5x\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.840495 5016 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-ca-certs\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.840531 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-ceph\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.840845 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-test-operator-ephemeral-workdir\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.943697 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-test-operator-ephemeral-workdir\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.943844 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.943888 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-openstack-config\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.943923 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-compute-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.943958 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"workload-ssh-secret\" (UniqueName: \"kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-workload-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.944415 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-test-operator-ephemeral-workdir\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.945068 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-openstack-config\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.945360 5016 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.948620 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-openstack-config-secret\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.948711 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-test-operator-ephemeral-temporary\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.948764 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4c5x\" (UniqueName: \"kubernetes.io/projected/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-kube-api-access-b4c5x\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.948806 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-ca-certs\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.948833 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-ceph\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.950040 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-test-operator-ephemeral-temporary\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.956076 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-compute-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.957278 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-ca-certs\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.957980 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"workload-ssh-secret\" (UniqueName: \"kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-workload-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.963311 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-openstack-config-secret\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.980903 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-ceph\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:57 crc kubenswrapper[5016]: I1011 09:55:57.982023 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4c5x\" (UniqueName: \"kubernetes.io/projected/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-kube-api-access-b4c5x\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:58 crc kubenswrapper[5016]: I1011 09:55:58.016837 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ansibletest-ansibletest\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") " pod="openstack/ansibletest-ansibletest" Oct 11 09:55:58 crc kubenswrapper[5016]: I1011 09:55:58.076571 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ansibletest-ansibletest" Oct 11 09:55:58 crc kubenswrapper[5016]: I1011 09:55:58.586560 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ansibletest-ansibletest"] Oct 11 09:55:58 crc kubenswrapper[5016]: I1011 09:55:58.993442 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ansibletest-ansibletest" event={"ID":"fc8a7f44-d84f-46e2-beb6-b94378f84bf2","Type":"ContainerStarted","Data":"059672b14e560bc33012501076d0e8121de8ebc1ee8968dee2a1edc6be380e79"} Oct 11 09:56:17 crc kubenswrapper[5016]: E1011 09:56:17.686037 5016 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ansible-tests:current-podified" Oct 11 09:56:17 crc kubenswrapper[5016]: E1011 09:56:17.687905 5016 kuberuntime_manager.go:1274] "Unhandled Error" err=< Oct 11 09:56:17 crc kubenswrapper[5016]: container &Container{Name:ansibletest-ansibletest,Image:quay.io/podified-antelope-centos9/openstack-ansible-tests:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:POD_ANSIBLE_EXTRA_VARS,Value:-e manual_run=false,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_FILE_EXTRA_VARS,Value:--- Oct 11 09:56:17 crc kubenswrapper[5016]: foo: bar Oct 11 09:56:17 crc kubenswrapper[5016]: ,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_GIT_BRANCH,Value:,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_GIT_REPO,Value:https://github.com/ansible/test-playbooks,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_INVENTORY,Value:localhost ansible_connection=local ansible_python_interpreter=python3 Oct 11 09:56:17 crc kubenswrapper[5016]: ,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_PLAYBOOK,Value:./debug.yml,ValueFrom:nil,},EnvVar{Name:POD_DEBUG,Value:false,ValueFrom:nil,},EnvVar{Name:POD_INSTALL_COLLECTIONS,Value:,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{4 0} {} 4 DecimalSI},memory: {{4294967296 0} {} 4Gi BinarySI},},Requests:ResourceList{cpu: {{2 0} {} 2 DecimalSI},memory: {{2147483648 0} {} 2Gi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/ansible,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/AnsibleTests/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/ansible/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/var/lib/ansible/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ca-bundle.trust.crt,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:workload-ssh-secret,ReadOnly:true,MountPath:/var/lib/ansible/test_keypair.key,SubPath:ssh-privatekey,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:compute-ssh-secret,ReadOnly:true,MountPath:/var/lib/ansible/.ssh/compute_id,SubPath:ssh-privatekey,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceph,ReadOnly:true,MountPath:/etc/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b4c5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN NET_RAW],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*227,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*227,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ansibletest-ansibletest_openstack(fc8a7f44-d84f-46e2-beb6-b94378f84bf2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Oct 11 09:56:17 crc kubenswrapper[5016]: > logger="UnhandledError" Oct 11 09:56:17 crc kubenswrapper[5016]: E1011 09:56:17.689733 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ansibletest-ansibletest\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ansibletest-ansibletest" podUID="fc8a7f44-d84f-46e2-beb6-b94378f84bf2" Oct 11 09:56:18 crc kubenswrapper[5016]: E1011 09:56:18.222697 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ansibletest-ansibletest\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ansible-tests:current-podified\\\"\"" pod="openstack/ansibletest-ansibletest" podUID="fc8a7f44-d84f-46e2-beb6-b94378f84bf2" Oct 11 09:56:33 crc kubenswrapper[5016]: I1011 09:56:33.074380 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f7whr"] Oct 11 09:56:33 crc kubenswrapper[5016]: I1011 09:56:33.078250 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f7whr" Oct 11 09:56:33 crc kubenswrapper[5016]: I1011 09:56:33.112117 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f7whr"] Oct 11 09:56:33 crc kubenswrapper[5016]: I1011 09:56:33.160779 5016 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 09:56:33 crc kubenswrapper[5016]: I1011 09:56:33.182105 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-598rw\" (UniqueName: \"kubernetes.io/projected/1727e517-c1ea-49d0-808a-af19cd82e896-kube-api-access-598rw\") pod \"redhat-marketplace-f7whr\" (UID: \"1727e517-c1ea-49d0-808a-af19cd82e896\") " pod="openshift-marketplace/redhat-marketplace-f7whr" Oct 11 09:56:33 crc kubenswrapper[5016]: I1011 09:56:33.182770 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1727e517-c1ea-49d0-808a-af19cd82e896-utilities\") pod \"redhat-marketplace-f7whr\" (UID: \"1727e517-c1ea-49d0-808a-af19cd82e896\") " pod="openshift-marketplace/redhat-marketplace-f7whr" Oct 11 09:56:33 crc kubenswrapper[5016]: I1011 09:56:33.182964 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1727e517-c1ea-49d0-808a-af19cd82e896-catalog-content\") pod \"redhat-marketplace-f7whr\" (UID: \"1727e517-c1ea-49d0-808a-af19cd82e896\") " pod="openshift-marketplace/redhat-marketplace-f7whr" Oct 11 09:56:33 crc kubenswrapper[5016]: I1011 09:56:33.286611 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1727e517-c1ea-49d0-808a-af19cd82e896-catalog-content\") pod \"redhat-marketplace-f7whr\" (UID: \"1727e517-c1ea-49d0-808a-af19cd82e896\") " pod="openshift-marketplace/redhat-marketplace-f7whr" Oct 11 09:56:33 crc kubenswrapper[5016]: I1011 09:56:33.286839 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1727e517-c1ea-49d0-808a-af19cd82e896-catalog-content\") pod \"redhat-marketplace-f7whr\" (UID: \"1727e517-c1ea-49d0-808a-af19cd82e896\") " pod="openshift-marketplace/redhat-marketplace-f7whr" Oct 11 09:56:33 crc kubenswrapper[5016]: I1011 09:56:33.290352 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-598rw\" (UniqueName: \"kubernetes.io/projected/1727e517-c1ea-49d0-808a-af19cd82e896-kube-api-access-598rw\") pod \"redhat-marketplace-f7whr\" (UID: \"1727e517-c1ea-49d0-808a-af19cd82e896\") " pod="openshift-marketplace/redhat-marketplace-f7whr" Oct 11 09:56:33 crc kubenswrapper[5016]: I1011 09:56:33.290478 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1727e517-c1ea-49d0-808a-af19cd82e896-utilities\") pod \"redhat-marketplace-f7whr\" (UID: \"1727e517-c1ea-49d0-808a-af19cd82e896\") " pod="openshift-marketplace/redhat-marketplace-f7whr" Oct 11 09:56:33 crc kubenswrapper[5016]: I1011 09:56:33.290996 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1727e517-c1ea-49d0-808a-af19cd82e896-utilities\") pod \"redhat-marketplace-f7whr\" (UID: \"1727e517-c1ea-49d0-808a-af19cd82e896\") " pod="openshift-marketplace/redhat-marketplace-f7whr" Oct 11 09:56:33 crc kubenswrapper[5016]: I1011 09:56:33.321531 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-598rw\" (UniqueName: \"kubernetes.io/projected/1727e517-c1ea-49d0-808a-af19cd82e896-kube-api-access-598rw\") pod \"redhat-marketplace-f7whr\" (UID: \"1727e517-c1ea-49d0-808a-af19cd82e896\") " pod="openshift-marketplace/redhat-marketplace-f7whr" Oct 11 09:56:33 crc kubenswrapper[5016]: I1011 09:56:33.412584 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f7whr" Oct 11 09:56:33 crc kubenswrapper[5016]: I1011 09:56:33.974793 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f7whr"] Oct 11 09:56:34 crc kubenswrapper[5016]: I1011 09:56:34.420974 5016 generic.go:334] "Generic (PLEG): container finished" podID="1727e517-c1ea-49d0-808a-af19cd82e896" containerID="14f272b31f15ed63c87e71b2c8d88e2c9ad53cd9b53d370aea36b54d93e8c7e9" exitCode=0 Oct 11 09:56:34 crc kubenswrapper[5016]: I1011 09:56:34.421154 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f7whr" event={"ID":"1727e517-c1ea-49d0-808a-af19cd82e896","Type":"ContainerDied","Data":"14f272b31f15ed63c87e71b2c8d88e2c9ad53cd9b53d370aea36b54d93e8c7e9"} Oct 11 09:56:34 crc kubenswrapper[5016]: I1011 09:56:34.421472 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f7whr" event={"ID":"1727e517-c1ea-49d0-808a-af19cd82e896","Type":"ContainerStarted","Data":"39c3cb1d5e7c41dab386b31832ce949968efb67979a4ff01cbb15fe03f9fbaf9"} Oct 11 09:56:35 crc kubenswrapper[5016]: I1011 09:56:35.437703 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ansibletest-ansibletest" event={"ID":"fc8a7f44-d84f-46e2-beb6-b94378f84bf2","Type":"ContainerStarted","Data":"6aa69fd9c4f0e96bc5e563db8bdec28144f810bd34db623e9e6ad04c328b1391"} Oct 11 09:56:35 crc kubenswrapper[5016]: I1011 09:56:35.460628 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ansibletest-ansibletest" podStartSLOduration=4.270927863 podStartE2EDuration="39.460606989s" podCreationTimestamp="2025-10-11 09:55:56 +0000 UTC" firstStartedPulling="2025-10-11 09:55:58.600412951 +0000 UTC m=+8146.500868927" lastFinishedPulling="2025-10-11 09:56:33.790092107 +0000 UTC m=+8181.690548053" observedRunningTime="2025-10-11 09:56:35.455986436 +0000 UTC m=+8183.356442382" watchObservedRunningTime="2025-10-11 09:56:35.460606989 +0000 UTC m=+8183.361062935" Oct 11 09:56:36 crc kubenswrapper[5016]: I1011 09:56:36.271506 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dc7qm"] Oct 11 09:56:36 crc kubenswrapper[5016]: I1011 09:56:36.276331 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dc7qm" Oct 11 09:56:36 crc kubenswrapper[5016]: I1011 09:56:36.305295 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dc7qm"] Oct 11 09:56:36 crc kubenswrapper[5016]: I1011 09:56:36.385307 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/027b196f-eb62-4af5-8fe3-d16faf3e04f5-utilities\") pod \"certified-operators-dc7qm\" (UID: \"027b196f-eb62-4af5-8fe3-d16faf3e04f5\") " pod="openshift-marketplace/certified-operators-dc7qm" Oct 11 09:56:36 crc kubenswrapper[5016]: I1011 09:56:36.385427 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/027b196f-eb62-4af5-8fe3-d16faf3e04f5-catalog-content\") pod \"certified-operators-dc7qm\" (UID: \"027b196f-eb62-4af5-8fe3-d16faf3e04f5\") " pod="openshift-marketplace/certified-operators-dc7qm" Oct 11 09:56:36 crc kubenswrapper[5016]: I1011 09:56:36.385473 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlln5\" (UniqueName: \"kubernetes.io/projected/027b196f-eb62-4af5-8fe3-d16faf3e04f5-kube-api-access-xlln5\") pod \"certified-operators-dc7qm\" (UID: \"027b196f-eb62-4af5-8fe3-d16faf3e04f5\") " pod="openshift-marketplace/certified-operators-dc7qm" Oct 11 09:56:36 crc kubenswrapper[5016]: I1011 09:56:36.459277 5016 generic.go:334] "Generic (PLEG): container finished" podID="1727e517-c1ea-49d0-808a-af19cd82e896" containerID="4c8740b7ee0faf7b76099cf46c8e39fbdc09cf82725e37f44fd86889a9ebae86" exitCode=0 Oct 11 09:56:36 crc kubenswrapper[5016]: I1011 09:56:36.459360 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f7whr" event={"ID":"1727e517-c1ea-49d0-808a-af19cd82e896","Type":"ContainerDied","Data":"4c8740b7ee0faf7b76099cf46c8e39fbdc09cf82725e37f44fd86889a9ebae86"} Oct 11 09:56:36 crc kubenswrapper[5016]: I1011 09:56:36.462930 5016 generic.go:334] "Generic (PLEG): container finished" podID="fc8a7f44-d84f-46e2-beb6-b94378f84bf2" containerID="6aa69fd9c4f0e96bc5e563db8bdec28144f810bd34db623e9e6ad04c328b1391" exitCode=0 Oct 11 09:56:36 crc kubenswrapper[5016]: I1011 09:56:36.463150 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ansibletest-ansibletest" event={"ID":"fc8a7f44-d84f-46e2-beb6-b94378f84bf2","Type":"ContainerDied","Data":"6aa69fd9c4f0e96bc5e563db8bdec28144f810bd34db623e9e6ad04c328b1391"} Oct 11 09:56:36 crc kubenswrapper[5016]: I1011 09:56:36.506105 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlln5\" (UniqueName: \"kubernetes.io/projected/027b196f-eb62-4af5-8fe3-d16faf3e04f5-kube-api-access-xlln5\") pod \"certified-operators-dc7qm\" (UID: \"027b196f-eb62-4af5-8fe3-d16faf3e04f5\") " pod="openshift-marketplace/certified-operators-dc7qm" Oct 11 09:56:36 crc kubenswrapper[5016]: I1011 09:56:36.507160 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/027b196f-eb62-4af5-8fe3-d16faf3e04f5-utilities\") pod \"certified-operators-dc7qm\" (UID: \"027b196f-eb62-4af5-8fe3-d16faf3e04f5\") " pod="openshift-marketplace/certified-operators-dc7qm" Oct 11 09:56:36 crc kubenswrapper[5016]: I1011 09:56:36.507567 5016 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/027b196f-eb62-4af5-8fe3-d16faf3e04f5-catalog-content\") pod \"certified-operators-dc7qm\" (UID: \"027b196f-eb62-4af5-8fe3-d16faf3e04f5\") " pod="openshift-marketplace/certified-operators-dc7qm" Oct 11 09:56:36 crc kubenswrapper[5016]: I1011 09:56:36.508499 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/027b196f-eb62-4af5-8fe3-d16faf3e04f5-catalog-content\") pod \"certified-operators-dc7qm\" (UID: \"027b196f-eb62-4af5-8fe3-d16faf3e04f5\") " pod="openshift-marketplace/certified-operators-dc7qm" Oct 11 09:56:36 crc kubenswrapper[5016]: I1011 09:56:36.508992 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/027b196f-eb62-4af5-8fe3-d16faf3e04f5-utilities\") pod \"certified-operators-dc7qm\" (UID: \"027b196f-eb62-4af5-8fe3-d16faf3e04f5\") " pod="openshift-marketplace/certified-operators-dc7qm" Oct 11 09:56:36 crc kubenswrapper[5016]: I1011 09:56:36.550012 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlln5\" (UniqueName: \"kubernetes.io/projected/027b196f-eb62-4af5-8fe3-d16faf3e04f5-kube-api-access-xlln5\") pod \"certified-operators-dc7qm\" (UID: \"027b196f-eb62-4af5-8fe3-d16faf3e04f5\") " pod="openshift-marketplace/certified-operators-dc7qm" Oct 11 09:56:36 crc kubenswrapper[5016]: I1011 09:56:36.617725 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dc7qm" Oct 11 09:56:37 crc kubenswrapper[5016]: I1011 09:56:37.123211 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 09:56:37 crc kubenswrapper[5016]: I1011 09:56:37.123574 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 09:56:37 crc kubenswrapper[5016]: I1011 09:56:37.255509 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dc7qm"] Oct 11 09:56:37 crc kubenswrapper[5016]: I1011 09:56:37.480318 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f7whr" event={"ID":"1727e517-c1ea-49d0-808a-af19cd82e896","Type":"ContainerStarted","Data":"1a8296e649b5d48d01b478b83852e053a98ee12b6c7e236aeb574f1bb70af498"} Oct 11 09:56:37 crc kubenswrapper[5016]: I1011 09:56:37.482836 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dc7qm" event={"ID":"027b196f-eb62-4af5-8fe3-d16faf3e04f5","Type":"ContainerStarted","Data":"429097eb122523cca9b989fd869317b0d73e77c7c847ff1344a4c87c893ae218"} Oct 11 09:56:37 crc kubenswrapper[5016]: I1011 09:56:37.482863 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dc7qm" event={"ID":"027b196f-eb62-4af5-8fe3-d16faf3e04f5","Type":"ContainerStarted","Data":"22a3b6bcf346e87435788e389072cff1972f5e6be4b2c458883ff4ac79ef7204"} 
Oct 11 09:56:37 crc kubenswrapper[5016]: I1011 09:56:37.255509 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dc7qm"]
Oct 11 09:56:37 crc kubenswrapper[5016]: I1011 09:56:37.480318 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f7whr" event={"ID":"1727e517-c1ea-49d0-808a-af19cd82e896","Type":"ContainerStarted","Data":"1a8296e649b5d48d01b478b83852e053a98ee12b6c7e236aeb574f1bb70af498"}
Oct 11 09:56:37 crc kubenswrapper[5016]: I1011 09:56:37.482836 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dc7qm" event={"ID":"027b196f-eb62-4af5-8fe3-d16faf3e04f5","Type":"ContainerStarted","Data":"429097eb122523cca9b989fd869317b0d73e77c7c847ff1344a4c87c893ae218"}
Oct 11 09:56:37 crc kubenswrapper[5016]: I1011 09:56:37.482863 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dc7qm" event={"ID":"027b196f-eb62-4af5-8fe3-d16faf3e04f5","Type":"ContainerStarted","Data":"22a3b6bcf346e87435788e389072cff1972f5e6be4b2c458883ff4ac79ef7204"}
Oct 11 09:56:37 crc kubenswrapper[5016]: I1011 09:56:37.507948 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f7whr" podStartSLOduration=2.035150579 podStartE2EDuration="4.5079216s" podCreationTimestamp="2025-10-11 09:56:33 +0000 UTC" firstStartedPulling="2025-10-11 09:56:34.42400422 +0000 UTC m=+8182.324460166" lastFinishedPulling="2025-10-11 09:56:36.896775241 +0000 UTC m=+8184.797231187" observedRunningTime="2025-10-11 09:56:37.507031806 +0000 UTC m=+8185.407487752" watchObservedRunningTime="2025-10-11 09:56:37.5079216 +0000 UTC m=+8185.408377546"
Oct 11 09:56:37 crc kubenswrapper[5016]: I1011 09:56:37.844535 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ansibletest-ansibletest"
Oct 11 09:56:37 crc kubenswrapper[5016]: I1011 09:56:37.952038 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-ceph\") pod \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") "
Oct 11 09:56:37 crc kubenswrapper[5016]: I1011 09:56:37.952179 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4c5x\" (UniqueName: \"kubernetes.io/projected/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-kube-api-access-b4c5x\") pod \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") "
Oct 11 09:56:37 crc kubenswrapper[5016]: I1011 09:56:37.952292 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"workload-ssh-secret\" (UniqueName: \"kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-workload-ssh-secret\") pod \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") "
Oct 11 09:56:37 crc kubenswrapper[5016]: I1011 09:56:37.952379 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-test-operator-ephemeral-temporary\") pod \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") "
Oct 11 09:56:37 crc kubenswrapper[5016]: I1011 09:56:37.952440 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-ca-certs\") pod \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") "
Oct 11 09:56:37 crc kubenswrapper[5016]: I1011 09:56:37.952500 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") "
Oct 11 09:56:37 crc kubenswrapper[5016]: I1011 09:56:37.952567 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-openstack-config-secret\") pod \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") "
Oct 11 09:56:37 crc kubenswrapper[5016]: I1011 09:56:37.952621 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-openstack-config\") pod \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") "
Oct 11 09:56:37 crc kubenswrapper[5016]: I1011 09:56:37.952706 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-compute-ssh-secret\") pod \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") "
Oct 11 09:56:37 crc kubenswrapper[5016]: I1011 09:56:37.952910 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-test-operator-ephemeral-workdir\") pod \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\" (UID: \"fc8a7f44-d84f-46e2-beb6-b94378f84bf2\") "
Oct 11 09:56:37 crc kubenswrapper[5016]: I1011 09:56:37.953555 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "fc8a7f44-d84f-46e2-beb6-b94378f84bf2" (UID: "fc8a7f44-d84f-46e2-beb6-b94378f84bf2"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 09:56:37 crc kubenswrapper[5016]: I1011 09:56:37.970005 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-kube-api-access-b4c5x" (OuterVolumeSpecName: "kube-api-access-b4c5x") pod "fc8a7f44-d84f-46e2-beb6-b94378f84bf2" (UID: "fc8a7f44-d84f-46e2-beb6-b94378f84bf2"). InnerVolumeSpecName "kube-api-access-b4c5x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 09:56:37 crc kubenswrapper[5016]: I1011 09:56:37.972806 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-ceph" (OuterVolumeSpecName: "ceph") pod "fc8a7f44-d84f-46e2-beb6-b94378f84bf2" (UID: "fc8a7f44-d84f-46e2-beb6-b94378f84bf2"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 09:56:37 crc kubenswrapper[5016]: I1011 09:56:37.976735 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "fc8a7f44-d84f-46e2-beb6-b94378f84bf2" (UID: "fc8a7f44-d84f-46e2-beb6-b94378f84bf2"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 09:56:37 crc kubenswrapper[5016]: I1011 09:56:37.977190 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "fc8a7f44-d84f-46e2-beb6-b94378f84bf2" (UID: "fc8a7f44-d84f-46e2-beb6-b94378f84bf2"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Oct 11 09:56:37 crc kubenswrapper[5016]: I1011 09:56:37.991155 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "fc8a7f44-d84f-46e2-beb6-b94378f84bf2" (UID: "fc8a7f44-d84f-46e2-beb6-b94378f84bf2"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 09:56:37 crc kubenswrapper[5016]: I1011 09:56:37.999164 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-compute-ssh-secret" (OuterVolumeSpecName: "compute-ssh-secret") pod "fc8a7f44-d84f-46e2-beb6-b94378f84bf2" (UID: "fc8a7f44-d84f-46e2-beb6-b94378f84bf2"). InnerVolumeSpecName "compute-ssh-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 09:56:38 crc kubenswrapper[5016]: I1011 09:56:38.019327 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "fc8a7f44-d84f-46e2-beb6-b94378f84bf2" (UID: "fc8a7f44-d84f-46e2-beb6-b94378f84bf2"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 11 09:56:38 crc kubenswrapper[5016]: I1011 09:56:38.030675 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "fc8a7f44-d84f-46e2-beb6-b94378f84bf2" (UID: "fc8a7f44-d84f-46e2-beb6-b94378f84bf2"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 09:56:38 crc kubenswrapper[5016]: I1011 09:56:38.034410 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-workload-ssh-secret" (OuterVolumeSpecName: "workload-ssh-secret") pod "fc8a7f44-d84f-46e2-beb6-b94378f84bf2" (UID: "fc8a7f44-d84f-46e2-beb6-b94378f84bf2"). InnerVolumeSpecName "workload-ssh-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 11 09:56:38 crc kubenswrapper[5016]: I1011 09:56:38.055723 5016 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Oct 11 09:56:38 crc kubenswrapper[5016]: I1011 09:56:38.055776 5016 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-ceph\") on node \"crc\" DevicePath \"\""
Oct 11 09:56:38 crc kubenswrapper[5016]: I1011 09:56:38.055792 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4c5x\" (UniqueName: \"kubernetes.io/projected/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-kube-api-access-b4c5x\") on node \"crc\" DevicePath \"\""
Oct 11 09:56:38 crc kubenswrapper[5016]: I1011 09:56:38.055808 5016 reconciler_common.go:293] "Volume detached for volume \"workload-ssh-secret\" (UniqueName: \"kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-workload-ssh-secret\") on node \"crc\" DevicePath \"\""
Oct 11 09:56:38 crc kubenswrapper[5016]: I1011 09:56:38.055822 5016 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Oct 11 09:56:38 crc kubenswrapper[5016]: I1011 09:56:38.055833 5016 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-ca-certs\") on node \"crc\" DevicePath \"\""
Oct 11 09:56:38 crc kubenswrapper[5016]: I1011 09:56:38.055881 5016 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" "
Oct 11 09:56:38 crc kubenswrapper[5016]: I1011 09:56:38.055896 5016 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Oct 11 09:56:38 crc kubenswrapper[5016]: I1011 09:56:38.055911 5016 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-openstack-config\") on node \"crc\" DevicePath \"\""
Oct 11 09:56:38 crc kubenswrapper[5016]: I1011 09:56:38.055928 5016 reconciler_common.go:293] "Volume detached for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/fc8a7f44-d84f-46e2-beb6-b94378f84bf2-compute-ssh-secret\") on node \"crc\" DevicePath \"\""
Oct 11 09:56:38 crc kubenswrapper[5016]: I1011 09:56:38.079243 5016 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc"
Oct 11 09:56:38 crc kubenswrapper[5016]: I1011 09:56:38.159473 5016 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Oct 11 09:56:38 crc kubenswrapper[5016]: I1011 09:56:38.494692 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ansibletest-ansibletest"
Oct 11 09:56:38 crc kubenswrapper[5016]: I1011 09:56:38.494789 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ansibletest-ansibletest" event={"ID":"fc8a7f44-d84f-46e2-beb6-b94378f84bf2","Type":"ContainerDied","Data":"059672b14e560bc33012501076d0e8121de8ebc1ee8968dee2a1edc6be380e79"}
Oct 11 09:56:38 crc kubenswrapper[5016]: I1011 09:56:38.495412 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="059672b14e560bc33012501076d0e8121de8ebc1ee8968dee2a1edc6be380e79"
Oct 11 09:56:38 crc kubenswrapper[5016]: I1011 09:56:38.499424 5016 generic.go:334] "Generic (PLEG): container finished" podID="027b196f-eb62-4af5-8fe3-d16faf3e04f5" containerID="429097eb122523cca9b989fd869317b0d73e77c7c847ff1344a4c87c893ae218" exitCode=0
Oct 11 09:56:38 crc kubenswrapper[5016]: I1011 09:56:38.499481 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dc7qm" event={"ID":"027b196f-eb62-4af5-8fe3-d16faf3e04f5","Type":"ContainerDied","Data":"429097eb122523cca9b989fd869317b0d73e77c7c847ff1344a4c87c893ae218"}
Oct 11 09:56:38 crc kubenswrapper[5016]: I1011 09:56:38.499509 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dc7qm" event={"ID":"027b196f-eb62-4af5-8fe3-d16faf3e04f5","Type":"ContainerStarted","Data":"0f2f3ae7a2e5dfd4d2cba2f560149b138ac9ba1d1cd7d9382b95fdeacdc8eeb6"}
Oct 11 09:56:39 crc kubenswrapper[5016]: I1011 09:56:39.513694 5016 generic.go:334] "Generic (PLEG): container finished" podID="027b196f-eb62-4af5-8fe3-d16faf3e04f5" containerID="0f2f3ae7a2e5dfd4d2cba2f560149b138ac9ba1d1cd7d9382b95fdeacdc8eeb6" exitCode=0
Oct 11 09:56:39 crc kubenswrapper[5016]: I1011 09:56:39.513729 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dc7qm" event={"ID":"027b196f-eb62-4af5-8fe3-d16faf3e04f5","Type":"ContainerDied","Data":"0f2f3ae7a2e5dfd4d2cba2f560149b138ac9ba1d1cd7d9382b95fdeacdc8eeb6"}
Oct 11 09:56:40 crc kubenswrapper[5016]: I1011 09:56:40.532300 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dc7qm" event={"ID":"027b196f-eb62-4af5-8fe3-d16faf3e04f5","Type":"ContainerStarted","Data":"4174fdf2376111e56de871fd2e88b3ae9396aee007a1ce5be9cd862c274e0ca4"}
Oct 11 09:56:40 crc kubenswrapper[5016]: I1011 09:56:40.564729 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dc7qm" podStartSLOduration=1.995303323 podStartE2EDuration="4.564703219s" podCreationTimestamp="2025-10-11 09:56:36 +0000 UTC" firstStartedPulling="2025-10-11 09:56:37.487762304 +0000 UTC m=+8185.388218240" lastFinishedPulling="2025-10-11 09:56:40.05716216 +0000 UTC m=+8187.957618136" observedRunningTime="2025-10-11 09:56:40.558514245 +0000 UTC m=+8188.458970191" watchObservedRunningTime="2025-10-11 09:56:40.564703219 +0000 UTC m=+8188.465159165"
Oct 11 09:56:42 crc kubenswrapper[5016]: I1011 09:56:42.704029 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest"]
Oct 11 09:56:42 crc kubenswrapper[5016]: E1011 09:56:42.705017 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc8a7f44-d84f-46e2-beb6-b94378f84bf2" containerName="ansibletest-ansibletest"
Oct 11 09:56:42 crc kubenswrapper[5016]: I1011 09:56:42.705031 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc8a7f44-d84f-46e2-beb6-b94378f84bf2" containerName="ansibletest-ansibletest"
Oct 11 09:56:42 crc kubenswrapper[5016]: I1011 09:56:42.705252 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc8a7f44-d84f-46e2-beb6-b94378f84bf2" containerName="ansibletest-ansibletest"
Oct 11 09:56:42 crc kubenswrapper[5016]: I1011 09:56:42.706011 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest"
Oct 11 09:56:42 crc kubenswrapper[5016]: I1011 09:56:42.736495 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest"]
Oct 11 09:56:42 crc kubenswrapper[5016]: I1011 09:56:42.805137 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"916a80a7-4fe0-4fd5-b04e-2f064dd291b3\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest"
Oct 11 09:56:42 crc kubenswrapper[5016]: I1011 09:56:42.805323 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv4vb\" (UniqueName: \"kubernetes.io/projected/916a80a7-4fe0-4fd5-b04e-2f064dd291b3-kube-api-access-rv4vb\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"916a80a7-4fe0-4fd5-b04e-2f064dd291b3\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest"
Oct 11 09:56:42 crc kubenswrapper[5016]: I1011 09:56:42.908510 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv4vb\" (UniqueName: \"kubernetes.io/projected/916a80a7-4fe0-4fd5-b04e-2f064dd291b3-kube-api-access-rv4vb\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"916a80a7-4fe0-4fd5-b04e-2f064dd291b3\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest"
Oct 11 09:56:42 crc kubenswrapper[5016]: I1011 09:56:42.908863 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"916a80a7-4fe0-4fd5-b04e-2f064dd291b3\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest"
Oct 11 09:56:42 crc kubenswrapper[5016]: I1011 09:56:42.909587 5016 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"916a80a7-4fe0-4fd5-b04e-2f064dd291b3\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest"
Oct 11 09:56:42 crc kubenswrapper[5016]: I1011 09:56:42.930781 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv4vb\" (UniqueName: \"kubernetes.io/projected/916a80a7-4fe0-4fd5-b04e-2f064dd291b3-kube-api-access-rv4vb\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"916a80a7-4fe0-4fd5-b04e-2f064dd291b3\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest"
Oct 11 09:56:42 crc kubenswrapper[5016]: I1011 09:56:42.962997 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"916a80a7-4fe0-4fd5-b04e-2f064dd291b3\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest"
Oct 11 09:56:43 crc kubenswrapper[5016]: I1011 09:56:43.041191 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest"
Oct 11 09:56:43 crc kubenswrapper[5016]: I1011 09:56:43.364934 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest"]
Oct 11 09:56:43 crc kubenswrapper[5016]: I1011 09:56:43.413549 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f7whr"
Oct 11 09:56:43 crc kubenswrapper[5016]: I1011 09:56:43.413629 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f7whr"
Oct 11 09:56:43 crc kubenswrapper[5016]: I1011 09:56:43.489288 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f7whr"
Oct 11 09:56:43 crc kubenswrapper[5016]: I1011 09:56:43.575253 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" event={"ID":"916a80a7-4fe0-4fd5-b04e-2f064dd291b3","Type":"ContainerStarted","Data":"9db8f68403dfd088be7f74148436f16b1f1067b088164bca399229b65f0a0a27"}
Oct 11 09:56:43 crc kubenswrapper[5016]: I1011 09:56:43.639916 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f7whr"
Oct 11 09:56:44 crc kubenswrapper[5016]: I1011 09:56:44.591605 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" event={"ID":"916a80a7-4fe0-4fd5-b04e-2f064dd291b3","Type":"ContainerStarted","Data":"747efc0984dffc9d3a7a4bd17041d3b9c35c928efca017a930c7c800edf65b3c"}
Oct 11 09:56:44 crc kubenswrapper[5016]: I1011 09:56:44.621751 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" podStartSLOduration=2.102383851 podStartE2EDuration="2.621717223s" podCreationTimestamp="2025-10-11 09:56:42 +0000 UTC" firstStartedPulling="2025-10-11 09:56:43.382539667 +0000 UTC m=+8191.282995613" lastFinishedPulling="2025-10-11 09:56:43.901873039 +0000 UTC m=+8191.802328985" observedRunningTime="2025-10-11 09:56:44.611293746 +0000 UTC m=+8192.511749732" watchObservedRunningTime="2025-10-11 09:56:44.621717223 +0000 UTC m=+8192.522173219"
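The "Observed pod startup duration" entries encode a simple relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling − firstStartedPulling), which the startup SLI excludes. Checking the arithmetic against the test-operator-logs entry directly above:

// latency_check.go - verifying the startup-latency arithmetic from the
// entry above; all timestamps are copied from the log.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-10-11 09:56:42 +0000 UTC")           // podCreationTimestamp
	firstPull := mustParse("2025-10-11 09:56:43.382539667 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2025-10-11 09:56:43.901873039 +0000 UTC")  // lastFinishedPulling
	running := mustParse("2025-10-11 09:56:44.621717223 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)          // 2.621717223s = podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 2.102383851s = podStartSLOduration
	fmt.Println(e2e, slo)
}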
Oct 11 09:56:44 crc kubenswrapper[5016]: I1011 09:56:44.659327 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f7whr"]
Oct 11 09:56:45 crc kubenswrapper[5016]: I1011 09:56:45.603336 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f7whr" podUID="1727e517-c1ea-49d0-808a-af19cd82e896" containerName="registry-server" containerID="cri-o://1a8296e649b5d48d01b478b83852e053a98ee12b6c7e236aeb574f1bb70af498" gracePeriod=2
Oct 11 09:56:46 crc kubenswrapper[5016]: I1011 09:56:46.210129 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f7whr"
Oct 11 09:56:46 crc kubenswrapper[5016]: I1011 09:56:46.315508 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1727e517-c1ea-49d0-808a-af19cd82e896-catalog-content\") pod \"1727e517-c1ea-49d0-808a-af19cd82e896\" (UID: \"1727e517-c1ea-49d0-808a-af19cd82e896\") "
Oct 11 09:56:46 crc kubenswrapper[5016]: I1011 09:56:46.315613 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1727e517-c1ea-49d0-808a-af19cd82e896-utilities\") pod \"1727e517-c1ea-49d0-808a-af19cd82e896\" (UID: \"1727e517-c1ea-49d0-808a-af19cd82e896\") "
Oct 11 09:56:46 crc kubenswrapper[5016]: I1011 09:56:46.315741 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-598rw\" (UniqueName: \"kubernetes.io/projected/1727e517-c1ea-49d0-808a-af19cd82e896-kube-api-access-598rw\") pod \"1727e517-c1ea-49d0-808a-af19cd82e896\" (UID: \"1727e517-c1ea-49d0-808a-af19cd82e896\") "
Oct 11 09:56:46 crc kubenswrapper[5016]: I1011 09:56:46.322813 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1727e517-c1ea-49d0-808a-af19cd82e896-utilities" (OuterVolumeSpecName: "utilities") pod "1727e517-c1ea-49d0-808a-af19cd82e896" (UID: "1727e517-c1ea-49d0-808a-af19cd82e896"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 09:56:46 crc kubenswrapper[5016]: I1011 09:56:46.337099 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1727e517-c1ea-49d0-808a-af19cd82e896-kube-api-access-598rw" (OuterVolumeSpecName: "kube-api-access-598rw") pod "1727e517-c1ea-49d0-808a-af19cd82e896" (UID: "1727e517-c1ea-49d0-808a-af19cd82e896"). InnerVolumeSpecName "kube-api-access-598rw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 09:56:46 crc kubenswrapper[5016]: I1011 09:56:46.353583 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1727e517-c1ea-49d0-808a-af19cd82e896-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1727e517-c1ea-49d0-808a-af19cd82e896" (UID: "1727e517-c1ea-49d0-808a-af19cd82e896"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 09:56:46 crc kubenswrapper[5016]: I1011 09:56:46.419077 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1727e517-c1ea-49d0-808a-af19cd82e896-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 11 09:56:46 crc kubenswrapper[5016]: I1011 09:56:46.419123 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1727e517-c1ea-49d0-808a-af19cd82e896-utilities\") on node \"crc\" DevicePath \"\""
Oct 11 09:56:46 crc kubenswrapper[5016]: I1011 09:56:46.419142 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-598rw\" (UniqueName: \"kubernetes.io/projected/1727e517-c1ea-49d0-808a-af19cd82e896-kube-api-access-598rw\") on node \"crc\" DevicePath \"\""
Oct 11 09:56:46 crc kubenswrapper[5016]: I1011 09:56:46.618826 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dc7qm"
Oct 11 09:56:46 crc kubenswrapper[5016]: I1011 09:56:46.620113 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dc7qm"
Oct 11 09:56:46 crc kubenswrapper[5016]: I1011 09:56:46.621933 5016 generic.go:334] "Generic (PLEG): container finished" podID="1727e517-c1ea-49d0-808a-af19cd82e896" containerID="1a8296e649b5d48d01b478b83852e053a98ee12b6c7e236aeb574f1bb70af498" exitCode=0
Oct 11 09:56:46 crc kubenswrapper[5016]: I1011 09:56:46.621963 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f7whr" event={"ID":"1727e517-c1ea-49d0-808a-af19cd82e896","Type":"ContainerDied","Data":"1a8296e649b5d48d01b478b83852e053a98ee12b6c7e236aeb574f1bb70af498"}
Oct 11 09:56:46 crc kubenswrapper[5016]: I1011 09:56:46.621989 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f7whr" event={"ID":"1727e517-c1ea-49d0-808a-af19cd82e896","Type":"ContainerDied","Data":"39c3cb1d5e7c41dab386b31832ce949968efb67979a4ff01cbb15fe03f9fbaf9"}
Oct 11 09:56:46 crc kubenswrapper[5016]: I1011 09:56:46.622035 5016 scope.go:117] "RemoveContainer" containerID="1a8296e649b5d48d01b478b83852e053a98ee12b6c7e236aeb574f1bb70af498"
Oct 11 09:56:46 crc kubenswrapper[5016]: I1011 09:56:46.622077 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f7whr"
Oct 11 09:56:46 crc kubenswrapper[5016]: I1011 09:56:46.663765 5016 scope.go:117] "RemoveContainer" containerID="4c8740b7ee0faf7b76099cf46c8e39fbdc09cf82725e37f44fd86889a9ebae86"
Oct 11 09:56:46 crc kubenswrapper[5016]: I1011 09:56:46.681320 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f7whr"]
Oct 11 09:56:46 crc kubenswrapper[5016]: I1011 09:56:46.694849 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f7whr"]
Oct 11 09:56:46 crc kubenswrapper[5016]: I1011 09:56:46.706369 5016 scope.go:117] "RemoveContainer" containerID="14f272b31f15ed63c87e71b2c8d88e2c9ad53cd9b53d370aea36b54d93e8c7e9"
Oct 11 09:56:46 crc kubenswrapper[5016]: I1011 09:56:46.723345 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dc7qm"
Oct 11 09:56:46 crc kubenswrapper[5016]: I1011 09:56:46.765749 5016 scope.go:117] "RemoveContainer" containerID="1a8296e649b5d48d01b478b83852e053a98ee12b6c7e236aeb574f1bb70af498"
Oct 11 09:56:46 crc kubenswrapper[5016]: E1011 09:56:46.766344 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a8296e649b5d48d01b478b83852e053a98ee12b6c7e236aeb574f1bb70af498\": container with ID starting with 1a8296e649b5d48d01b478b83852e053a98ee12b6c7e236aeb574f1bb70af498 not found: ID does not exist" containerID="1a8296e649b5d48d01b478b83852e053a98ee12b6c7e236aeb574f1bb70af498"
Oct 11 09:56:46 crc kubenswrapper[5016]: I1011 09:56:46.766394 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a8296e649b5d48d01b478b83852e053a98ee12b6c7e236aeb574f1bb70af498"} err="failed to get container status \"1a8296e649b5d48d01b478b83852e053a98ee12b6c7e236aeb574f1bb70af498\": rpc error: code = NotFound desc = could not find container \"1a8296e649b5d48d01b478b83852e053a98ee12b6c7e236aeb574f1bb70af498\": container with ID starting with 1a8296e649b5d48d01b478b83852e053a98ee12b6c7e236aeb574f1bb70af498 not found: ID does not exist"
Oct 11 09:56:46 crc kubenswrapper[5016]: I1011 09:56:46.766430 5016 scope.go:117] "RemoveContainer" containerID="4c8740b7ee0faf7b76099cf46c8e39fbdc09cf82725e37f44fd86889a9ebae86"
Oct 11 09:56:46 crc kubenswrapper[5016]: E1011 09:56:46.767044 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c8740b7ee0faf7b76099cf46c8e39fbdc09cf82725e37f44fd86889a9ebae86\": container with ID starting with 4c8740b7ee0faf7b76099cf46c8e39fbdc09cf82725e37f44fd86889a9ebae86 not found: ID does not exist" containerID="4c8740b7ee0faf7b76099cf46c8e39fbdc09cf82725e37f44fd86889a9ebae86"
Oct 11 09:56:46 crc kubenswrapper[5016]: I1011 09:56:46.767170 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c8740b7ee0faf7b76099cf46c8e39fbdc09cf82725e37f44fd86889a9ebae86"} err="failed to get container status \"4c8740b7ee0faf7b76099cf46c8e39fbdc09cf82725e37f44fd86889a9ebae86\": rpc error: code = NotFound desc = could not find container \"4c8740b7ee0faf7b76099cf46c8e39fbdc09cf82725e37f44fd86889a9ebae86\": container with ID starting with 4c8740b7ee0faf7b76099cf46c8e39fbdc09cf82725e37f44fd86889a9ebae86 not found: ID does not exist"
Oct 11 09:56:46 crc kubenswrapper[5016]: I1011 09:56:46.767255 5016 scope.go:117] "RemoveContainer" containerID="14f272b31f15ed63c87e71b2c8d88e2c9ad53cd9b53d370aea36b54d93e8c7e9"
Oct 11 09:56:46 crc kubenswrapper[5016]: E1011 09:56:46.767666 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14f272b31f15ed63c87e71b2c8d88e2c9ad53cd9b53d370aea36b54d93e8c7e9\": container with ID starting with 14f272b31f15ed63c87e71b2c8d88e2c9ad53cd9b53d370aea36b54d93e8c7e9 not found: ID does not exist" containerID="14f272b31f15ed63c87e71b2c8d88e2c9ad53cd9b53d370aea36b54d93e8c7e9"
Oct 11 09:56:46 crc kubenswrapper[5016]: I1011 09:56:46.767698 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14f272b31f15ed63c87e71b2c8d88e2c9ad53cd9b53d370aea36b54d93e8c7e9"} err="failed to get container status \"14f272b31f15ed63c87e71b2c8d88e2c9ad53cd9b53d370aea36b54d93e8c7e9\": rpc error: code = NotFound desc = could not find container \"14f272b31f15ed63c87e71b2c8d88e2c9ad53cd9b53d370aea36b54d93e8c7e9\": container with ID starting with 14f272b31f15ed63c87e71b2c8d88e2c9ad53cd9b53d370aea36b54d93e8c7e9 not found: ID does not exist"
Oct 11 09:56:47 crc kubenswrapper[5016]: I1011 09:56:47.161333 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1727e517-c1ea-49d0-808a-af19cd82e896" path="/var/lib/kubelet/pods/1727e517-c1ea-49d0-808a-af19cd82e896/volumes"
Oct 11 09:56:47 crc kubenswrapper[5016]: I1011 09:56:47.715739 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dc7qm"
Oct 11 09:56:49 crc kubenswrapper[5016]: I1011 09:56:49.051901 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dc7qm"]
Oct 11 09:56:49 crc kubenswrapper[5016]: I1011 09:56:49.662953 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dc7qm" podUID="027b196f-eb62-4af5-8fe3-d16faf3e04f5" containerName="registry-server" containerID="cri-o://4174fdf2376111e56de871fd2e88b3ae9396aee007a1ce5be9cd862c274e0ca4" gracePeriod=2
Oct 11 09:56:50 crc kubenswrapper[5016]: I1011 09:56:50.202474 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dc7qm"
Oct 11 09:56:50 crc kubenswrapper[5016]: I1011 09:56:50.333745 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/027b196f-eb62-4af5-8fe3-d16faf3e04f5-utilities\") pod \"027b196f-eb62-4af5-8fe3-d16faf3e04f5\" (UID: \"027b196f-eb62-4af5-8fe3-d16faf3e04f5\") "
Oct 11 09:56:50 crc kubenswrapper[5016]: I1011 09:56:50.334131 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlln5\" (UniqueName: \"kubernetes.io/projected/027b196f-eb62-4af5-8fe3-d16faf3e04f5-kube-api-access-xlln5\") pod \"027b196f-eb62-4af5-8fe3-d16faf3e04f5\" (UID: \"027b196f-eb62-4af5-8fe3-d16faf3e04f5\") "
Oct 11 09:56:50 crc kubenswrapper[5016]: I1011 09:56:50.334255 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/027b196f-eb62-4af5-8fe3-d16faf3e04f5-catalog-content\") pod \"027b196f-eb62-4af5-8fe3-d16faf3e04f5\" (UID: \"027b196f-eb62-4af5-8fe3-d16faf3e04f5\") "
Oct 11 09:56:50 crc kubenswrapper[5016]: I1011 09:56:50.335470 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/027b196f-eb62-4af5-8fe3-d16faf3e04f5-utilities" (OuterVolumeSpecName: "utilities") pod "027b196f-eb62-4af5-8fe3-d16faf3e04f5" (UID: "027b196f-eb62-4af5-8fe3-d16faf3e04f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 09:56:50 crc kubenswrapper[5016]: I1011 09:56:50.342007 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/027b196f-eb62-4af5-8fe3-d16faf3e04f5-kube-api-access-xlln5" (OuterVolumeSpecName: "kube-api-access-xlln5") pod "027b196f-eb62-4af5-8fe3-d16faf3e04f5" (UID: "027b196f-eb62-4af5-8fe3-d16faf3e04f5"). InnerVolumeSpecName "kube-api-access-xlln5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 11 09:56:50 crc kubenswrapper[5016]: I1011 09:56:50.386402 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/027b196f-eb62-4af5-8fe3-d16faf3e04f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "027b196f-eb62-4af5-8fe3-d16faf3e04f5" (UID: "027b196f-eb62-4af5-8fe3-d16faf3e04f5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 11 09:56:50 crc kubenswrapper[5016]: I1011 09:56:50.437362 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlln5\" (UniqueName: \"kubernetes.io/projected/027b196f-eb62-4af5-8fe3-d16faf3e04f5-kube-api-access-xlln5\") on node \"crc\" DevicePath \"\""
Oct 11 09:56:50 crc kubenswrapper[5016]: I1011 09:56:50.437411 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/027b196f-eb62-4af5-8fe3-d16faf3e04f5-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 11 09:56:50 crc kubenswrapper[5016]: I1011 09:56:50.437424 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/027b196f-eb62-4af5-8fe3-d16faf3e04f5-utilities\") on node \"crc\" DevicePath \"\""
Oct 11 09:56:50 crc kubenswrapper[5016]: I1011 09:56:50.676786 5016 generic.go:334] "Generic (PLEG): container finished" podID="027b196f-eb62-4af5-8fe3-d16faf3e04f5" containerID="4174fdf2376111e56de871fd2e88b3ae9396aee007a1ce5be9cd862c274e0ca4" exitCode=0
Oct 11 09:56:50 crc kubenswrapper[5016]: I1011 09:56:50.676847 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dc7qm" event={"ID":"027b196f-eb62-4af5-8fe3-d16faf3e04f5","Type":"ContainerDied","Data":"4174fdf2376111e56de871fd2e88b3ae9396aee007a1ce5be9cd862c274e0ca4"}
Oct 11 09:56:50 crc kubenswrapper[5016]: I1011 09:56:50.676881 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dc7qm" event={"ID":"027b196f-eb62-4af5-8fe3-d16faf3e04f5","Type":"ContainerDied","Data":"22a3b6bcf346e87435788e389072cff1972f5e6be4b2c458883ff4ac79ef7204"}
Oct 11 09:56:50 crc kubenswrapper[5016]: I1011 09:56:50.676922 5016 scope.go:117] "RemoveContainer" containerID="4174fdf2376111e56de871fd2e88b3ae9396aee007a1ce5be9cd862c274e0ca4"
Oct 11 09:56:50 crc kubenswrapper[5016]: I1011 09:56:50.676936 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dc7qm"
Oct 11 09:56:50 crc kubenswrapper[5016]: I1011 09:56:50.706103 5016 scope.go:117] "RemoveContainer" containerID="0f2f3ae7a2e5dfd4d2cba2f560149b138ac9ba1d1cd7d9382b95fdeacdc8eeb6"
Oct 11 09:56:50 crc kubenswrapper[5016]: I1011 09:56:50.721363 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dc7qm"]
Oct 11 09:56:50 crc kubenswrapper[5016]: I1011 09:56:50.732636 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dc7qm"]
Oct 11 09:56:50 crc kubenswrapper[5016]: I1011 09:56:50.749748 5016 scope.go:117] "RemoveContainer" containerID="429097eb122523cca9b989fd869317b0d73e77c7c847ff1344a4c87c893ae218"
Oct 11 09:56:50 crc kubenswrapper[5016]: I1011 09:56:50.784132 5016 scope.go:117] "RemoveContainer" containerID="4174fdf2376111e56de871fd2e88b3ae9396aee007a1ce5be9cd862c274e0ca4"
Oct 11 09:56:50 crc kubenswrapper[5016]: E1011 09:56:50.784894 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4174fdf2376111e56de871fd2e88b3ae9396aee007a1ce5be9cd862c274e0ca4\": container with ID starting with 4174fdf2376111e56de871fd2e88b3ae9396aee007a1ce5be9cd862c274e0ca4 not found: ID does not exist" containerID="4174fdf2376111e56de871fd2e88b3ae9396aee007a1ce5be9cd862c274e0ca4"
Oct 11 09:56:50 crc kubenswrapper[5016]: I1011 09:56:50.784949 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4174fdf2376111e56de871fd2e88b3ae9396aee007a1ce5be9cd862c274e0ca4"} err="failed to get container status \"4174fdf2376111e56de871fd2e88b3ae9396aee007a1ce5be9cd862c274e0ca4\": rpc error: code = NotFound desc = could not find container \"4174fdf2376111e56de871fd2e88b3ae9396aee007a1ce5be9cd862c274e0ca4\": container with ID starting with 4174fdf2376111e56de871fd2e88b3ae9396aee007a1ce5be9cd862c274e0ca4 not found: ID does not exist"
Oct 11 09:56:50 crc kubenswrapper[5016]: I1011 09:56:50.784981 5016 scope.go:117] "RemoveContainer" containerID="0f2f3ae7a2e5dfd4d2cba2f560149b138ac9ba1d1cd7d9382b95fdeacdc8eeb6"
Oct 11 09:56:50 crc kubenswrapper[5016]: E1011 09:56:50.785553 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f2f3ae7a2e5dfd4d2cba2f560149b138ac9ba1d1cd7d9382b95fdeacdc8eeb6\": container with ID starting with 0f2f3ae7a2e5dfd4d2cba2f560149b138ac9ba1d1cd7d9382b95fdeacdc8eeb6 not found: ID does not exist" containerID="0f2f3ae7a2e5dfd4d2cba2f560149b138ac9ba1d1cd7d9382b95fdeacdc8eeb6"
Oct 11 09:56:50 crc kubenswrapper[5016]: I1011 09:56:50.785596 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f2f3ae7a2e5dfd4d2cba2f560149b138ac9ba1d1cd7d9382b95fdeacdc8eeb6"} err="failed to get container status \"0f2f3ae7a2e5dfd4d2cba2f560149b138ac9ba1d1cd7d9382b95fdeacdc8eeb6\": rpc error: code = NotFound desc = could not find container \"0f2f3ae7a2e5dfd4d2cba2f560149b138ac9ba1d1cd7d9382b95fdeacdc8eeb6\": container with ID starting with 0f2f3ae7a2e5dfd4d2cba2f560149b138ac9ba1d1cd7d9382b95fdeacdc8eeb6 not found: ID does not exist"
Oct 11 09:56:50 crc kubenswrapper[5016]: I1011 09:56:50.785626 5016 scope.go:117] "RemoveContainer" containerID="429097eb122523cca9b989fd869317b0d73e77c7c847ff1344a4c87c893ae218"
Oct 11 09:56:50 crc kubenswrapper[5016]: E1011 09:56:50.786070 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"429097eb122523cca9b989fd869317b0d73e77c7c847ff1344a4c87c893ae218\": container with ID starting with 429097eb122523cca9b989fd869317b0d73e77c7c847ff1344a4c87c893ae218 not found: ID does not exist" containerID="429097eb122523cca9b989fd869317b0d73e77c7c847ff1344a4c87c893ae218"
Oct 11 09:56:50 crc kubenswrapper[5016]: I1011 09:56:50.786096 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"429097eb122523cca9b989fd869317b0d73e77c7c847ff1344a4c87c893ae218"} err="failed to get container status \"429097eb122523cca9b989fd869317b0d73e77c7c847ff1344a4c87c893ae218\": rpc error: code = NotFound desc = could not find container \"429097eb122523cca9b989fd869317b0d73e77c7c847ff1344a4c87c893ae218\": container with ID starting with 429097eb122523cca9b989fd869317b0d73e77c7c847ff1344a4c87c893ae218 not found: ID does not exist"
Oct 11 09:56:51 crc kubenswrapper[5016]: I1011 09:56:51.144916 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="027b196f-eb62-4af5-8fe3-d16faf3e04f5" path="/var/lib/kubelet/pods/027b196f-eb62-4af5-8fe3-d16faf3e04f5/volumes"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.355733 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizontest-tests-horizontest"]
Oct 11 09:57:02 crc kubenswrapper[5016]: E1011 09:57:02.358736 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1727e517-c1ea-49d0-808a-af19cd82e896" containerName="registry-server"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.358778 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="1727e517-c1ea-49d0-808a-af19cd82e896" containerName="registry-server"
Oct 11 09:57:02 crc kubenswrapper[5016]: E1011 09:57:02.358795 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="027b196f-eb62-4af5-8fe3-d16faf3e04f5" containerName="registry-server"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.358804 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="027b196f-eb62-4af5-8fe3-d16faf3e04f5" containerName="registry-server"
Oct 11 09:57:02 crc kubenswrapper[5016]: E1011 09:57:02.358858 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="027b196f-eb62-4af5-8fe3-d16faf3e04f5" containerName="extract-content"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.359012 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="027b196f-eb62-4af5-8fe3-d16faf3e04f5" containerName="extract-content"
Oct 11 09:57:02 crc kubenswrapper[5016]: E1011 09:57:02.359040 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="027b196f-eb62-4af5-8fe3-d16faf3e04f5" containerName="extract-utilities"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.359048 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="027b196f-eb62-4af5-8fe3-d16faf3e04f5" containerName="extract-utilities"
Oct 11 09:57:02 crc kubenswrapper[5016]: E1011 09:57:02.359061 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1727e517-c1ea-49d0-808a-af19cd82e896" containerName="extract-utilities"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.359089 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="1727e517-c1ea-49d0-808a-af19cd82e896" containerName="extract-utilities"
Oct 11 09:57:02 crc kubenswrapper[5016]: E1011 09:57:02.359100 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1727e517-c1ea-49d0-808a-af19cd82e896" containerName="extract-content"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.359106 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="1727e517-c1ea-49d0-808a-af19cd82e896" containerName="extract-content"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.359440 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="1727e517-c1ea-49d0-808a-af19cd82e896" containerName="registry-server"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.359464 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="027b196f-eb62-4af5-8fe3-d16faf3e04f5" containerName="registry-server"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.361195 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizontest-tests-horizontest"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.366623 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizontest-tests-horizontesthorizontest-config"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.367218 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"test-operator-clouds-config"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.378683 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizontest-tests-horizontest"]
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.458885 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/fe6aed9c-4fce-4eca-9854-ff7f25b64722-test-operator-clouds-config\") pod \"horizontest-tests-horizontest\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " pod="openstack/horizontest-tests-horizontest"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.458998 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"horizontest-tests-horizontest\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " pod="openstack/horizontest-tests-horizontest"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.459116 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fe6aed9c-4fce-4eca-9854-ff7f25b64722-ceph\") pod \"horizontest-tests-horizontest\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " pod="openstack/horizontest-tests-horizontest"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.460278 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbvgv\" (UniqueName: \"kubernetes.io/projected/fe6aed9c-4fce-4eca-9854-ff7f25b64722-kube-api-access-lbvgv\") pod \"horizontest-tests-horizontest\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " pod="openstack/horizontest-tests-horizontest"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.460463 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fe6aed9c-4fce-4eca-9854-ff7f25b64722-test-operator-ephemeral-temporary\") pod \"horizontest-tests-horizontest\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " pod="openstack/horizontest-tests-horizontest"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.460518 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fe6aed9c-4fce-4eca-9854-ff7f25b64722-ca-certs\") pod \"horizontest-tests-horizontest\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " pod="openstack/horizontest-tests-horizontest"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.460594 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fe6aed9c-4fce-4eca-9854-ff7f25b64722-test-operator-ephemeral-workdir\") pod \"horizontest-tests-horizontest\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " pod="openstack/horizontest-tests-horizontest"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.460869 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fe6aed9c-4fce-4eca-9854-ff7f25b64722-openstack-config-secret\") pod \"horizontest-tests-horizontest\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " pod="openstack/horizontest-tests-horizontest"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.563515 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbvgv\" (UniqueName: \"kubernetes.io/projected/fe6aed9c-4fce-4eca-9854-ff7f25b64722-kube-api-access-lbvgv\") pod \"horizontest-tests-horizontest\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " pod="openstack/horizontest-tests-horizontest"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.563594 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fe6aed9c-4fce-4eca-9854-ff7f25b64722-test-operator-ephemeral-temporary\") pod \"horizontest-tests-horizontest\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " pod="openstack/horizontest-tests-horizontest"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.563625 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fe6aed9c-4fce-4eca-9854-ff7f25b64722-ca-certs\") pod \"horizontest-tests-horizontest\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " pod="openstack/horizontest-tests-horizontest"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.563686 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fe6aed9c-4fce-4eca-9854-ff7f25b64722-test-operator-ephemeral-workdir\") pod \"horizontest-tests-horizontest\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " pod="openstack/horizontest-tests-horizontest"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.563763 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fe6aed9c-4fce-4eca-9854-ff7f25b64722-openstack-config-secret\") pod \"horizontest-tests-horizontest\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " pod="openstack/horizontest-tests-horizontest"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.563853 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/fe6aed9c-4fce-4eca-9854-ff7f25b64722-test-operator-clouds-config\") pod \"horizontest-tests-horizontest\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " pod="openstack/horizontest-tests-horizontest"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.563903 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"horizontest-tests-horizontest\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " pod="openstack/horizontest-tests-horizontest"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.563938 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fe6aed9c-4fce-4eca-9854-ff7f25b64722-ceph\") pod \"horizontest-tests-horizontest\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " pod="openstack/horizontest-tests-horizontest"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.564489 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fe6aed9c-4fce-4eca-9854-ff7f25b64722-test-operator-ephemeral-temporary\") pod \"horizontest-tests-horizontest\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " pod="openstack/horizontest-tests-horizontest"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.564568 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fe6aed9c-4fce-4eca-9854-ff7f25b64722-test-operator-ephemeral-workdir\") pod \"horizontest-tests-horizontest\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " pod="openstack/horizontest-tests-horizontest"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.564787 5016 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"horizontest-tests-horizontest\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/horizontest-tests-horizontest"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.565244 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/fe6aed9c-4fce-4eca-9854-ff7f25b64722-test-operator-clouds-config\") pod \"horizontest-tests-horizontest\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " pod="openstack/horizontest-tests-horizontest"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.574000 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fe6aed9c-4fce-4eca-9854-ff7f25b64722-ca-certs\") pod \"horizontest-tests-horizontest\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " pod="openstack/horizontest-tests-horizontest"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.576572 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fe6aed9c-4fce-4eca-9854-ff7f25b64722-openstack-config-secret\") pod \"horizontest-tests-horizontest\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " pod="openstack/horizontest-tests-horizontest"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.577474 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fe6aed9c-4fce-4eca-9854-ff7f25b64722-ceph\") pod \"horizontest-tests-horizontest\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " pod="openstack/horizontest-tests-horizontest"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.594283 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbvgv\" (UniqueName: \"kubernetes.io/projected/fe6aed9c-4fce-4eca-9854-ff7f25b64722-kube-api-access-lbvgv\") pod \"horizontest-tests-horizontest\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " pod="openstack/horizontest-tests-horizontest"
Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.604922 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"horizontest-tests-horizontest\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " pod="openstack/horizontest-tests-horizontest"
pod="openstack/horizontest-tests-horizontest" Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.594283 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbvgv\" (UniqueName: \"kubernetes.io/projected/fe6aed9c-4fce-4eca-9854-ff7f25b64722-kube-api-access-lbvgv\") pod \"horizontest-tests-horizontest\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " pod="openstack/horizontest-tests-horizontest" Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.604922 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"horizontest-tests-horizontest\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " pod="openstack/horizontest-tests-horizontest" Oct 11 09:57:02 crc kubenswrapper[5016]: I1011 09:57:02.692594 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizontest-tests-horizontest" Oct 11 09:57:03 crc kubenswrapper[5016]: I1011 09:57:03.220023 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizontest-tests-horizontest"] Oct 11 09:57:03 crc kubenswrapper[5016]: I1011 09:57:03.860361 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizontest-tests-horizontest" event={"ID":"fe6aed9c-4fce-4eca-9854-ff7f25b64722","Type":"ContainerStarted","Data":"d877321b29633788239c50448c202e76794df16ea9d71b179ee195aa218bf50f"} Oct 11 09:57:07 crc kubenswrapper[5016]: I1011 09:57:07.125837 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 09:57:07 crc kubenswrapper[5016]: I1011 09:57:07.127019 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 09:57:19 crc kubenswrapper[5016]: E1011 09:57:19.051794 5016 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizontest:current-podified" Oct 11 09:57:19 crc kubenswrapper[5016]: E1011 09:57:19.054456 5016 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizontest-tests-horizontest,Image:quay.io/podified-antelope-centos9/openstack-horizontest:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADMIN_PASSWORD,Value:12345678,ValueFrom:nil,},EnvVar{Name:ADMIN_USERNAME,Value:admin,ValueFrom:nil,},EnvVar{Name:AUTH_URL,Value:https://keystone-public-openstack.apps-crc.testing,ValueFrom:nil,},EnvVar{Name:DASHBOARD_URL,Value:https://horizon-openstack.apps-crc.testing/,ValueFrom:nil,},EnvVar{Name:EXTRA_FLAG,Value:not pagination and 
test_users.py,ValueFrom:nil,},EnvVar{Name:FLAVOR_NAME,Value:m1.tiny,ValueFrom:nil,},EnvVar{Name:HORIZONTEST_DEBUG_MODE,Value:false,ValueFrom:nil,},EnvVar{Name:HORIZON_KEYS_FOLDER,Value:/etc/test_operator,ValueFrom:nil,},EnvVar{Name:HORIZON_LOGS_DIR_NAME,Value:horizon,ValueFrom:nil,},EnvVar{Name:HORIZON_REPO_BRANCH,Value:master,ValueFrom:nil,},EnvVar{Name:IMAGE_FILE,Value:/var/lib/horizontest/cirros-0.6.2-x86_64-disk.img,ValueFrom:nil,},EnvVar{Name:IMAGE_FILE_NAME,Value:cirros-0.6.2-x86_64-disk,ValueFrom:nil,},EnvVar{Name:IMAGE_URL,Value:http://download.cirros-cloud.net/0.6.2/cirros-0.6.2-x86_64-disk.img,ValueFrom:nil,},EnvVar{Name:PASSWORD,Value:horizontest,ValueFrom:nil,},EnvVar{Name:PROJECT_NAME,Value:horizontest,ValueFrom:nil,},EnvVar{Name:PROJECT_NAME_XPATH,Value://*[@class=\"context-project\"]//ancestor::ul,ValueFrom:nil,},EnvVar{Name:REPO_URL,Value:https://review.opendev.org/openstack/horizon,ValueFrom:nil,},EnvVar{Name:USER_NAME,Value:horizontest,ValueFrom:nil,},EnvVar{Name:USE_EXTERNAL_FILES,Value:True,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{2 0} {} 2 DecimalSI},memory: {{4294967296 0} {} 4Gi BinarySI},},Requests:ResourceList{cpu: {{1 0} {} 1 DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/horizontest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/horizontest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-clouds-config,ReadOnly:true,MountPath:/var/lib/horizontest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-clouds-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ca-bundle.trust.crt,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceph,ReadOnly:true,MountPath:/etc/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lbvgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN 
NET_RAW],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42455,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42455,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizontest-tests-horizontest_openstack(fe6aed9c-4fce-4eca-9854-ff7f25b64722): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 11 09:57:19 crc kubenswrapper[5016]: E1011 09:57:19.055862 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizontest-tests-horizontest\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/horizontest-tests-horizontest" podUID="fe6aed9c-4fce-4eca-9854-ff7f25b64722" Oct 11 09:57:20 crc kubenswrapper[5016]: E1011 09:57:20.065628 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizontest-tests-horizontest\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizontest:current-podified\\\"\"" pod="openstack/horizontest-tests-horizontest" podUID="fe6aed9c-4fce-4eca-9854-ff7f25b64722" Oct 11 09:57:36 crc kubenswrapper[5016]: I1011 09:57:36.375454 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizontest-tests-horizontest" event={"ID":"fe6aed9c-4fce-4eca-9854-ff7f25b64722","Type":"ContainerStarted","Data":"58ec3af6dc90260d2547d47dfc2f1f6d0562f9465070405e1c116935a0454a0c"} Oct 11 09:57:36 crc kubenswrapper[5016]: I1011 09:57:36.412178 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizontest-tests-horizontest" podStartSLOduration=3.935337806 podStartE2EDuration="35.412155932s" podCreationTimestamp="2025-10-11 09:57:01 +0000 UTC" firstStartedPulling="2025-10-11 09:57:03.22319668 +0000 UTC m=+8211.123652666" lastFinishedPulling="2025-10-11 09:57:34.700014806 +0000 UTC m=+8242.600470792" observedRunningTime="2025-10-11 09:57:36.408730691 +0000 UTC m=+8244.309186657" watchObservedRunningTime="2025-10-11 09:57:36.412155932 +0000 UTC m=+8244.312611888" Oct 11 09:57:37 crc kubenswrapper[5016]: I1011 09:57:37.122986 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 09:57:37 crc kubenswrapper[5016]: I1011 09:57:37.123080 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 09:57:37 crc kubenswrapper[5016]: I1011 09:57:37.123147 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 09:57:37 crc kubenswrapper[5016]: I1011 09:57:37.124287 5016 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9699157bc82b30e2fc03a2932d077fc463d754d44b6799fcd3045f3f2d912728"} pod="openshift-machine-config-operator/machine-config-daemon-49bvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 11 09:57:37 crc kubenswrapper[5016]: I1011 09:57:37.124378 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" containerID="cri-o://9699157bc82b30e2fc03a2932d077fc463d754d44b6799fcd3045f3f2d912728" gracePeriod=600 Oct 11 09:57:37 crc kubenswrapper[5016]: I1011 09:57:37.392028 5016 generic.go:334] "Generic (PLEG): container finished" podID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerID="9699157bc82b30e2fc03a2932d077fc463d754d44b6799fcd3045f3f2d912728" exitCode=0 Oct 11 09:57:37 crc kubenswrapper[5016]: I1011 09:57:37.392105 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerDied","Data":"9699157bc82b30e2fc03a2932d077fc463d754d44b6799fcd3045f3f2d912728"} Oct 11 09:57:37 crc kubenswrapper[5016]: I1011 09:57:37.392735 5016 scope.go:117] "RemoveContainer" containerID="8704f57e8f383778c1c8b8fb4cb9ff8d30d0941d5567393ac9351cd9e08d30ce" Oct 11 09:57:38 crc kubenswrapper[5016]: I1011 09:57:38.410580 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0"} Oct 11 09:59:37 crc kubenswrapper[5016]: I1011 09:59:37.037421 5016 generic.go:334] "Generic (PLEG): container finished" podID="fe6aed9c-4fce-4eca-9854-ff7f25b64722" containerID="58ec3af6dc90260d2547d47dfc2f1f6d0562f9465070405e1c116935a0454a0c" exitCode=0 Oct 11 09:59:37 crc kubenswrapper[5016]: I1011 09:59:37.037555 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizontest-tests-horizontest" event={"ID":"fe6aed9c-4fce-4eca-9854-ff7f25b64722","Type":"ContainerDied","Data":"58ec3af6dc90260d2547d47dfc2f1f6d0562f9465070405e1c116935a0454a0c"} Oct 11 09:59:37 crc kubenswrapper[5016]: I1011 09:59:37.123052 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 09:59:37 crc kubenswrapper[5016]: I1011 09:59:37.123724 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 09:59:38 crc kubenswrapper[5016]: I1011 09:59:38.478243 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizontest-tests-horizontest" Oct 11 09:59:38 crc kubenswrapper[5016]: I1011 09:59:38.663392 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fe6aed9c-4fce-4eca-9854-ff7f25b64722-test-operator-ephemeral-temporary\") pod \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " Oct 11 09:59:38 crc kubenswrapper[5016]: I1011 09:59:38.663531 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fe6aed9c-4fce-4eca-9854-ff7f25b64722-ca-certs\") pod \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " Oct 11 09:59:38 crc kubenswrapper[5016]: I1011 09:59:38.663565 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/fe6aed9c-4fce-4eca-9854-ff7f25b64722-test-operator-clouds-config\") pod \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " Oct 11 09:59:38 crc kubenswrapper[5016]: I1011 09:59:38.663617 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fe6aed9c-4fce-4eca-9854-ff7f25b64722-openstack-config-secret\") pod \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " Oct 11 09:59:38 crc kubenswrapper[5016]: I1011 09:59:38.663689 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbvgv\" (UniqueName: \"kubernetes.io/projected/fe6aed9c-4fce-4eca-9854-ff7f25b64722-kube-api-access-lbvgv\") pod \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " Oct 11 09:59:38 crc kubenswrapper[5016]: I1011 09:59:38.663751 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " Oct 11 09:59:38 crc kubenswrapper[5016]: I1011 09:59:38.663818 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fe6aed9c-4fce-4eca-9854-ff7f25b64722-ceph\") pod \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " Oct 11 09:59:38 crc kubenswrapper[5016]: I1011 09:59:38.663896 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fe6aed9c-4fce-4eca-9854-ff7f25b64722-test-operator-ephemeral-workdir\") pod \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\" (UID: \"fe6aed9c-4fce-4eca-9854-ff7f25b64722\") " Oct 11 09:59:38 crc kubenswrapper[5016]: I1011 09:59:38.666889 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe6aed9c-4fce-4eca-9854-ff7f25b64722-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "fe6aed9c-4fce-4eca-9854-ff7f25b64722" (UID: "fe6aed9c-4fce-4eca-9854-ff7f25b64722"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:59:38 crc kubenswrapper[5016]: I1011 09:59:38.672621 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe6aed9c-4fce-4eca-9854-ff7f25b64722-ceph" (OuterVolumeSpecName: "ceph") pod "fe6aed9c-4fce-4eca-9854-ff7f25b64722" (UID: "fe6aed9c-4fce-4eca-9854-ff7f25b64722"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 09:59:38 crc kubenswrapper[5016]: I1011 09:59:38.673569 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe6aed9c-4fce-4eca-9854-ff7f25b64722-kube-api-access-lbvgv" (OuterVolumeSpecName: "kube-api-access-lbvgv") pod "fe6aed9c-4fce-4eca-9854-ff7f25b64722" (UID: "fe6aed9c-4fce-4eca-9854-ff7f25b64722"). InnerVolumeSpecName "kube-api-access-lbvgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 09:59:38 crc kubenswrapper[5016]: I1011 09:59:38.687452 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "test-operator-logs") pod "fe6aed9c-4fce-4eca-9854-ff7f25b64722" (UID: "fe6aed9c-4fce-4eca-9854-ff7f25b64722"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 11 09:59:38 crc kubenswrapper[5016]: I1011 09:59:38.737439 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe6aed9c-4fce-4eca-9854-ff7f25b64722-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "fe6aed9c-4fce-4eca-9854-ff7f25b64722" (UID: "fe6aed9c-4fce-4eca-9854-ff7f25b64722"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 09:59:38 crc kubenswrapper[5016]: I1011 09:59:38.743960 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe6aed9c-4fce-4eca-9854-ff7f25b64722-test-operator-clouds-config" (OuterVolumeSpecName: "test-operator-clouds-config") pod "fe6aed9c-4fce-4eca-9854-ff7f25b64722" (UID: "fe6aed9c-4fce-4eca-9854-ff7f25b64722"). InnerVolumeSpecName "test-operator-clouds-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 09:59:38 crc kubenswrapper[5016]: I1011 09:59:38.766976 5016 reconciler_common.go:293] "Volume detached for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/fe6aed9c-4fce-4eca-9854-ff7f25b64722-test-operator-clouds-config\") on node \"crc\" DevicePath \"\"" Oct 11 09:59:38 crc kubenswrapper[5016]: I1011 09:59:38.767032 5016 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fe6aed9c-4fce-4eca-9854-ff7f25b64722-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 11 09:59:38 crc kubenswrapper[5016]: I1011 09:59:38.767053 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbvgv\" (UniqueName: \"kubernetes.io/projected/fe6aed9c-4fce-4eca-9854-ff7f25b64722-kube-api-access-lbvgv\") on node \"crc\" DevicePath \"\"" Oct 11 09:59:38 crc kubenswrapper[5016]: I1011 09:59:38.767093 5016 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Oct 11 09:59:38 crc kubenswrapper[5016]: I1011 09:59:38.767115 5016 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fe6aed9c-4fce-4eca-9854-ff7f25b64722-ceph\") on node \"crc\" DevicePath \"\"" Oct 11 09:59:38 crc kubenswrapper[5016]: I1011 09:59:38.767133 5016 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fe6aed9c-4fce-4eca-9854-ff7f25b64722-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Oct 11 09:59:38 crc kubenswrapper[5016]: I1011 09:59:38.772794 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe6aed9c-4fce-4eca-9854-ff7f25b64722-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "fe6aed9c-4fce-4eca-9854-ff7f25b64722" (UID: "fe6aed9c-4fce-4eca-9854-ff7f25b64722"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 09:59:38 crc kubenswrapper[5016]: I1011 09:59:38.804181 5016 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Oct 11 09:59:38 crc kubenswrapper[5016]: I1011 09:59:38.869677 5016 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Oct 11 09:59:38 crc kubenswrapper[5016]: I1011 09:59:38.869737 5016 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fe6aed9c-4fce-4eca-9854-ff7f25b64722-ca-certs\") on node \"crc\" DevicePath \"\"" Oct 11 09:59:38 crc kubenswrapper[5016]: I1011 09:59:38.909895 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe6aed9c-4fce-4eca-9854-ff7f25b64722-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "fe6aed9c-4fce-4eca-9854-ff7f25b64722" (UID: "fe6aed9c-4fce-4eca-9854-ff7f25b64722"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 09:59:38 crc kubenswrapper[5016]: I1011 09:59:38.973324 5016 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fe6aed9c-4fce-4eca-9854-ff7f25b64722-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Oct 11 09:59:39 crc kubenswrapper[5016]: I1011 09:59:39.063922 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizontest-tests-horizontest" event={"ID":"fe6aed9c-4fce-4eca-9854-ff7f25b64722","Type":"ContainerDied","Data":"d877321b29633788239c50448c202e76794df16ea9d71b179ee195aa218bf50f"} Oct 11 09:59:39 crc kubenswrapper[5016]: I1011 09:59:39.063996 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d877321b29633788239c50448c202e76794df16ea9d71b179ee195aa218bf50f" Oct 11 09:59:39 crc kubenswrapper[5016]: I1011 09:59:39.064058 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizontest-tests-horizontest" Oct 11 09:59:46 crc kubenswrapper[5016]: I1011 09:59:46.051059 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"] Oct 11 09:59:46 crc kubenswrapper[5016]: E1011 09:59:46.052784 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe6aed9c-4fce-4eca-9854-ff7f25b64722" containerName="horizontest-tests-horizontest" Oct 11 09:59:46 crc kubenswrapper[5016]: I1011 09:59:46.052810 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe6aed9c-4fce-4eca-9854-ff7f25b64722" containerName="horizontest-tests-horizontest" Oct 11 09:59:46 crc kubenswrapper[5016]: I1011 09:59:46.053206 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe6aed9c-4fce-4eca-9854-ff7f25b64722" containerName="horizontest-tests-horizontest" Oct 11 09:59:46 crc kubenswrapper[5016]: I1011 09:59:46.054401 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Oct 11 09:59:46 crc kubenswrapper[5016]: I1011 09:59:46.079457 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"] Oct 11 09:59:46 crc kubenswrapper[5016]: I1011 09:59:46.252540 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"6d621294-76bf-47b2-a87c-07243414066e\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Oct 11 09:59:46 crc kubenswrapper[5016]: I1011 09:59:46.253148 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hzcx\" (UniqueName: \"kubernetes.io/projected/6d621294-76bf-47b2-a87c-07243414066e-kube-api-access-5hzcx\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"6d621294-76bf-47b2-a87c-07243414066e\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Oct 11 09:59:46 crc kubenswrapper[5016]: I1011 09:59:46.356325 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hzcx\" (UniqueName: \"kubernetes.io/projected/6d621294-76bf-47b2-a87c-07243414066e-kube-api-access-5hzcx\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"6d621294-76bf-47b2-a87c-07243414066e\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Oct 11 09:59:46 crc kubenswrapper[5016]: I1011 09:59:46.356685 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"6d621294-76bf-47b2-a87c-07243414066e\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Oct 11 09:59:46 crc kubenswrapper[5016]: I1011 09:59:46.357379 5016 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"6d621294-76bf-47b2-a87c-07243414066e\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Oct 11 09:59:46 crc kubenswrapper[5016]: I1011 09:59:46.398414 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hzcx\" (UniqueName: \"kubernetes.io/projected/6d621294-76bf-47b2-a87c-07243414066e-kube-api-access-5hzcx\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"6d621294-76bf-47b2-a87c-07243414066e\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Oct 11 09:59:46 crc kubenswrapper[5016]: I1011 09:59:46.403974 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"6d621294-76bf-47b2-a87c-07243414066e\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Oct 11 
09:59:46 crc kubenswrapper[5016]: I1011 09:59:46.706206 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Oct 11 09:59:46 crc kubenswrapper[5016]: E1011 09:59:46.707039 5016 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Oct 11 09:59:47 crc kubenswrapper[5016]: E1011 09:59:47.110045 5016 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Oct 11 09:59:47 crc kubenswrapper[5016]: I1011 09:59:47.110311 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"] Oct 11 09:59:47 crc kubenswrapper[5016]: I1011 09:59:47.161153 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" event={"ID":"6d621294-76bf-47b2-a87c-07243414066e","Type":"ContainerStarted","Data":"dab441969fff6d45700fb8bf943bb994a24f4834ec7a8581c44a4e77795102f4"} Oct 11 09:59:47 crc kubenswrapper[5016]: E1011 09:59:47.759839 5016 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Oct 11 09:59:48 crc kubenswrapper[5016]: I1011 09:59:48.175334 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" event={"ID":"6d621294-76bf-47b2-a87c-07243414066e","Type":"ContainerStarted","Data":"962da924fc9aabcb15203deff02aba99a706c41c4542193489db931f68c07075"} Oct 11 09:59:48 crc kubenswrapper[5016]: E1011 09:59:48.176326 5016 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Oct 11 09:59:48 crc kubenswrapper[5016]: I1011 09:59:48.202689 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" podStartSLOduration=1.555422481 podStartE2EDuration="2.202635146s" podCreationTimestamp="2025-10-11 09:59:46 +0000 UTC" firstStartedPulling="2025-10-11 09:59:47.112517267 +0000 UTC m=+8375.012973213" lastFinishedPulling="2025-10-11 09:59:47.759729932 +0000 UTC m=+8375.660185878" observedRunningTime="2025-10-11 09:59:48.190678889 +0000 UTC m=+8376.091134865" watchObservedRunningTime="2025-10-11 09:59:48.202635146 +0000 UTC m=+8376.103091122" Oct 11 09:59:49 crc kubenswrapper[5016]: E1011 09:59:49.191276 5016 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Oct 11 10:00:00 crc kubenswrapper[5016]: I1011 10:00:00.167931 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336280-fkqtg"] Oct 11 10:00:00 crc 
kubenswrapper[5016]: I1011 10:00:00.172752 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336280-fkqtg" Oct 11 10:00:00 crc kubenswrapper[5016]: I1011 10:00:00.176889 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 11 10:00:00 crc kubenswrapper[5016]: I1011 10:00:00.177254 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 11 10:00:00 crc kubenswrapper[5016]: I1011 10:00:00.179251 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336280-fkqtg"] Oct 11 10:00:00 crc kubenswrapper[5016]: I1011 10:00:00.294002 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c31e5052-36c7-4623-a8fb-dbb61005c70f-secret-volume\") pod \"collect-profiles-29336280-fkqtg\" (UID: \"c31e5052-36c7-4623-a8fb-dbb61005c70f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336280-fkqtg" Oct 11 10:00:00 crc kubenswrapper[5016]: I1011 10:00:00.294644 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c31e5052-36c7-4623-a8fb-dbb61005c70f-config-volume\") pod \"collect-profiles-29336280-fkqtg\" (UID: \"c31e5052-36c7-4623-a8fb-dbb61005c70f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336280-fkqtg" Oct 11 10:00:00 crc kubenswrapper[5016]: I1011 10:00:00.294723 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nbfm\" (UniqueName: \"kubernetes.io/projected/c31e5052-36c7-4623-a8fb-dbb61005c70f-kube-api-access-7nbfm\") pod \"collect-profiles-29336280-fkqtg\" (UID: \"c31e5052-36c7-4623-a8fb-dbb61005c70f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336280-fkqtg" Oct 11 10:00:00 crc kubenswrapper[5016]: I1011 10:00:00.397416 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nbfm\" (UniqueName: \"kubernetes.io/projected/c31e5052-36c7-4623-a8fb-dbb61005c70f-kube-api-access-7nbfm\") pod \"collect-profiles-29336280-fkqtg\" (UID: \"c31e5052-36c7-4623-a8fb-dbb61005c70f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336280-fkqtg" Oct 11 10:00:00 crc kubenswrapper[5016]: I1011 10:00:00.397601 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c31e5052-36c7-4623-a8fb-dbb61005c70f-secret-volume\") pod \"collect-profiles-29336280-fkqtg\" (UID: \"c31e5052-36c7-4623-a8fb-dbb61005c70f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336280-fkqtg" Oct 11 10:00:00 crc kubenswrapper[5016]: I1011 10:00:00.397795 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c31e5052-36c7-4623-a8fb-dbb61005c70f-config-volume\") pod \"collect-profiles-29336280-fkqtg\" (UID: \"c31e5052-36c7-4623-a8fb-dbb61005c70f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336280-fkqtg" Oct 11 10:00:00 crc kubenswrapper[5016]: I1011 10:00:00.399331 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/c31e5052-36c7-4623-a8fb-dbb61005c70f-config-volume\") pod \"collect-profiles-29336280-fkqtg\" (UID: \"c31e5052-36c7-4623-a8fb-dbb61005c70f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336280-fkqtg" Oct 11 10:00:00 crc kubenswrapper[5016]: I1011 10:00:00.414045 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c31e5052-36c7-4623-a8fb-dbb61005c70f-secret-volume\") pod \"collect-profiles-29336280-fkqtg\" (UID: \"c31e5052-36c7-4623-a8fb-dbb61005c70f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336280-fkqtg" Oct 11 10:00:00 crc kubenswrapper[5016]: I1011 10:00:00.423320 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nbfm\" (UniqueName: \"kubernetes.io/projected/c31e5052-36c7-4623-a8fb-dbb61005c70f-kube-api-access-7nbfm\") pod \"collect-profiles-29336280-fkqtg\" (UID: \"c31e5052-36c7-4623-a8fb-dbb61005c70f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29336280-fkqtg" Oct 11 10:00:00 crc kubenswrapper[5016]: I1011 10:00:00.499707 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336280-fkqtg" Oct 11 10:00:01 crc kubenswrapper[5016]: I1011 10:00:01.041576 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336280-fkqtg"] Oct 11 10:00:01 crc kubenswrapper[5016]: W1011 10:00:01.076707 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc31e5052_36c7_4623_a8fb_dbb61005c70f.slice/crio-9075483c97f4b1e84ff551d09ebeec6f9095d14f941a0c678f843c3b3c323277 WatchSource:0}: Error finding container 9075483c97f4b1e84ff551d09ebeec6f9095d14f941a0c678f843c3b3c323277: Status 404 returned error can't find the container with id 9075483c97f4b1e84ff551d09ebeec6f9095d14f941a0c678f843c3b3c323277 Oct 11 10:00:01 crc kubenswrapper[5016]: I1011 10:00:01.362982 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336280-fkqtg" event={"ID":"c31e5052-36c7-4623-a8fb-dbb61005c70f","Type":"ContainerStarted","Data":"343abc0d81348ed38674520a5b6dd6e7c559b5adb123612b98a4beb140ffd003"} Oct 11 10:00:01 crc kubenswrapper[5016]: I1011 10:00:01.365617 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336280-fkqtg" event={"ID":"c31e5052-36c7-4623-a8fb-dbb61005c70f","Type":"ContainerStarted","Data":"9075483c97f4b1e84ff551d09ebeec6f9095d14f941a0c678f843c3b3c323277"} Oct 11 10:00:01 crc kubenswrapper[5016]: I1011 10:00:01.394708 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29336280-fkqtg" podStartSLOduration=1.39468272 podStartE2EDuration="1.39468272s" podCreationTimestamp="2025-10-11 10:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:00:01.381744477 +0000 UTC m=+8389.282200443" watchObservedRunningTime="2025-10-11 10:00:01.39468272 +0000 UTC m=+8389.295138666" Oct 11 10:00:02 crc kubenswrapper[5016]: I1011 10:00:02.381632 5016 generic.go:334] "Generic (PLEG): container finished" podID="c31e5052-36c7-4623-a8fb-dbb61005c70f" 
containerID="343abc0d81348ed38674520a5b6dd6e7c559b5adb123612b98a4beb140ffd003" exitCode=0 Oct 11 10:00:02 crc kubenswrapper[5016]: I1011 10:00:02.381769 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336280-fkqtg" event={"ID":"c31e5052-36c7-4623-a8fb-dbb61005c70f","Type":"ContainerDied","Data":"343abc0d81348ed38674520a5b6dd6e7c559b5adb123612b98a4beb140ffd003"} Oct 11 10:00:03 crc kubenswrapper[5016]: I1011 10:00:03.905582 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336280-fkqtg" Oct 11 10:00:03 crc kubenswrapper[5016]: I1011 10:00:03.990329 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c31e5052-36c7-4623-a8fb-dbb61005c70f-secret-volume\") pod \"c31e5052-36c7-4623-a8fb-dbb61005c70f\" (UID: \"c31e5052-36c7-4623-a8fb-dbb61005c70f\") " Oct 11 10:00:03 crc kubenswrapper[5016]: I1011 10:00:03.990452 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c31e5052-36c7-4623-a8fb-dbb61005c70f-config-volume\") pod \"c31e5052-36c7-4623-a8fb-dbb61005c70f\" (UID: \"c31e5052-36c7-4623-a8fb-dbb61005c70f\") " Oct 11 10:00:03 crc kubenswrapper[5016]: I1011 10:00:03.990506 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nbfm\" (UniqueName: \"kubernetes.io/projected/c31e5052-36c7-4623-a8fb-dbb61005c70f-kube-api-access-7nbfm\") pod \"c31e5052-36c7-4623-a8fb-dbb61005c70f\" (UID: \"c31e5052-36c7-4623-a8fb-dbb61005c70f\") " Oct 11 10:00:03 crc kubenswrapper[5016]: I1011 10:00:03.992376 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c31e5052-36c7-4623-a8fb-dbb61005c70f-config-volume" (OuterVolumeSpecName: "config-volume") pod "c31e5052-36c7-4623-a8fb-dbb61005c70f" (UID: "c31e5052-36c7-4623-a8fb-dbb61005c70f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 11 10:00:03 crc kubenswrapper[5016]: I1011 10:00:03.998506 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c31e5052-36c7-4623-a8fb-dbb61005c70f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c31e5052-36c7-4623-a8fb-dbb61005c70f" (UID: "c31e5052-36c7-4623-a8fb-dbb61005c70f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:00:04 crc kubenswrapper[5016]: I1011 10:00:04.000343 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c31e5052-36c7-4623-a8fb-dbb61005c70f-kube-api-access-7nbfm" (OuterVolumeSpecName: "kube-api-access-7nbfm") pod "c31e5052-36c7-4623-a8fb-dbb61005c70f" (UID: "c31e5052-36c7-4623-a8fb-dbb61005c70f"). InnerVolumeSpecName "kube-api-access-7nbfm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:00:04 crc kubenswrapper[5016]: I1011 10:00:04.093816 5016 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c31e5052-36c7-4623-a8fb-dbb61005c70f-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 11 10:00:04 crc kubenswrapper[5016]: I1011 10:00:04.094331 5016 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c31e5052-36c7-4623-a8fb-dbb61005c70f-config-volume\") on node \"crc\" DevicePath \"\"" Oct 11 10:00:04 crc kubenswrapper[5016]: I1011 10:00:04.094346 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nbfm\" (UniqueName: \"kubernetes.io/projected/c31e5052-36c7-4623-a8fb-dbb61005c70f-kube-api-access-7nbfm\") on node \"crc\" DevicePath \"\"" Oct 11 10:00:04 crc kubenswrapper[5016]: I1011 10:00:04.429149 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29336280-fkqtg" event={"ID":"c31e5052-36c7-4623-a8fb-dbb61005c70f","Type":"ContainerDied","Data":"9075483c97f4b1e84ff551d09ebeec6f9095d14f941a0c678f843c3b3c323277"} Oct 11 10:00:04 crc kubenswrapper[5016]: I1011 10:00:04.429215 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9075483c97f4b1e84ff551d09ebeec6f9095d14f941a0c678f843c3b3c323277" Oct 11 10:00:04 crc kubenswrapper[5016]: I1011 10:00:04.429326 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29336280-fkqtg" Oct 11 10:00:04 crc kubenswrapper[5016]: I1011 10:00:04.500456 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336235-jdwf8"] Oct 11 10:00:04 crc kubenswrapper[5016]: I1011 10:00:04.513070 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29336235-jdwf8"] Oct 11 10:00:05 crc kubenswrapper[5016]: I1011 10:00:05.150897 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf704980-48d5-4212-8732-4c83679347d4" path="/var/lib/kubelet/pods/bf704980-48d5-4212-8732-4c83679347d4/volumes" Oct 11 10:00:07 crc kubenswrapper[5016]: I1011 10:00:07.122945 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 10:00:07 crc kubenswrapper[5016]: I1011 10:00:07.123862 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 10:00:27 crc kubenswrapper[5016]: I1011 10:00:27.080025 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cpk4d/must-gather-r7724"] Oct 11 10:00:27 crc kubenswrapper[5016]: E1011 10:00:27.081119 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c31e5052-36c7-4623-a8fb-dbb61005c70f" containerName="collect-profiles" Oct 11 10:00:27 crc kubenswrapper[5016]: I1011 10:00:27.081136 5016 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c31e5052-36c7-4623-a8fb-dbb61005c70f" containerName="collect-profiles" Oct 11 10:00:27 crc kubenswrapper[5016]: I1011 10:00:27.081400 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="c31e5052-36c7-4623-a8fb-dbb61005c70f" containerName="collect-profiles" Oct 11 10:00:27 crc kubenswrapper[5016]: I1011 10:00:27.082690 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cpk4d/must-gather-r7724" Oct 11 10:00:27 crc kubenswrapper[5016]: I1011 10:00:27.085140 5016 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-cpk4d"/"default-dockercfg-qw64w" Oct 11 10:00:27 crc kubenswrapper[5016]: I1011 10:00:27.085594 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cpk4d"/"openshift-service-ca.crt" Oct 11 10:00:27 crc kubenswrapper[5016]: I1011 10:00:27.085817 5016 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cpk4d"/"kube-root-ca.crt" Oct 11 10:00:27 crc kubenswrapper[5016]: I1011 10:00:27.092931 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cpk4d/must-gather-r7724"] Oct 11 10:00:27 crc kubenswrapper[5016]: I1011 10:00:27.220450 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b93db13a-8e27-49d3-897e-33e59fe40941-must-gather-output\") pod \"must-gather-r7724\" (UID: \"b93db13a-8e27-49d3-897e-33e59fe40941\") " pod="openshift-must-gather-cpk4d/must-gather-r7724" Oct 11 10:00:27 crc kubenswrapper[5016]: I1011 10:00:27.220888 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf5w7\" (UniqueName: \"kubernetes.io/projected/b93db13a-8e27-49d3-897e-33e59fe40941-kube-api-access-tf5w7\") pod \"must-gather-r7724\" (UID: \"b93db13a-8e27-49d3-897e-33e59fe40941\") " pod="openshift-must-gather-cpk4d/must-gather-r7724" Oct 11 10:00:27 crc kubenswrapper[5016]: I1011 10:00:27.323541 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b93db13a-8e27-49d3-897e-33e59fe40941-must-gather-output\") pod \"must-gather-r7724\" (UID: \"b93db13a-8e27-49d3-897e-33e59fe40941\") " pod="openshift-must-gather-cpk4d/must-gather-r7724" Oct 11 10:00:27 crc kubenswrapper[5016]: I1011 10:00:27.324246 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf5w7\" (UniqueName: \"kubernetes.io/projected/b93db13a-8e27-49d3-897e-33e59fe40941-kube-api-access-tf5w7\") pod \"must-gather-r7724\" (UID: \"b93db13a-8e27-49d3-897e-33e59fe40941\") " pod="openshift-must-gather-cpk4d/must-gather-r7724" Oct 11 10:00:27 crc kubenswrapper[5016]: I1011 10:00:27.324334 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b93db13a-8e27-49d3-897e-33e59fe40941-must-gather-output\") pod \"must-gather-r7724\" (UID: \"b93db13a-8e27-49d3-897e-33e59fe40941\") " pod="openshift-must-gather-cpk4d/must-gather-r7724" Oct 11 10:00:27 crc kubenswrapper[5016]: I1011 10:00:27.352184 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf5w7\" (UniqueName: \"kubernetes.io/projected/b93db13a-8e27-49d3-897e-33e59fe40941-kube-api-access-tf5w7\") pod \"must-gather-r7724\" (UID: \"b93db13a-8e27-49d3-897e-33e59fe40941\") " 
pod="openshift-must-gather-cpk4d/must-gather-r7724" Oct 11 10:00:27 crc kubenswrapper[5016]: I1011 10:00:27.405164 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cpk4d/must-gather-r7724" Oct 11 10:00:28 crc kubenswrapper[5016]: I1011 10:00:28.179976 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cpk4d/must-gather-r7724"] Oct 11 10:00:28 crc kubenswrapper[5016]: I1011 10:00:28.837861 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cpk4d/must-gather-r7724" event={"ID":"b93db13a-8e27-49d3-897e-33e59fe40941","Type":"ContainerStarted","Data":"95d169a0eb0a76c37fd676c3169e2b4308ce556ae7e383ddce313b0d8640e37a"} Oct 11 10:00:34 crc kubenswrapper[5016]: I1011 10:00:34.897345 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cpk4d/must-gather-r7724" event={"ID":"b93db13a-8e27-49d3-897e-33e59fe40941","Type":"ContainerStarted","Data":"79ca5e29e806fa1e500c7456e4ddfc77bdc26124b4cb9b36bdc5a5c5b0084ab7"} Oct 11 10:00:35 crc kubenswrapper[5016]: I1011 10:00:35.910927 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cpk4d/must-gather-r7724" event={"ID":"b93db13a-8e27-49d3-897e-33e59fe40941","Type":"ContainerStarted","Data":"9f1b013594e5ee3ab1c42f5ca4636470d57d64ea93a868501f362c7e43524c1f"} Oct 11 10:00:35 crc kubenswrapper[5016]: I1011 10:00:35.936829 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cpk4d/must-gather-r7724" podStartSLOduration=2.9981530960000002 podStartE2EDuration="8.936798793s" podCreationTimestamp="2025-10-11 10:00:27 +0000 UTC" firstStartedPulling="2025-10-11 10:00:28.192620651 +0000 UTC m=+8416.093076597" lastFinishedPulling="2025-10-11 10:00:34.131266348 +0000 UTC m=+8422.031722294" observedRunningTime="2025-10-11 10:00:35.933024173 +0000 UTC m=+8423.833480129" watchObservedRunningTime="2025-10-11 10:00:35.936798793 +0000 UTC m=+8423.837254779" Oct 11 10:00:37 crc kubenswrapper[5016]: I1011 10:00:37.122343 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 10:00:37 crc kubenswrapper[5016]: I1011 10:00:37.122770 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 10:00:37 crc kubenswrapper[5016]: I1011 10:00:37.122823 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 10:00:37 crc kubenswrapper[5016]: I1011 10:00:37.123450 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0"} pod="openshift-machine-config-operator/machine-config-daemon-49bvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 11 10:00:37 crc kubenswrapper[5016]: I1011 10:00:37.123503 5016 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" containerID="cri-o://22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0" gracePeriod=600 Oct 11 10:00:37 crc kubenswrapper[5016]: E1011 10:00:37.288304 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 10:00:37 crc kubenswrapper[5016]: E1011 10:00:37.327509 5016 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0633ed26_7b6a_4a20_92ba_569891d9faff.slice/crio-conmon-22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0633ed26_7b6a_4a20_92ba_569891d9faff.slice/crio-22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0.scope\": RecentStats: unable to find data in memory cache]" Oct 11 10:00:37 crc kubenswrapper[5016]: I1011 10:00:37.941917 5016 generic.go:334] "Generic (PLEG): container finished" podID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerID="22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0" exitCode=0 Oct 11 10:00:37 crc kubenswrapper[5016]: I1011 10:00:37.942021 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerDied","Data":"22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0"} Oct 11 10:00:37 crc kubenswrapper[5016]: I1011 10:00:37.942629 5016 scope.go:117] "RemoveContainer" containerID="9699157bc82b30e2fc03a2932d077fc463d754d44b6799fcd3045f3f2d912728" Oct 11 10:00:37 crc kubenswrapper[5016]: I1011 10:00:37.943859 5016 scope.go:117] "RemoveContainer" containerID="22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0" Oct 11 10:00:37 crc kubenswrapper[5016]: E1011 10:00:37.944391 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 10:00:43 crc kubenswrapper[5016]: E1011 10:00:43.704235 5016 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.200:60534->38.102.83.200:34749: write tcp 38.102.83.200:60534->38.102.83.200:34749: write: broken pipe Oct 11 10:00:45 crc kubenswrapper[5016]: I1011 10:00:45.110513 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cpk4d/crc-debug-6lg5g"] Oct 11 10:00:45 crc kubenswrapper[5016]: I1011 10:00:45.113128 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cpk4d/crc-debug-6lg5g" Oct 11 10:00:45 crc kubenswrapper[5016]: I1011 10:00:45.175470 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk98l\" (UniqueName: \"kubernetes.io/projected/ff588ea4-4165-4484-b436-db3115b3c20d-kube-api-access-rk98l\") pod \"crc-debug-6lg5g\" (UID: \"ff588ea4-4165-4484-b436-db3115b3c20d\") " pod="openshift-must-gather-cpk4d/crc-debug-6lg5g" Oct 11 10:00:45 crc kubenswrapper[5016]: I1011 10:00:45.175971 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ff588ea4-4165-4484-b436-db3115b3c20d-host\") pod \"crc-debug-6lg5g\" (UID: \"ff588ea4-4165-4484-b436-db3115b3c20d\") " pod="openshift-must-gather-cpk4d/crc-debug-6lg5g" Oct 11 10:00:45 crc kubenswrapper[5016]: I1011 10:00:45.278620 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ff588ea4-4165-4484-b436-db3115b3c20d-host\") pod \"crc-debug-6lg5g\" (UID: \"ff588ea4-4165-4484-b436-db3115b3c20d\") " pod="openshift-must-gather-cpk4d/crc-debug-6lg5g" Oct 11 10:00:45 crc kubenswrapper[5016]: I1011 10:00:45.278963 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk98l\" (UniqueName: \"kubernetes.io/projected/ff588ea4-4165-4484-b436-db3115b3c20d-kube-api-access-rk98l\") pod \"crc-debug-6lg5g\" (UID: \"ff588ea4-4165-4484-b436-db3115b3c20d\") " pod="openshift-must-gather-cpk4d/crc-debug-6lg5g" Oct 11 10:00:45 crc kubenswrapper[5016]: I1011 10:00:45.279161 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ff588ea4-4165-4484-b436-db3115b3c20d-host\") pod \"crc-debug-6lg5g\" (UID: \"ff588ea4-4165-4484-b436-db3115b3c20d\") " pod="openshift-must-gather-cpk4d/crc-debug-6lg5g" Oct 11 10:00:45 crc kubenswrapper[5016]: I1011 10:00:45.301726 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk98l\" (UniqueName: \"kubernetes.io/projected/ff588ea4-4165-4484-b436-db3115b3c20d-kube-api-access-rk98l\") pod \"crc-debug-6lg5g\" (UID: \"ff588ea4-4165-4484-b436-db3115b3c20d\") " pod="openshift-must-gather-cpk4d/crc-debug-6lg5g" Oct 11 10:00:45 crc kubenswrapper[5016]: I1011 10:00:45.335383 5016 scope.go:117] "RemoveContainer" containerID="d403e123d85fdf45cd685d46ef6d03e9eaac90268e3f92e278231d2525359692" Oct 11 10:00:45 crc kubenswrapper[5016]: I1011 10:00:45.447351 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cpk4d/crc-debug-6lg5g" Oct 11 10:00:45 crc kubenswrapper[5016]: W1011 10:00:45.480484 5016 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff588ea4_4165_4484_b436_db3115b3c20d.slice/crio-12a711f3e663c675f08014c2d25ee42481eabc04e83bef44851fb8c1555b1433 WatchSource:0}: Error finding container 12a711f3e663c675f08014c2d25ee42481eabc04e83bef44851fb8c1555b1433: Status 404 returned error can't find the container with id 12a711f3e663c675f08014c2d25ee42481eabc04e83bef44851fb8c1555b1433 Oct 11 10:00:46 crc kubenswrapper[5016]: I1011 10:00:46.065099 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cpk4d/crc-debug-6lg5g" event={"ID":"ff588ea4-4165-4484-b436-db3115b3c20d","Type":"ContainerStarted","Data":"12a711f3e663c675f08014c2d25ee42481eabc04e83bef44851fb8c1555b1433"} Oct 11 10:00:50 crc kubenswrapper[5016]: I1011 10:00:50.134307 5016 scope.go:117] "RemoveContainer" containerID="22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0" Oct 11 10:00:50 crc kubenswrapper[5016]: E1011 10:00:50.135285 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 10:00:58 crc kubenswrapper[5016]: E1011 10:00:58.134355 5016 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Oct 11 10:01:00 crc kubenswrapper[5016]: I1011 10:01:00.179817 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29336281-hclhq"] Oct 11 10:01:00 crc kubenswrapper[5016]: I1011 10:01:00.182122 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29336281-hclhq" Oct 11 10:01:00 crc kubenswrapper[5016]: I1011 10:01:00.195004 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29336281-hclhq"] Oct 11 10:01:00 crc kubenswrapper[5016]: I1011 10:01:00.212618 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bwsn\" (UniqueName: \"kubernetes.io/projected/a43a3340-cfb0-4494-996e-f0e4fa1b8f10-kube-api-access-9bwsn\") pod \"keystone-cron-29336281-hclhq\" (UID: \"a43a3340-cfb0-4494-996e-f0e4fa1b8f10\") " pod="openstack/keystone-cron-29336281-hclhq" Oct 11 10:01:00 crc kubenswrapper[5016]: I1011 10:01:00.212803 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a43a3340-cfb0-4494-996e-f0e4fa1b8f10-fernet-keys\") pod \"keystone-cron-29336281-hclhq\" (UID: \"a43a3340-cfb0-4494-996e-f0e4fa1b8f10\") " pod="openstack/keystone-cron-29336281-hclhq" Oct 11 10:01:00 crc kubenswrapper[5016]: I1011 10:01:00.212902 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a43a3340-cfb0-4494-996e-f0e4fa1b8f10-combined-ca-bundle\") pod \"keystone-cron-29336281-hclhq\" (UID: \"a43a3340-cfb0-4494-996e-f0e4fa1b8f10\") " pod="openstack/keystone-cron-29336281-hclhq" Oct 11 10:01:00 crc kubenswrapper[5016]: I1011 10:01:00.213073 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43a3340-cfb0-4494-996e-f0e4fa1b8f10-config-data\") pod \"keystone-cron-29336281-hclhq\" (UID: \"a43a3340-cfb0-4494-996e-f0e4fa1b8f10\") " pod="openstack/keystone-cron-29336281-hclhq" Oct 11 10:01:00 crc kubenswrapper[5016]: I1011 10:01:00.245405 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cpk4d/crc-debug-6lg5g" event={"ID":"ff588ea4-4165-4484-b436-db3115b3c20d","Type":"ContainerStarted","Data":"8f7097e891dd9ec6cff3d48a705210d35c5b32363827d2f2c6790382d6f2ffc9"} Oct 11 10:01:00 crc kubenswrapper[5016]: I1011 10:01:00.268369 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cpk4d/crc-debug-6lg5g" podStartSLOduration=1.169805469 podStartE2EDuration="15.26834799s" podCreationTimestamp="2025-10-11 10:00:45 +0000 UTC" firstStartedPulling="2025-10-11 10:00:45.483407534 +0000 UTC m=+8433.383863480" lastFinishedPulling="2025-10-11 10:00:59.581950055 +0000 UTC m=+8447.482406001" observedRunningTime="2025-10-11 10:01:00.259374952 +0000 UTC m=+8448.159830898" watchObservedRunningTime="2025-10-11 10:01:00.26834799 +0000 UTC m=+8448.168803936" Oct 11 10:01:00 crc kubenswrapper[5016]: I1011 10:01:00.315420 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43a3340-cfb0-4494-996e-f0e4fa1b8f10-config-data\") pod \"keystone-cron-29336281-hclhq\" (UID: \"a43a3340-cfb0-4494-996e-f0e4fa1b8f10\") " pod="openstack/keystone-cron-29336281-hclhq" Oct 11 10:01:00 crc kubenswrapper[5016]: I1011 10:01:00.315491 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bwsn\" (UniqueName: \"kubernetes.io/projected/a43a3340-cfb0-4494-996e-f0e4fa1b8f10-kube-api-access-9bwsn\") pod \"keystone-cron-29336281-hclhq\" (UID: \"a43a3340-cfb0-4494-996e-f0e4fa1b8f10\") " 
pod="openstack/keystone-cron-29336281-hclhq" Oct 11 10:01:00 crc kubenswrapper[5016]: I1011 10:01:00.315555 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a43a3340-cfb0-4494-996e-f0e4fa1b8f10-fernet-keys\") pod \"keystone-cron-29336281-hclhq\" (UID: \"a43a3340-cfb0-4494-996e-f0e4fa1b8f10\") " pod="openstack/keystone-cron-29336281-hclhq" Oct 11 10:01:00 crc kubenswrapper[5016]: I1011 10:01:00.315610 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a43a3340-cfb0-4494-996e-f0e4fa1b8f10-combined-ca-bundle\") pod \"keystone-cron-29336281-hclhq\" (UID: \"a43a3340-cfb0-4494-996e-f0e4fa1b8f10\") " pod="openstack/keystone-cron-29336281-hclhq" Oct 11 10:01:00 crc kubenswrapper[5016]: I1011 10:01:00.323096 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43a3340-cfb0-4494-996e-f0e4fa1b8f10-config-data\") pod \"keystone-cron-29336281-hclhq\" (UID: \"a43a3340-cfb0-4494-996e-f0e4fa1b8f10\") " pod="openstack/keystone-cron-29336281-hclhq" Oct 11 10:01:00 crc kubenswrapper[5016]: I1011 10:01:00.323751 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a43a3340-cfb0-4494-996e-f0e4fa1b8f10-combined-ca-bundle\") pod \"keystone-cron-29336281-hclhq\" (UID: \"a43a3340-cfb0-4494-996e-f0e4fa1b8f10\") " pod="openstack/keystone-cron-29336281-hclhq" Oct 11 10:01:00 crc kubenswrapper[5016]: I1011 10:01:00.325377 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a43a3340-cfb0-4494-996e-f0e4fa1b8f10-fernet-keys\") pod \"keystone-cron-29336281-hclhq\" (UID: \"a43a3340-cfb0-4494-996e-f0e4fa1b8f10\") " pod="openstack/keystone-cron-29336281-hclhq" Oct 11 10:01:00 crc kubenswrapper[5016]: I1011 10:01:00.339267 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bwsn\" (UniqueName: \"kubernetes.io/projected/a43a3340-cfb0-4494-996e-f0e4fa1b8f10-kube-api-access-9bwsn\") pod \"keystone-cron-29336281-hclhq\" (UID: \"a43a3340-cfb0-4494-996e-f0e4fa1b8f10\") " pod="openstack/keystone-cron-29336281-hclhq" Oct 11 10:01:00 crc kubenswrapper[5016]: I1011 10:01:00.524019 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29336281-hclhq" Oct 11 10:01:01 crc kubenswrapper[5016]: I1011 10:01:01.163588 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29336281-hclhq"] Oct 11 10:01:01 crc kubenswrapper[5016]: I1011 10:01:01.261054 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29336281-hclhq" event={"ID":"a43a3340-cfb0-4494-996e-f0e4fa1b8f10","Type":"ContainerStarted","Data":"61f1446d9468c7d511537a1d3f8f9b94b1d6d10d9ede0e66c204e0c1f353ae07"} Oct 11 10:01:02 crc kubenswrapper[5016]: I1011 10:01:02.271848 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29336281-hclhq" event={"ID":"a43a3340-cfb0-4494-996e-f0e4fa1b8f10","Type":"ContainerStarted","Data":"5b48fbc1130f620bf82d40c655f3a437303ed41b9b62180d29de34ab11f7ef1d"} Oct 11 10:01:02 crc kubenswrapper[5016]: I1011 10:01:02.302807 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29336281-hclhq" podStartSLOduration=2.302777019 podStartE2EDuration="2.302777019s" podCreationTimestamp="2025-10-11 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-11 10:01:02.294989163 +0000 UTC m=+8450.195445099" watchObservedRunningTime="2025-10-11 10:01:02.302777019 +0000 UTC m=+8450.203232965" Oct 11 10:01:03 crc kubenswrapper[5016]: I1011 10:01:03.142896 5016 scope.go:117] "RemoveContainer" containerID="22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0" Oct 11 10:01:03 crc kubenswrapper[5016]: E1011 10:01:03.145197 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 10:01:08 crc kubenswrapper[5016]: I1011 10:01:08.329946 5016 generic.go:334] "Generic (PLEG): container finished" podID="a43a3340-cfb0-4494-996e-f0e4fa1b8f10" containerID="5b48fbc1130f620bf82d40c655f3a437303ed41b9b62180d29de34ab11f7ef1d" exitCode=0 Oct 11 10:01:08 crc kubenswrapper[5016]: I1011 10:01:08.330038 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29336281-hclhq" event={"ID":"a43a3340-cfb0-4494-996e-f0e4fa1b8f10","Type":"ContainerDied","Data":"5b48fbc1130f620bf82d40c655f3a437303ed41b9b62180d29de34ab11f7ef1d"} Oct 11 10:01:09 crc kubenswrapper[5016]: I1011 10:01:09.742362 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29336281-hclhq" Oct 11 10:01:09 crc kubenswrapper[5016]: I1011 10:01:09.845745 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a43a3340-cfb0-4494-996e-f0e4fa1b8f10-combined-ca-bundle\") pod \"a43a3340-cfb0-4494-996e-f0e4fa1b8f10\" (UID: \"a43a3340-cfb0-4494-996e-f0e4fa1b8f10\") " Oct 11 10:01:09 crc kubenswrapper[5016]: I1011 10:01:09.846112 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43a3340-cfb0-4494-996e-f0e4fa1b8f10-config-data\") pod \"a43a3340-cfb0-4494-996e-f0e4fa1b8f10\" (UID: \"a43a3340-cfb0-4494-996e-f0e4fa1b8f10\") " Oct 11 10:01:09 crc kubenswrapper[5016]: I1011 10:01:09.846158 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bwsn\" (UniqueName: \"kubernetes.io/projected/a43a3340-cfb0-4494-996e-f0e4fa1b8f10-kube-api-access-9bwsn\") pod \"a43a3340-cfb0-4494-996e-f0e4fa1b8f10\" (UID: \"a43a3340-cfb0-4494-996e-f0e4fa1b8f10\") " Oct 11 10:01:09 crc kubenswrapper[5016]: I1011 10:01:09.846365 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a43a3340-cfb0-4494-996e-f0e4fa1b8f10-fernet-keys\") pod \"a43a3340-cfb0-4494-996e-f0e4fa1b8f10\" (UID: \"a43a3340-cfb0-4494-996e-f0e4fa1b8f10\") " Oct 11 10:01:09 crc kubenswrapper[5016]: I1011 10:01:09.855516 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a43a3340-cfb0-4494-996e-f0e4fa1b8f10-kube-api-access-9bwsn" (OuterVolumeSpecName: "kube-api-access-9bwsn") pod "a43a3340-cfb0-4494-996e-f0e4fa1b8f10" (UID: "a43a3340-cfb0-4494-996e-f0e4fa1b8f10"). InnerVolumeSpecName "kube-api-access-9bwsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:01:09 crc kubenswrapper[5016]: I1011 10:01:09.868879 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a43a3340-cfb0-4494-996e-f0e4fa1b8f10-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a43a3340-cfb0-4494-996e-f0e4fa1b8f10" (UID: "a43a3340-cfb0-4494-996e-f0e4fa1b8f10"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:01:09 crc kubenswrapper[5016]: I1011 10:01:09.906777 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a43a3340-cfb0-4494-996e-f0e4fa1b8f10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a43a3340-cfb0-4494-996e-f0e4fa1b8f10" (UID: "a43a3340-cfb0-4494-996e-f0e4fa1b8f10"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:01:09 crc kubenswrapper[5016]: I1011 10:01:09.921992 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a43a3340-cfb0-4494-996e-f0e4fa1b8f10-config-data" (OuterVolumeSpecName: "config-data") pod "a43a3340-cfb0-4494-996e-f0e4fa1b8f10" (UID: "a43a3340-cfb0-4494-996e-f0e4fa1b8f10"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 11 10:01:09 crc kubenswrapper[5016]: I1011 10:01:09.950164 5016 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43a3340-cfb0-4494-996e-f0e4fa1b8f10-config-data\") on node \"crc\" DevicePath \"\"" Oct 11 10:01:09 crc kubenswrapper[5016]: I1011 10:01:09.951246 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bwsn\" (UniqueName: \"kubernetes.io/projected/a43a3340-cfb0-4494-996e-f0e4fa1b8f10-kube-api-access-9bwsn\") on node \"crc\" DevicePath \"\"" Oct 11 10:01:09 crc kubenswrapper[5016]: I1011 10:01:09.951369 5016 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a43a3340-cfb0-4494-996e-f0e4fa1b8f10-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 11 10:01:09 crc kubenswrapper[5016]: I1011 10:01:09.951506 5016 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a43a3340-cfb0-4494-996e-f0e4fa1b8f10-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 11 10:01:10 crc kubenswrapper[5016]: I1011 10:01:10.377501 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29336281-hclhq" event={"ID":"a43a3340-cfb0-4494-996e-f0e4fa1b8f10","Type":"ContainerDied","Data":"61f1446d9468c7d511537a1d3f8f9b94b1d6d10d9ede0e66c204e0c1f353ae07"} Oct 11 10:01:10 crc kubenswrapper[5016]: I1011 10:01:10.377676 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61f1446d9468c7d511537a1d3f8f9b94b1d6d10d9ede0e66c204e0c1f353ae07" Oct 11 10:01:10 crc kubenswrapper[5016]: I1011 10:01:10.377747 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29336281-hclhq" Oct 11 10:01:16 crc kubenswrapper[5016]: I1011 10:01:16.133408 5016 scope.go:117] "RemoveContainer" containerID="22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0" Oct 11 10:01:16 crc kubenswrapper[5016]: E1011 10:01:16.134433 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 10:01:27 crc kubenswrapper[5016]: I1011 10:01:27.134011 5016 scope.go:117] "RemoveContainer" containerID="22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0" Oct 11 10:01:27 crc kubenswrapper[5016]: E1011 10:01:27.136428 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 10:01:38 crc kubenswrapper[5016]: I1011 10:01:38.134038 5016 scope.go:117] "RemoveContainer" containerID="22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0" Oct 11 10:01:38 crc kubenswrapper[5016]: E1011 10:01:38.135061 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 10:01:49 crc kubenswrapper[5016]: I1011 10:01:49.133629 5016 scope.go:117] "RemoveContainer" containerID="22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0" Oct 11 10:01:49 crc kubenswrapper[5016]: E1011 10:01:49.134880 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 10:01:51 crc kubenswrapper[5016]: I1011 10:01:51.806432 5016 generic.go:334] "Generic (PLEG): container finished" podID="ff588ea4-4165-4484-b436-db3115b3c20d" containerID="8f7097e891dd9ec6cff3d48a705210d35c5b32363827d2f2c6790382d6f2ffc9" exitCode=0 Oct 11 10:01:51 crc kubenswrapper[5016]: I1011 10:01:51.806480 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cpk4d/crc-debug-6lg5g" event={"ID":"ff588ea4-4165-4484-b436-db3115b3c20d","Type":"ContainerDied","Data":"8f7097e891dd9ec6cff3d48a705210d35c5b32363827d2f2c6790382d6f2ffc9"} Oct 11 10:01:52 crc kubenswrapper[5016]: I1011 10:01:52.929900 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cpk4d/crc-debug-6lg5g" Oct 11 10:01:52 crc kubenswrapper[5016]: I1011 10:01:52.988335 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cpk4d/crc-debug-6lg5g"] Oct 11 10:01:52 crc kubenswrapper[5016]: I1011 10:01:52.998534 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cpk4d/crc-debug-6lg5g"] Oct 11 10:01:53 crc kubenswrapper[5016]: I1011 10:01:53.014802 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ff588ea4-4165-4484-b436-db3115b3c20d-host\") pod \"ff588ea4-4165-4484-b436-db3115b3c20d\" (UID: \"ff588ea4-4165-4484-b436-db3115b3c20d\") " Oct 11 10:01:53 crc kubenswrapper[5016]: I1011 10:01:53.014877 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rk98l\" (UniqueName: \"kubernetes.io/projected/ff588ea4-4165-4484-b436-db3115b3c20d-kube-api-access-rk98l\") pod \"ff588ea4-4165-4484-b436-db3115b3c20d\" (UID: \"ff588ea4-4165-4484-b436-db3115b3c20d\") " Oct 11 10:01:53 crc kubenswrapper[5016]: I1011 10:01:53.015002 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff588ea4-4165-4484-b436-db3115b3c20d-host" (OuterVolumeSpecName: "host") pod "ff588ea4-4165-4484-b436-db3115b3c20d" (UID: "ff588ea4-4165-4484-b436-db3115b3c20d"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:01:53 crc kubenswrapper[5016]: I1011 10:01:53.015606 5016 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ff588ea4-4165-4484-b436-db3115b3c20d-host\") on node \"crc\" DevicePath \"\"" Oct 11 10:01:53 crc kubenswrapper[5016]: I1011 10:01:53.027059 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff588ea4-4165-4484-b436-db3115b3c20d-kube-api-access-rk98l" (OuterVolumeSpecName: "kube-api-access-rk98l") pod "ff588ea4-4165-4484-b436-db3115b3c20d" (UID: "ff588ea4-4165-4484-b436-db3115b3c20d"). InnerVolumeSpecName "kube-api-access-rk98l". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:01:53 crc kubenswrapper[5016]: I1011 10:01:53.118256 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rk98l\" (UniqueName: \"kubernetes.io/projected/ff588ea4-4165-4484-b436-db3115b3c20d-kube-api-access-rk98l\") on node \"crc\" DevicePath \"\"" Oct 11 10:01:53 crc kubenswrapper[5016]: I1011 10:01:53.149626 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff588ea4-4165-4484-b436-db3115b3c20d" path="/var/lib/kubelet/pods/ff588ea4-4165-4484-b436-db3115b3c20d/volumes" Oct 11 10:01:53 crc kubenswrapper[5016]: I1011 10:01:53.839893 5016 scope.go:117] "RemoveContainer" containerID="8f7097e891dd9ec6cff3d48a705210d35c5b32363827d2f2c6790382d6f2ffc9" Oct 11 10:01:53 crc kubenswrapper[5016]: I1011 10:01:53.840120 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cpk4d/crc-debug-6lg5g" Oct 11 10:01:54 crc kubenswrapper[5016]: I1011 10:01:54.249761 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cpk4d/crc-debug-4lb7s"] Oct 11 10:01:54 crc kubenswrapper[5016]: E1011 10:01:54.255942 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff588ea4-4165-4484-b436-db3115b3c20d" containerName="container-00" Oct 11 10:01:54 crc kubenswrapper[5016]: I1011 10:01:54.255988 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff588ea4-4165-4484-b436-db3115b3c20d" containerName="container-00" Oct 11 10:01:54 crc kubenswrapper[5016]: E1011 10:01:54.256022 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a43a3340-cfb0-4494-996e-f0e4fa1b8f10" containerName="keystone-cron" Oct 11 10:01:54 crc kubenswrapper[5016]: I1011 10:01:54.256040 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="a43a3340-cfb0-4494-996e-f0e4fa1b8f10" containerName="keystone-cron" Oct 11 10:01:54 crc kubenswrapper[5016]: I1011 10:01:54.256454 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="a43a3340-cfb0-4494-996e-f0e4fa1b8f10" containerName="keystone-cron" Oct 11 10:01:54 crc kubenswrapper[5016]: I1011 10:01:54.256511 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff588ea4-4165-4484-b436-db3115b3c20d" containerName="container-00" Oct 11 10:01:54 crc kubenswrapper[5016]: I1011 10:01:54.257701 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cpk4d/crc-debug-4lb7s" Oct 11 10:01:54 crc kubenswrapper[5016]: I1011 10:01:54.343751 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc5k4\" (UniqueName: \"kubernetes.io/projected/9f2d0348-293b-4479-99b5-a9cdf45797f7-kube-api-access-kc5k4\") pod \"crc-debug-4lb7s\" (UID: \"9f2d0348-293b-4479-99b5-a9cdf45797f7\") " pod="openshift-must-gather-cpk4d/crc-debug-4lb7s" Oct 11 10:01:54 crc kubenswrapper[5016]: I1011 10:01:54.343952 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9f2d0348-293b-4479-99b5-a9cdf45797f7-host\") pod \"crc-debug-4lb7s\" (UID: \"9f2d0348-293b-4479-99b5-a9cdf45797f7\") " pod="openshift-must-gather-cpk4d/crc-debug-4lb7s" Oct 11 10:01:54 crc kubenswrapper[5016]: I1011 10:01:54.447636 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc5k4\" (UniqueName: \"kubernetes.io/projected/9f2d0348-293b-4479-99b5-a9cdf45797f7-kube-api-access-kc5k4\") pod \"crc-debug-4lb7s\" (UID: \"9f2d0348-293b-4479-99b5-a9cdf45797f7\") " pod="openshift-must-gather-cpk4d/crc-debug-4lb7s" Oct 11 10:01:54 crc kubenswrapper[5016]: I1011 10:01:54.447803 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9f2d0348-293b-4479-99b5-a9cdf45797f7-host\") pod \"crc-debug-4lb7s\" (UID: \"9f2d0348-293b-4479-99b5-a9cdf45797f7\") " pod="openshift-must-gather-cpk4d/crc-debug-4lb7s" Oct 11 10:01:54 crc kubenswrapper[5016]: I1011 10:01:54.448023 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9f2d0348-293b-4479-99b5-a9cdf45797f7-host\") pod \"crc-debug-4lb7s\" (UID: \"9f2d0348-293b-4479-99b5-a9cdf45797f7\") " pod="openshift-must-gather-cpk4d/crc-debug-4lb7s" Oct 11 10:01:54 crc kubenswrapper[5016]: I1011 10:01:54.480691 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc5k4\" (UniqueName: \"kubernetes.io/projected/9f2d0348-293b-4479-99b5-a9cdf45797f7-kube-api-access-kc5k4\") pod \"crc-debug-4lb7s\" (UID: \"9f2d0348-293b-4479-99b5-a9cdf45797f7\") " pod="openshift-must-gather-cpk4d/crc-debug-4lb7s" Oct 11 10:01:54 crc kubenswrapper[5016]: I1011 10:01:54.587601 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cpk4d/crc-debug-4lb7s" Oct 11 10:01:54 crc kubenswrapper[5016]: I1011 10:01:54.857546 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cpk4d/crc-debug-4lb7s" event={"ID":"9f2d0348-293b-4479-99b5-a9cdf45797f7","Type":"ContainerStarted","Data":"3d183fe249034afe52bb5228ef7c058a531ea57c5b03f443d05d174f6a593412"} Oct 11 10:01:55 crc kubenswrapper[5016]: I1011 10:01:55.874416 5016 generic.go:334] "Generic (PLEG): container finished" podID="9f2d0348-293b-4479-99b5-a9cdf45797f7" containerID="8af89672b025201b9aaacf07456c6b0a22f71cf2b4912f1cc5e0c87b9ed90402" exitCode=0 Oct 11 10:01:55 crc kubenswrapper[5016]: I1011 10:01:55.874493 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cpk4d/crc-debug-4lb7s" event={"ID":"9f2d0348-293b-4479-99b5-a9cdf45797f7","Type":"ContainerDied","Data":"8af89672b025201b9aaacf07456c6b0a22f71cf2b4912f1cc5e0c87b9ed90402"} Oct 11 10:01:56 crc kubenswrapper[5016]: I1011 10:01:56.993466 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cpk4d/crc-debug-4lb7s" Oct 11 10:01:57 crc kubenswrapper[5016]: I1011 10:01:57.102850 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc5k4\" (UniqueName: \"kubernetes.io/projected/9f2d0348-293b-4479-99b5-a9cdf45797f7-kube-api-access-kc5k4\") pod \"9f2d0348-293b-4479-99b5-a9cdf45797f7\" (UID: \"9f2d0348-293b-4479-99b5-a9cdf45797f7\") " Oct 11 10:01:57 crc kubenswrapper[5016]: I1011 10:01:57.103365 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9f2d0348-293b-4479-99b5-a9cdf45797f7-host\") pod \"9f2d0348-293b-4479-99b5-a9cdf45797f7\" (UID: \"9f2d0348-293b-4479-99b5-a9cdf45797f7\") " Oct 11 10:01:57 crc kubenswrapper[5016]: I1011 10:01:57.103443 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f2d0348-293b-4479-99b5-a9cdf45797f7-host" (OuterVolumeSpecName: "host") pod "9f2d0348-293b-4479-99b5-a9cdf45797f7" (UID: "9f2d0348-293b-4479-99b5-a9cdf45797f7"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:01:57 crc kubenswrapper[5016]: I1011 10:01:57.104457 5016 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9f2d0348-293b-4479-99b5-a9cdf45797f7-host\") on node \"crc\" DevicePath \"\"" Oct 11 10:01:57 crc kubenswrapper[5016]: I1011 10:01:57.110947 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f2d0348-293b-4479-99b5-a9cdf45797f7-kube-api-access-kc5k4" (OuterVolumeSpecName: "kube-api-access-kc5k4") pod "9f2d0348-293b-4479-99b5-a9cdf45797f7" (UID: "9f2d0348-293b-4479-99b5-a9cdf45797f7"). InnerVolumeSpecName "kube-api-access-kc5k4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:01:57 crc kubenswrapper[5016]: I1011 10:01:57.206725 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc5k4\" (UniqueName: \"kubernetes.io/projected/9f2d0348-293b-4479-99b5-a9cdf45797f7-kube-api-access-kc5k4\") on node \"crc\" DevicePath \"\"" Oct 11 10:01:57 crc kubenswrapper[5016]: I1011 10:01:57.896269 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cpk4d/crc-debug-4lb7s" Oct 11 10:01:57 crc kubenswrapper[5016]: I1011 10:01:57.896907 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cpk4d/crc-debug-4lb7s" event={"ID":"9f2d0348-293b-4479-99b5-a9cdf45797f7","Type":"ContainerDied","Data":"3d183fe249034afe52bb5228ef7c058a531ea57c5b03f443d05d174f6a593412"} Oct 11 10:01:57 crc kubenswrapper[5016]: I1011 10:01:57.896985 5016 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d183fe249034afe52bb5228ef7c058a531ea57c5b03f443d05d174f6a593412" Oct 11 10:01:58 crc kubenswrapper[5016]: I1011 10:01:58.339643 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cpk4d/crc-debug-4lb7s"] Oct 11 10:01:58 crc kubenswrapper[5016]: I1011 10:01:58.349160 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cpk4d/crc-debug-4lb7s"] Oct 11 10:01:59 crc kubenswrapper[5016]: I1011 10:01:59.028047 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ansibletest-ansibletest_fc8a7f44-d84f-46e2-beb6-b94378f84bf2/ansibletest-ansibletest/0.log" Oct 11 10:01:59 crc kubenswrapper[5016]: I1011 10:01:59.154235 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f2d0348-293b-4479-99b5-a9cdf45797f7" path="/var/lib/kubelet/pods/9f2d0348-293b-4479-99b5-a9cdf45797f7/volumes" Oct 11 10:01:59 crc kubenswrapper[5016]: I1011 10:01:59.231826 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-d89d796fd-cgg68_06a479bd-8198-4dec-a682-6864aaaca48b/barbican-api/0.log" Oct 11 10:01:59 crc kubenswrapper[5016]: I1011 10:01:59.441163 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-d89d796fd-cgg68_06a479bd-8198-4dec-a682-6864aaaca48b/barbican-api-log/0.log" Oct 11 10:01:59 crc kubenswrapper[5016]: I1011 10:01:59.643149 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cpk4d/crc-debug-qzlkw"] Oct 11 10:01:59 crc kubenswrapper[5016]: E1011 10:01:59.644055 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f2d0348-293b-4479-99b5-a9cdf45797f7" containerName="container-00" Oct 11 10:01:59 crc kubenswrapper[5016]: I1011 10:01:59.644133 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f2d0348-293b-4479-99b5-a9cdf45797f7" containerName="container-00" Oct 11 10:01:59 crc kubenswrapper[5016]: I1011 10:01:59.644427 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f2d0348-293b-4479-99b5-a9cdf45797f7" containerName="container-00" Oct 11 10:01:59 crc kubenswrapper[5016]: I1011 10:01:59.645321 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cpk4d/crc-debug-qzlkw" Oct 11 10:01:59 crc kubenswrapper[5016]: I1011 10:01:59.662556 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7d69bb669b-tzvqn_91b7e91b-be39-4920-9227-a93b91338f97/barbican-keystone-listener/0.log" Oct 11 10:01:59 crc kubenswrapper[5016]: I1011 10:01:59.670553 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9c0452a-e967-4160-ac57-cee4764c66a0-host\") pod \"crc-debug-qzlkw\" (UID: \"f9c0452a-e967-4160-ac57-cee4764c66a0\") " pod="openshift-must-gather-cpk4d/crc-debug-qzlkw" Oct 11 10:01:59 crc kubenswrapper[5016]: I1011 10:01:59.670984 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcbqf\" (UniqueName: \"kubernetes.io/projected/f9c0452a-e967-4160-ac57-cee4764c66a0-kube-api-access-tcbqf\") pod \"crc-debug-qzlkw\" (UID: \"f9c0452a-e967-4160-ac57-cee4764c66a0\") " pod="openshift-must-gather-cpk4d/crc-debug-qzlkw" Oct 11 10:01:59 crc kubenswrapper[5016]: I1011 10:01:59.773405 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9c0452a-e967-4160-ac57-cee4764c66a0-host\") pod \"crc-debug-qzlkw\" (UID: \"f9c0452a-e967-4160-ac57-cee4764c66a0\") " pod="openshift-must-gather-cpk4d/crc-debug-qzlkw" Oct 11 10:01:59 crc kubenswrapper[5016]: I1011 10:01:59.773551 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcbqf\" (UniqueName: \"kubernetes.io/projected/f9c0452a-e967-4160-ac57-cee4764c66a0-kube-api-access-tcbqf\") pod \"crc-debug-qzlkw\" (UID: \"f9c0452a-e967-4160-ac57-cee4764c66a0\") " pod="openshift-must-gather-cpk4d/crc-debug-qzlkw" Oct 11 10:01:59 crc kubenswrapper[5016]: I1011 10:01:59.773581 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9c0452a-e967-4160-ac57-cee4764c66a0-host\") pod \"crc-debug-qzlkw\" (UID: \"f9c0452a-e967-4160-ac57-cee4764c66a0\") " pod="openshift-must-gather-cpk4d/crc-debug-qzlkw" Oct 11 10:01:59 crc kubenswrapper[5016]: I1011 10:01:59.798498 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcbqf\" (UniqueName: \"kubernetes.io/projected/f9c0452a-e967-4160-ac57-cee4764c66a0-kube-api-access-tcbqf\") pod \"crc-debug-qzlkw\" (UID: \"f9c0452a-e967-4160-ac57-cee4764c66a0\") " pod="openshift-must-gather-cpk4d/crc-debug-qzlkw" Oct 11 10:01:59 crc kubenswrapper[5016]: I1011 10:01:59.964629 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cpk4d/crc-debug-qzlkw" Oct 11 10:02:00 crc kubenswrapper[5016]: I1011 10:02:00.133294 5016 scope.go:117] "RemoveContainer" containerID="22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0" Oct 11 10:02:00 crc kubenswrapper[5016]: E1011 10:02:00.133589 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 10:02:00 crc kubenswrapper[5016]: I1011 10:02:00.227985 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-766567796c-wxh7x_f6adb0ef-cb20-4a74-b79b-feb46936d4cd/barbican-worker/0.log" Oct 11 10:02:00 crc kubenswrapper[5016]: I1011 10:02:00.350372 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7d69bb669b-tzvqn_91b7e91b-be39-4920-9227-a93b91338f97/barbican-keystone-listener-log/0.log" Oct 11 10:02:00 crc kubenswrapper[5016]: I1011 10:02:00.447959 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-766567796c-wxh7x_f6adb0ef-cb20-4a74-b79b-feb46936d4cd/barbican-worker-log/0.log" Oct 11 10:02:00 crc kubenswrapper[5016]: I1011 10:02:00.733997 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-m86w7_729b92c8-604e-4a61-b146-f0f4dc9d00d5/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Oct 11 10:02:00 crc kubenswrapper[5016]: I1011 10:02:00.944048 5016 generic.go:334] "Generic (PLEG): container finished" podID="f9c0452a-e967-4160-ac57-cee4764c66a0" containerID="4fcdf9e53fb3e9aa6d5a5fae6b4b06cc5919c99f02884a2e74875403c7c04fdd" exitCode=0 Oct 11 10:02:00 crc kubenswrapper[5016]: I1011 10:02:00.944099 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cpk4d/crc-debug-qzlkw" event={"ID":"f9c0452a-e967-4160-ac57-cee4764c66a0","Type":"ContainerDied","Data":"4fcdf9e53fb3e9aa6d5a5fae6b4b06cc5919c99f02884a2e74875403c7c04fdd"} Oct 11 10:02:00 crc kubenswrapper[5016]: I1011 10:02:00.944131 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cpk4d/crc-debug-qzlkw" event={"ID":"f9c0452a-e967-4160-ac57-cee4764c66a0","Type":"ContainerStarted","Data":"7ade8868e6a61a2e5772d05096d1e4c2c3bcc4bb126e544f74bdd8bb91aedd7f"} Oct 11 10:02:00 crc kubenswrapper[5016]: I1011 10:02:00.982426 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ae7b0f07-6360-46c1-8bc1-f89c5ac7a486/ceilometer-central-agent/1.log" Oct 11 10:02:01 crc kubenswrapper[5016]: I1011 10:02:01.006475 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cpk4d/crc-debug-qzlkw"] Oct 11 10:02:01 crc kubenswrapper[5016]: I1011 10:02:01.015111 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cpk4d/crc-debug-qzlkw"] Oct 11 10:02:01 crc kubenswrapper[5016]: I1011 10:02:01.108842 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ae7b0f07-6360-46c1-8bc1-f89c5ac7a486/ceilometer-central-agent/0.log" Oct 11 10:02:01 crc kubenswrapper[5016]: I1011 10:02:01.254760 5016 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_ae7b0f07-6360-46c1-8bc1-f89c5ac7a486/ceilometer-notification-agent/1.log" Oct 11 10:02:01 crc kubenswrapper[5016]: I1011 10:02:01.258560 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ae7b0f07-6360-46c1-8bc1-f89c5ac7a486/ceilometer-notification-agent/0.log" Oct 11 10:02:01 crc kubenswrapper[5016]: I1011 10:02:01.287505 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ae7b0f07-6360-46c1-8bc1-f89c5ac7a486/proxy-httpd/0.log" Oct 11 10:02:01 crc kubenswrapper[5016]: I1011 10:02:01.330456 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ae7b0f07-6360-46c1-8bc1-f89c5ac7a486/sg-core/0.log" Oct 11 10:02:01 crc kubenswrapper[5016]: I1011 10:02:01.477458 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-jgqg6_67bcbe14-936f-4ec2-bcf4-3d3cf876245d/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Oct 11 10:02:01 crc kubenswrapper[5016]: I1011 10:02:01.548220 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kmgf2_c7f1dbc5-8326-4481-ac16-2f6737dd82b2/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Oct 11 10:02:01 crc kubenswrapper[5016]: I1011 10:02:01.802737 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_82c29c3e-ac31-4662-9577-ebed98af9dbb/cinder-api-log/0.log" Oct 11 10:02:01 crc kubenswrapper[5016]: I1011 10:02:01.917457 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_82c29c3e-ac31-4662-9577-ebed98af9dbb/cinder-api/0.log" Oct 11 10:02:01 crc kubenswrapper[5016]: I1011 10:02:01.997627 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_2b53eb06-1432-4059-9705-ffc917af76f7/cinder-backup/2.log" Oct 11 10:02:02 crc kubenswrapper[5016]: I1011 10:02:02.071280 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cpk4d/crc-debug-qzlkw" Oct 11 10:02:02 crc kubenswrapper[5016]: I1011 10:02:02.101126 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_2b53eb06-1432-4059-9705-ffc917af76f7/cinder-backup/1.log" Oct 11 10:02:02 crc kubenswrapper[5016]: I1011 10:02:02.122419 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_2b53eb06-1432-4059-9705-ffc917af76f7/probe/0.log" Oct 11 10:02:02 crc kubenswrapper[5016]: I1011 10:02:02.122519 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9c0452a-e967-4160-ac57-cee4764c66a0-host\") pod \"f9c0452a-e967-4160-ac57-cee4764c66a0\" (UID: \"f9c0452a-e967-4160-ac57-cee4764c66a0\") " Oct 11 10:02:02 crc kubenswrapper[5016]: I1011 10:02:02.122588 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcbqf\" (UniqueName: \"kubernetes.io/projected/f9c0452a-e967-4160-ac57-cee4764c66a0-kube-api-access-tcbqf\") pod \"f9c0452a-e967-4160-ac57-cee4764c66a0\" (UID: \"f9c0452a-e967-4160-ac57-cee4764c66a0\") " Oct 11 10:02:02 crc kubenswrapper[5016]: I1011 10:02:02.122819 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9c0452a-e967-4160-ac57-cee4764c66a0-host" (OuterVolumeSpecName: "host") pod "f9c0452a-e967-4160-ac57-cee4764c66a0" (UID: "f9c0452a-e967-4160-ac57-cee4764c66a0"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 11 10:02:02 crc kubenswrapper[5016]: I1011 10:02:02.138853 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9c0452a-e967-4160-ac57-cee4764c66a0-kube-api-access-tcbqf" (OuterVolumeSpecName: "kube-api-access-tcbqf") pod "f9c0452a-e967-4160-ac57-cee4764c66a0" (UID: "f9c0452a-e967-4160-ac57-cee4764c66a0"). InnerVolumeSpecName "kube-api-access-tcbqf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:02:02 crc kubenswrapper[5016]: I1011 10:02:02.224983 5016 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f9c0452a-e967-4160-ac57-cee4764c66a0-host\") on node \"crc\" DevicePath \"\"" Oct 11 10:02:02 crc kubenswrapper[5016]: I1011 10:02:02.225828 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcbqf\" (UniqueName: \"kubernetes.io/projected/f9c0452a-e967-4160-ac57-cee4764c66a0-kube-api-access-tcbqf\") on node \"crc\" DevicePath \"\"" Oct 11 10:02:02 crc kubenswrapper[5016]: I1011 10:02:02.272533 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_14ae562e-2b57-478f-89cd-8330105eacdf/cinder-scheduler/2.log" Oct 11 10:02:02 crc kubenswrapper[5016]: I1011 10:02:02.332095 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_14ae562e-2b57-478f-89cd-8330105eacdf/cinder-scheduler/1.log" Oct 11 10:02:02 crc kubenswrapper[5016]: I1011 10:02:02.410468 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_14ae562e-2b57-478f-89cd-8330105eacdf/probe/0.log" Oct 11 10:02:02 crc kubenswrapper[5016]: I1011 10:02:02.521761 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_f928618b-f291-4249-a756-0636b1680e66/cinder-volume/2.log" Oct 11 10:02:02 crc kubenswrapper[5016]: I1011 10:02:02.560887 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_f928618b-f291-4249-a756-0636b1680e66/cinder-volume/1.log" Oct 11 10:02:02 crc kubenswrapper[5016]: I1011 10:02:02.659142 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_f928618b-f291-4249-a756-0636b1680e66/probe/0.log" Oct 11 10:02:02 crc kubenswrapper[5016]: I1011 10:02:02.762078 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-lpplc_3618a2af-0e01-4e8e-858b-1096d1e36f7c/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 11 10:02:02 crc kubenswrapper[5016]: I1011 10:02:02.897057 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-4x7q9_ac92dbbc-a41a-4471-b3ac-67bffdc8f342/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 11 10:02:02 crc kubenswrapper[5016]: I1011 10:02:02.968853 5016 scope.go:117] "RemoveContainer" containerID="4fcdf9e53fb3e9aa6d5a5fae6b4b06cc5919c99f02884a2e74875403c7c04fdd" Oct 11 10:02:02 crc kubenswrapper[5016]: I1011 10:02:02.968999 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cpk4d/crc-debug-qzlkw" Oct 11 10:02:03 crc kubenswrapper[5016]: I1011 10:02:03.021626 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55d8975557-4nltc_111abe99-1817-4a1d-9a2e-a5973664c8d2/init/0.log" Oct 11 10:02:03 crc kubenswrapper[5016]: I1011 10:02:03.147063 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9c0452a-e967-4160-ac57-cee4764c66a0" path="/var/lib/kubelet/pods/f9c0452a-e967-4160-ac57-cee4764c66a0/volumes" Oct 11 10:02:03 crc kubenswrapper[5016]: I1011 10:02:03.184983 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55d8975557-4nltc_111abe99-1817-4a1d-9a2e-a5973664c8d2/init/0.log" Oct 11 10:02:03 crc kubenswrapper[5016]: I1011 10:02:03.416186 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ff59b536-c3bd-477d-acdf-a3fdfccff379/glance-httpd/0.log" Oct 11 10:02:03 crc kubenswrapper[5016]: I1011 10:02:03.484810 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ff59b536-c3bd-477d-acdf-a3fdfccff379/glance-log/0.log" Oct 11 10:02:03 crc kubenswrapper[5016]: I1011 10:02:03.494592 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55d8975557-4nltc_111abe99-1817-4a1d-9a2e-a5973664c8d2/dnsmasq-dns/0.log" Oct 11 10:02:03 crc kubenswrapper[5016]: I1011 10:02:03.676996 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_72403325-dc1b-43ab-9d1e-8c255ca43e5f/glance-httpd/0.log" Oct 11 10:02:03 crc kubenswrapper[5016]: I1011 10:02:03.691130 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_72403325-dc1b-43ab-9d1e-8c255ca43e5f/glance-log/0.log" Oct 11 10:02:03 crc kubenswrapper[5016]: I1011 10:02:03.904586 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-65987df486-lvrh6_e3e9db46-849a-4957-a6ff-5a05cb5c9744/horizon/0.log" Oct 11 10:02:04 crc kubenswrapper[5016]: I1011 10:02:04.080843 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizontest-tests-horizontest_fe6aed9c-4fce-4eca-9854-ff7f25b64722/horizontest-tests-horizontest/0.log" Oct 11 10:02:04 crc kubenswrapper[5016]: I1011 10:02:04.230901 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-cbxxv_b801ca03-9cd3-4ac0-9012-2116bd01f414/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Oct 11 10:02:04 crc kubenswrapper[5016]: I1011 10:02:04.437378 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-cc7wn_f552b6e7-65bd-47c2-8e62-068c1f04cb3e/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 11 10:02:04 crc kubenswrapper[5016]: I1011 10:02:04.791783 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29336161-ldq74_26b2ff87-f55d-42c6-8bfc-19b73cfe7582/keystone-cron/0.log" Oct 11 10:02:05 crc kubenswrapper[5016]: I1011 10:02:05.003551 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29336221-cqpgd_fd20c922-e2e3-4d45-a4f7-559030c97500/keystone-cron/0.log" Oct 11 10:02:05 crc kubenswrapper[5016]: I1011 10:02:05.508900 5016 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29336281-hclhq_a43a3340-cfb0-4494-996e-f0e4fa1b8f10/keystone-cron/0.log" Oct 11 10:02:05 crc kubenswrapper[5016]: I1011 10:02:05.658280 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-65987df486-lvrh6_e3e9db46-849a-4957-a6ff-5a05cb5c9744/horizon-log/0.log" Oct 11 10:02:05 crc kubenswrapper[5016]: I1011 10:02:05.695004 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_7ab9562f-f510-4edb-b4a5-5a05687424f8/kube-state-metrics/0.log" Oct 11 10:02:05 crc kubenswrapper[5016]: I1011 10:02:05.954470 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-vnr6s_05ad8521-c18a-40bb-bb25-8a981f9009b4/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Oct 11 10:02:06 crc kubenswrapper[5016]: I1011 10:02:06.206034 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_d897f62c-8566-4445-8061-d77ce1ac2cd5/manila-api-log/0.log" Oct 11 10:02:06 crc kubenswrapper[5016]: I1011 10:02:06.482360 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_d897f62c-8566-4445-8061-d77ce1ac2cd5/manila-api/0.log" Oct 11 10:02:06 crc kubenswrapper[5016]: I1011 10:02:06.575865 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_055e76cd-8fd8-437e-a065-6d64398ce2dd/manila-scheduler/1.log" Oct 11 10:02:06 crc kubenswrapper[5016]: I1011 10:02:06.624466 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_055e76cd-8fd8-437e-a065-6d64398ce2dd/manila-scheduler/0.log" Oct 11 10:02:06 crc kubenswrapper[5016]: I1011 10:02:06.682414 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_055e76cd-8fd8-437e-a065-6d64398ce2dd/probe/0.log" Oct 11 10:02:06 crc kubenswrapper[5016]: I1011 10:02:06.973101 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_99758dd3-4691-42ed-a3eb-aead6855e030/manila-share/1.log" Oct 11 10:02:07 crc kubenswrapper[5016]: I1011 10:02:07.039787 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_99758dd3-4691-42ed-a3eb-aead6855e030/manila-share/0.log" Oct 11 10:02:07 crc kubenswrapper[5016]: I1011 10:02:07.091301 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_99758dd3-4691-42ed-a3eb-aead6855e030/probe/0.log" Oct 11 10:02:07 crc kubenswrapper[5016]: I1011 10:02:07.151310 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-67cbf6496d-vrr6z_5847f047-0407-44fd-9c84-3599fbaac974/keystone-api/0.log" Oct 11 10:02:07 crc kubenswrapper[5016]: I1011 10:02:07.970424 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-tcr9s_862bdbf2-3427-4d44-90c0-fa61d1a9b3ba/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Oct 11 10:02:08 crc kubenswrapper[5016]: I1011 10:02:08.728078 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5c8b64649f-69xkr_1f964a68-6c63-4aba-bf48-6b8cdb1766f2/neutron-httpd/0.log" Oct 11 10:02:09 crc kubenswrapper[5016]: I1011 10:02:09.583636 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5c8b64649f-69xkr_1f964a68-6c63-4aba-bf48-6b8cdb1766f2/neutron-api/0.log" Oct 11 10:02:11 crc kubenswrapper[5016]: I1011 10:02:11.133756 5016 
scope.go:117] "RemoveContainer" containerID="22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0" Oct 11 10:02:11 crc kubenswrapper[5016]: E1011 10:02:11.135031 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 10:02:11 crc kubenswrapper[5016]: I1011 10:02:11.217309 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_9ace03b9-7f45-49ca-ac24-3401d9820d71/nova-api-api/1.log" Oct 11 10:02:12 crc kubenswrapper[5016]: I1011 10:02:12.288555 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_9ace03b9-7f45-49ca-ac24-3401d9820d71/nova-api-api/0.log" Oct 11 10:02:13 crc kubenswrapper[5016]: I1011 10:02:13.508198 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_94518be7-3770-4e6f-8f65-4e955b7bca60/nova-cell0-conductor-conductor/0.log" Oct 11 10:02:13 crc kubenswrapper[5016]: I1011 10:02:13.887776 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_9ace03b9-7f45-49ca-ac24-3401d9820d71/nova-api-log/0.log" Oct 11 10:02:14 crc kubenswrapper[5016]: I1011 10:02:14.164411 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_9ace03b9-7f45-49ca-ac24-3401d9820d71/nova-api-log/1.log" Oct 11 10:02:14 crc kubenswrapper[5016]: I1011 10:02:14.182362 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_83362aa1-9b92-4fb8-8ade-5ba3476c53d0/nova-cell1-conductor-conductor/0.log" Oct 11 10:02:14 crc kubenswrapper[5016]: I1011 10:02:14.516875 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-hvl7p_3920c74b-a214-4f41-975a-5ec0db3c3212/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Oct 11 10:02:14 crc kubenswrapper[5016]: I1011 10:02:14.607098 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_3d3b41fc-1445-4c10-b8fb-62007ac44a8d/nova-cell1-novncproxy-novncproxy/0.log" Oct 11 10:02:14 crc kubenswrapper[5016]: I1011 10:02:14.802793 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e182b619-d220-435a-80ed-74611b49f193/nova-metadata-log/0.log" Oct 11 10:02:15 crc kubenswrapper[5016]: I1011 10:02:15.440966 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9d275b2c-beec-4696-a60f-6a31245767bb/mysql-bootstrap/0.log" Oct 11 10:02:15 crc kubenswrapper[5016]: I1011 10:02:15.667431 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9d275b2c-beec-4696-a60f-6a31245767bb/mysql-bootstrap/0.log" Oct 11 10:02:15 crc kubenswrapper[5016]: I1011 10:02:15.680012 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_b11896aa-37c5-4e47-9d73-73ca143b75b1/nova-scheduler-scheduler/0.log" Oct 11 10:02:15 crc kubenswrapper[5016]: I1011 10:02:15.896568 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9d275b2c-beec-4696-a60f-6a31245767bb/galera/0.log" Oct 11 10:02:16 crc 
kubenswrapper[5016]: I1011 10:02:16.147346 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_eee46d88-d3cf-428a-9808-f9bef1f292b7/mysql-bootstrap/0.log" Oct 11 10:02:16 crc kubenswrapper[5016]: I1011 10:02:16.403961 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_eee46d88-d3cf-428a-9808-f9bef1f292b7/mysql-bootstrap/0.log" Oct 11 10:02:16 crc kubenswrapper[5016]: I1011 10:02:16.478018 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_eee46d88-d3cf-428a-9808-f9bef1f292b7/galera/0.log" Oct 11 10:02:16 crc kubenswrapper[5016]: I1011 10:02:16.720163 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_51360c57-7d92-4171-a855-69ba399ac0b7/openstackclient/0.log" Oct 11 10:02:16 crc kubenswrapper[5016]: I1011 10:02:16.771047 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_6811d5d2-c174-41f6-a397-0bc4133297e9/memcached/0.log" Oct 11 10:02:17 crc kubenswrapper[5016]: I1011 10:02:17.008337 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-db7s5_69f3f361-bd63-4b18-afd7-3c64169af0a8/ovn-controller/0.log" Oct 11 10:02:17 crc kubenswrapper[5016]: I1011 10:02:17.133015 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-lc7pq_82c41981-5d91-478e-99ea-351277ca347e/openstack-network-exporter/0.log" Oct 11 10:02:17 crc kubenswrapper[5016]: I1011 10:02:17.254612 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5gxpx"] Oct 11 10:02:17 crc kubenswrapper[5016]: E1011 10:02:17.255298 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9c0452a-e967-4160-ac57-cee4764c66a0" containerName="container-00" Oct 11 10:02:17 crc kubenswrapper[5016]: I1011 10:02:17.255327 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9c0452a-e967-4160-ac57-cee4764c66a0" containerName="container-00" Oct 11 10:02:17 crc kubenswrapper[5016]: I1011 10:02:17.255579 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9c0452a-e967-4160-ac57-cee4764c66a0" containerName="container-00" Oct 11 10:02:17 crc kubenswrapper[5016]: I1011 10:02:17.257370 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5gxpx" Oct 11 10:02:17 crc kubenswrapper[5016]: I1011 10:02:17.315740 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5gxpx"] Oct 11 10:02:17 crc kubenswrapper[5016]: I1011 10:02:17.356615 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-w5nkt_38a1ec3b-0cfb-4fdf-bcba-a434cf65a726/ovsdb-server-init/0.log" Oct 11 10:02:17 crc kubenswrapper[5016]: I1011 10:02:17.410199 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3257007-998f-4f73-b187-d66186d30926-utilities\") pod \"redhat-operators-5gxpx\" (UID: \"a3257007-998f-4f73-b187-d66186d30926\") " pod="openshift-marketplace/redhat-operators-5gxpx" Oct 11 10:02:17 crc kubenswrapper[5016]: I1011 10:02:17.410290 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbr2k\" (UniqueName: \"kubernetes.io/projected/a3257007-998f-4f73-b187-d66186d30926-kube-api-access-dbr2k\") pod \"redhat-operators-5gxpx\" (UID: \"a3257007-998f-4f73-b187-d66186d30926\") " pod="openshift-marketplace/redhat-operators-5gxpx" Oct 11 10:02:17 crc kubenswrapper[5016]: I1011 10:02:17.410420 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3257007-998f-4f73-b187-d66186d30926-catalog-content\") pod \"redhat-operators-5gxpx\" (UID: \"a3257007-998f-4f73-b187-d66186d30926\") " pod="openshift-marketplace/redhat-operators-5gxpx" Oct 11 10:02:17 crc kubenswrapper[5016]: I1011 10:02:17.512585 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3257007-998f-4f73-b187-d66186d30926-utilities\") pod \"redhat-operators-5gxpx\" (UID: \"a3257007-998f-4f73-b187-d66186d30926\") " pod="openshift-marketplace/redhat-operators-5gxpx" Oct 11 10:02:17 crc kubenswrapper[5016]: I1011 10:02:17.512678 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbr2k\" (UniqueName: \"kubernetes.io/projected/a3257007-998f-4f73-b187-d66186d30926-kube-api-access-dbr2k\") pod \"redhat-operators-5gxpx\" (UID: \"a3257007-998f-4f73-b187-d66186d30926\") " pod="openshift-marketplace/redhat-operators-5gxpx" Oct 11 10:02:17 crc kubenswrapper[5016]: I1011 10:02:17.512799 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3257007-998f-4f73-b187-d66186d30926-catalog-content\") pod \"redhat-operators-5gxpx\" (UID: \"a3257007-998f-4f73-b187-d66186d30926\") " pod="openshift-marketplace/redhat-operators-5gxpx" Oct 11 10:02:17 crc kubenswrapper[5016]: I1011 10:02:17.513426 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3257007-998f-4f73-b187-d66186d30926-catalog-content\") pod \"redhat-operators-5gxpx\" (UID: \"a3257007-998f-4f73-b187-d66186d30926\") " pod="openshift-marketplace/redhat-operators-5gxpx" Oct 11 10:02:17 crc kubenswrapper[5016]: I1011 10:02:17.513702 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3257007-998f-4f73-b187-d66186d30926-utilities\") pod \"redhat-operators-5gxpx\" (UID: 
\"a3257007-998f-4f73-b187-d66186d30926\") " pod="openshift-marketplace/redhat-operators-5gxpx" Oct 11 10:02:17 crc kubenswrapper[5016]: I1011 10:02:17.537221 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbr2k\" (UniqueName: \"kubernetes.io/projected/a3257007-998f-4f73-b187-d66186d30926-kube-api-access-dbr2k\") pod \"redhat-operators-5gxpx\" (UID: \"a3257007-998f-4f73-b187-d66186d30926\") " pod="openshift-marketplace/redhat-operators-5gxpx" Oct 11 10:02:17 crc kubenswrapper[5016]: I1011 10:02:17.592034 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5gxpx" Oct 11 10:02:17 crc kubenswrapper[5016]: I1011 10:02:17.639016 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-w5nkt_38a1ec3b-0cfb-4fdf-bcba-a434cf65a726/ovsdb-server-init/0.log" Oct 11 10:02:17 crc kubenswrapper[5016]: I1011 10:02:17.668790 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-w5nkt_38a1ec3b-0cfb-4fdf-bcba-a434cf65a726/ovs-vswitchd/0.log" Oct 11 10:02:17 crc kubenswrapper[5016]: I1011 10:02:17.711337 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-w5nkt_38a1ec3b-0cfb-4fdf-bcba-a434cf65a726/ovsdb-server/0.log" Oct 11 10:02:18 crc kubenswrapper[5016]: I1011 10:02:18.060444 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-vpxld_5f8096c1-6a47-4cd2-828a-4d091b6c7f5b/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Oct 11 10:02:18 crc kubenswrapper[5016]: I1011 10:02:18.164346 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c/openstack-network-exporter/0.log" Oct 11 10:02:18 crc kubenswrapper[5016]: I1011 10:02:18.297150 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5gxpx"] Oct 11 10:02:18 crc kubenswrapper[5016]: I1011 10:02:18.323265 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d7df8570-ef3c-4ad2-bcee-a6d87fb9cd7c/ovn-northd/0.log" Oct 11 10:02:18 crc kubenswrapper[5016]: I1011 10:02:18.419071 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a0cc21a8-3016-4d64-9264-c153cf77e9a6/openstack-network-exporter/0.log" Oct 11 10:02:18 crc kubenswrapper[5016]: I1011 10:02:18.437979 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e182b619-d220-435a-80ed-74611b49f193/nova-metadata-metadata/0.log" Oct 11 10:02:18 crc kubenswrapper[5016]: I1011 10:02:18.500674 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a0cc21a8-3016-4d64-9264-c153cf77e9a6/ovsdbserver-nb/0.log" Oct 11 10:02:18 crc kubenswrapper[5016]: I1011 10:02:18.826701 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_775dfe20-0fff-42b7-863a-76e8deb52526/ovsdbserver-sb/0.log" Oct 11 10:02:18 crc kubenswrapper[5016]: I1011 10:02:18.852451 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_775dfe20-0fff-42b7-863a-76e8deb52526/openstack-network-exporter/0.log" Oct 11 10:02:19 crc kubenswrapper[5016]: I1011 10:02:19.132466 5016 generic.go:334] "Generic (PLEG): container finished" podID="a3257007-998f-4f73-b187-d66186d30926" containerID="c9304004cbdb2342b3d520b9bdd9eaf5cb9169e32c42df6b62866e58d193cd02" 
exitCode=0 Oct 11 10:02:19 crc kubenswrapper[5016]: I1011 10:02:19.132536 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5gxpx" event={"ID":"a3257007-998f-4f73-b187-d66186d30926","Type":"ContainerDied","Data":"c9304004cbdb2342b3d520b9bdd9eaf5cb9169e32c42df6b62866e58d193cd02"} Oct 11 10:02:19 crc kubenswrapper[5016]: I1011 10:02:19.132579 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5gxpx" event={"ID":"a3257007-998f-4f73-b187-d66186d30926","Type":"ContainerStarted","Data":"7460ac414eaf496e07f2c53993bfab03ea26cb33d82d21c09d94f9a2374f0ff0"} Oct 11 10:02:19 crc kubenswrapper[5016]: I1011 10:02:19.135396 5016 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 10:02:19 crc kubenswrapper[5016]: I1011 10:02:19.153825 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8ab2c11e-c631-4f54-8f27-51c6fed6f548/setup-container/0.log" Oct 11 10:02:19 crc kubenswrapper[5016]: I1011 10:02:19.250693 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6695bc58f4-lkxqb_c9b80c42-4ba9-4f2d-96d4-b17c97c1b272/placement-api/0.log" Oct 11 10:02:19 crc kubenswrapper[5016]: I1011 10:02:19.312224 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8ab2c11e-c631-4f54-8f27-51c6fed6f548/setup-container/0.log" Oct 11 10:02:19 crc kubenswrapper[5016]: I1011 10:02:19.399794 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8ab2c11e-c631-4f54-8f27-51c6fed6f548/rabbitmq/0.log" Oct 11 10:02:19 crc kubenswrapper[5016]: I1011 10:02:19.527714 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1d694fc7-1470-43de-a417-fe670e0bace9/setup-container/0.log" Oct 11 10:02:19 crc kubenswrapper[5016]: I1011 10:02:19.537337 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6695bc58f4-lkxqb_c9b80c42-4ba9-4f2d-96d4-b17c97c1b272/placement-log/0.log" Oct 11 10:02:19 crc kubenswrapper[5016]: I1011 10:02:19.751721 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1d694fc7-1470-43de-a417-fe670e0bace9/setup-container/0.log" Oct 11 10:02:19 crc kubenswrapper[5016]: I1011 10:02:19.786009 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_1d694fc7-1470-43de-a417-fe670e0bace9/rabbitmq/0.log" Oct 11 10:02:19 crc kubenswrapper[5016]: I1011 10:02:19.797213 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-6bwl6_f8062331-8483-42c3-a3a9-7bc28a3b2d44/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 11 10:02:20 crc kubenswrapper[5016]: I1011 10:02:20.046559 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-9xldc_aba973bc-fbe0-437c-a640-41a201be1735/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Oct 11 10:02:20 crc kubenswrapper[5016]: I1011 10:02:20.144571 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5gxpx" event={"ID":"a3257007-998f-4f73-b187-d66186d30926","Type":"ContainerStarted","Data":"289c9bb7a0b4be81d45ac7a5993bbdce45d58de7f70b484b4fa407afab3b1871"} Oct 11 10:02:20 crc kubenswrapper[5016]: I1011 10:02:20.155838 5016 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-wc526_9c4708ad-365b-46c1-a1ad-5945ff855420/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 11 10:02:20 crc kubenswrapper[5016]: I1011 10:02:20.190588 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-sv6h6_40e86602-48de-424b-a248-ce46f60b770d/ssh-known-hosts-edpm-deployment/0.log" Oct 11 10:02:20 crc kubenswrapper[5016]: I1011 10:02:20.449178 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s01-single-test_fea8725f-5064-485b-8c4a-7992b2800394/tempest-tests-tempest-tests-runner/0.log" Oct 11 10:02:20 crc kubenswrapper[5016]: I1011 10:02:20.481906 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s00-full_19930010-7a7e-4c76-a81e-85e049ff1da4/tempest-tests-tempest-tests-runner/0.log" Oct 11 10:02:20 crc kubenswrapper[5016]: I1011 10:02:20.606353 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-ansibletest-ansibletest-ansibletest_916a80a7-4fe0-4fd5-b04e-2f064dd291b3/test-operator-logs-container/0.log" Oct 11 10:02:20 crc kubenswrapper[5016]: I1011 10:02:20.686850 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-horizontest-horizontest-tests-horizontest_6d621294-76bf-47b2-a87c-07243414066e/test-operator-logs-container/0.log" Oct 11 10:02:20 crc kubenswrapper[5016]: I1011 10:02:20.770900 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_413f979d-3cc5-4ecf-bbe0-6464cd03ecde/test-operator-logs-container/0.log" Oct 11 10:02:20 crc kubenswrapper[5016]: I1011 10:02:20.895501 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tobiko-tobiko-tests-tobiko_c689123d-8a9f-44d5-836c-5ed1933c39de/test-operator-logs-container/0.log" Oct 11 10:02:21 crc kubenswrapper[5016]: I1011 10:02:21.115597 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tobiko-tests-tobiko-s00-podified-functional_0048ddc1-30d3-4acd-8fb4-e84a2eeefcff/tobiko-tests-tobiko/0.log" Oct 11 10:02:21 crc kubenswrapper[5016]: I1011 10:02:21.212910 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tobiko-tests-tobiko-s01-sanity_27f9f1e0-94f0-4652-a42a-26cfb348c583/tobiko-tests-tobiko/0.log" Oct 11 10:02:21 crc kubenswrapper[5016]: I1011 10:02:21.350536 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-vktkx_87dcedab-e03e-4507-adc3-90a88862ca5e/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 11 10:02:25 crc kubenswrapper[5016]: I1011 10:02:25.133952 5016 scope.go:117] "RemoveContainer" containerID="22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0" Oct 11 10:02:25 crc kubenswrapper[5016]: E1011 10:02:25.134441 5016 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Oct 11 10:02:25 crc kubenswrapper[5016]: E1011 10:02:25.134842 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 10:02:26 crc kubenswrapper[5016]: I1011 10:02:26.200722 5016 generic.go:334] "Generic (PLEG): container finished" podID="a3257007-998f-4f73-b187-d66186d30926" containerID="289c9bb7a0b4be81d45ac7a5993bbdce45d58de7f70b484b4fa407afab3b1871" exitCode=0 Oct 11 10:02:26 crc kubenswrapper[5016]: I1011 10:02:26.200939 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5gxpx" event={"ID":"a3257007-998f-4f73-b187-d66186d30926","Type":"ContainerDied","Data":"289c9bb7a0b4be81d45ac7a5993bbdce45d58de7f70b484b4fa407afab3b1871"} Oct 11 10:02:27 crc kubenswrapper[5016]: I1011 10:02:27.215888 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5gxpx" event={"ID":"a3257007-998f-4f73-b187-d66186d30926","Type":"ContainerStarted","Data":"2b92414a6625ba463cf28ab31d6c95f0079de40dc9d175202f9408c449b1f617"} Oct 11 10:02:27 crc kubenswrapper[5016]: I1011 10:02:27.243289 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5gxpx" podStartSLOduration=2.693873818 podStartE2EDuration="10.243269701s" podCreationTimestamp="2025-10-11 10:02:17 +0000 UTC" firstStartedPulling="2025-10-11 10:02:19.135134951 +0000 UTC m=+8527.035590897" lastFinishedPulling="2025-10-11 10:02:26.684530834 +0000 UTC m=+8534.584986780" observedRunningTime="2025-10-11 10:02:27.238179556 +0000 UTC m=+8535.138635502" watchObservedRunningTime="2025-10-11 10:02:27.243269701 +0000 UTC m=+8535.143725647" Oct 11 10:02:27 crc kubenswrapper[5016]: I1011 10:02:27.592273 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5gxpx" Oct 11 10:02:27 crc kubenswrapper[5016]: I1011 10:02:27.592345 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5gxpx" Oct 11 10:02:28 crc kubenswrapper[5016]: I1011 10:02:28.654913 5016 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5gxpx" podUID="a3257007-998f-4f73-b187-d66186d30926" containerName="registry-server" probeResult="failure" output=< Oct 11 10:02:28 crc kubenswrapper[5016]: timeout: failed to connect service ":50051" within 1s Oct 11 10:02:28 crc kubenswrapper[5016]: > Oct 11 10:02:37 crc kubenswrapper[5016]: I1011 10:02:37.134148 5016 scope.go:117] "RemoveContainer" containerID="22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0" Oct 11 10:02:37 crc kubenswrapper[5016]: E1011 10:02:37.135383 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 10:02:37 crc kubenswrapper[5016]: I1011 10:02:37.652213 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5gxpx" Oct 11 10:02:37 crc kubenswrapper[5016]: I1011 10:02:37.707014 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-5gxpx" Oct 11 10:02:37 crc kubenswrapper[5016]: I1011 10:02:37.898757 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5gxpx"] Oct 11 10:02:39 crc kubenswrapper[5016]: I1011 10:02:39.353387 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5gxpx" podUID="a3257007-998f-4f73-b187-d66186d30926" containerName="registry-server" containerID="cri-o://2b92414a6625ba463cf28ab31d6c95f0079de40dc9d175202f9408c449b1f617" gracePeriod=2 Oct 11 10:02:39 crc kubenswrapper[5016]: I1011 10:02:39.962843 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5gxpx" Oct 11 10:02:39 crc kubenswrapper[5016]: I1011 10:02:39.986573 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3257007-998f-4f73-b187-d66186d30926-utilities\") pod \"a3257007-998f-4f73-b187-d66186d30926\" (UID: \"a3257007-998f-4f73-b187-d66186d30926\") " Oct 11 10:02:39 crc kubenswrapper[5016]: I1011 10:02:39.987131 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3257007-998f-4f73-b187-d66186d30926-catalog-content\") pod \"a3257007-998f-4f73-b187-d66186d30926\" (UID: \"a3257007-998f-4f73-b187-d66186d30926\") " Oct 11 10:02:39 crc kubenswrapper[5016]: I1011 10:02:39.987583 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3257007-998f-4f73-b187-d66186d30926-utilities" (OuterVolumeSpecName: "utilities") pod "a3257007-998f-4f73-b187-d66186d30926" (UID: "a3257007-998f-4f73-b187-d66186d30926"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:02:39 crc kubenswrapper[5016]: I1011 10:02:39.987250 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbr2k\" (UniqueName: \"kubernetes.io/projected/a3257007-998f-4f73-b187-d66186d30926-kube-api-access-dbr2k\") pod \"a3257007-998f-4f73-b187-d66186d30926\" (UID: \"a3257007-998f-4f73-b187-d66186d30926\") " Oct 11 10:02:39 crc kubenswrapper[5016]: I1011 10:02:39.989396 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3257007-998f-4f73-b187-d66186d30926-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 10:02:40 crc kubenswrapper[5016]: I1011 10:02:40.002972 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3257007-998f-4f73-b187-d66186d30926-kube-api-access-dbr2k" (OuterVolumeSpecName: "kube-api-access-dbr2k") pod "a3257007-998f-4f73-b187-d66186d30926" (UID: "a3257007-998f-4f73-b187-d66186d30926"). InnerVolumeSpecName "kube-api-access-dbr2k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:02:40 crc kubenswrapper[5016]: I1011 10:02:40.094946 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbr2k\" (UniqueName: \"kubernetes.io/projected/a3257007-998f-4f73-b187-d66186d30926-kube-api-access-dbr2k\") on node \"crc\" DevicePath \"\"" Oct 11 10:02:40 crc kubenswrapper[5016]: I1011 10:02:40.122418 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3257007-998f-4f73-b187-d66186d30926-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a3257007-998f-4f73-b187-d66186d30926" (UID: "a3257007-998f-4f73-b187-d66186d30926"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:02:40 crc kubenswrapper[5016]: I1011 10:02:40.199514 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3257007-998f-4f73-b187-d66186d30926-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 10:02:40 crc kubenswrapper[5016]: I1011 10:02:40.384811 5016 generic.go:334] "Generic (PLEG): container finished" podID="a3257007-998f-4f73-b187-d66186d30926" containerID="2b92414a6625ba463cf28ab31d6c95f0079de40dc9d175202f9408c449b1f617" exitCode=0 Oct 11 10:02:40 crc kubenswrapper[5016]: I1011 10:02:40.384901 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5gxpx" event={"ID":"a3257007-998f-4f73-b187-d66186d30926","Type":"ContainerDied","Data":"2b92414a6625ba463cf28ab31d6c95f0079de40dc9d175202f9408c449b1f617"} Oct 11 10:02:40 crc kubenswrapper[5016]: I1011 10:02:40.384983 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5gxpx" event={"ID":"a3257007-998f-4f73-b187-d66186d30926","Type":"ContainerDied","Data":"7460ac414eaf496e07f2c53993bfab03ea26cb33d82d21c09d94f9a2374f0ff0"} Oct 11 10:02:40 crc kubenswrapper[5016]: I1011 10:02:40.385022 5016 scope.go:117] "RemoveContainer" containerID="2b92414a6625ba463cf28ab31d6c95f0079de40dc9d175202f9408c449b1f617" Oct 11 10:02:40 crc kubenswrapper[5016]: I1011 10:02:40.385052 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5gxpx" Oct 11 10:02:40 crc kubenswrapper[5016]: I1011 10:02:40.418963 5016 scope.go:117] "RemoveContainer" containerID="289c9bb7a0b4be81d45ac7a5993bbdce45d58de7f70b484b4fa407afab3b1871" Oct 11 10:02:40 crc kubenswrapper[5016]: I1011 10:02:40.445798 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5gxpx"] Oct 11 10:02:40 crc kubenswrapper[5016]: I1011 10:02:40.460536 5016 scope.go:117] "RemoveContainer" containerID="c9304004cbdb2342b3d520b9bdd9eaf5cb9169e32c42df6b62866e58d193cd02" Oct 11 10:02:40 crc kubenswrapper[5016]: I1011 10:02:40.460921 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5gxpx"] Oct 11 10:02:40 crc kubenswrapper[5016]: I1011 10:02:40.525596 5016 scope.go:117] "RemoveContainer" containerID="2b92414a6625ba463cf28ab31d6c95f0079de40dc9d175202f9408c449b1f617" Oct 11 10:02:40 crc kubenswrapper[5016]: E1011 10:02:40.529439 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b92414a6625ba463cf28ab31d6c95f0079de40dc9d175202f9408c449b1f617\": container with ID starting with 2b92414a6625ba463cf28ab31d6c95f0079de40dc9d175202f9408c449b1f617 not found: ID does not exist" containerID="2b92414a6625ba463cf28ab31d6c95f0079de40dc9d175202f9408c449b1f617" Oct 11 10:02:40 crc kubenswrapper[5016]: I1011 10:02:40.529506 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b92414a6625ba463cf28ab31d6c95f0079de40dc9d175202f9408c449b1f617"} err="failed to get container status \"2b92414a6625ba463cf28ab31d6c95f0079de40dc9d175202f9408c449b1f617\": rpc error: code = NotFound desc = could not find container \"2b92414a6625ba463cf28ab31d6c95f0079de40dc9d175202f9408c449b1f617\": container with ID starting with 2b92414a6625ba463cf28ab31d6c95f0079de40dc9d175202f9408c449b1f617 not found: ID does not exist" Oct 11 10:02:40 crc kubenswrapper[5016]: I1011 10:02:40.529550 5016 scope.go:117] "RemoveContainer" containerID="289c9bb7a0b4be81d45ac7a5993bbdce45d58de7f70b484b4fa407afab3b1871" Oct 11 10:02:40 crc kubenswrapper[5016]: E1011 10:02:40.530199 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"289c9bb7a0b4be81d45ac7a5993bbdce45d58de7f70b484b4fa407afab3b1871\": container with ID starting with 289c9bb7a0b4be81d45ac7a5993bbdce45d58de7f70b484b4fa407afab3b1871 not found: ID does not exist" containerID="289c9bb7a0b4be81d45ac7a5993bbdce45d58de7f70b484b4fa407afab3b1871" Oct 11 10:02:40 crc kubenswrapper[5016]: I1011 10:02:40.530260 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"289c9bb7a0b4be81d45ac7a5993bbdce45d58de7f70b484b4fa407afab3b1871"} err="failed to get container status \"289c9bb7a0b4be81d45ac7a5993bbdce45d58de7f70b484b4fa407afab3b1871\": rpc error: code = NotFound desc = could not find container \"289c9bb7a0b4be81d45ac7a5993bbdce45d58de7f70b484b4fa407afab3b1871\": container with ID starting with 289c9bb7a0b4be81d45ac7a5993bbdce45d58de7f70b484b4fa407afab3b1871 not found: ID does not exist" Oct 11 10:02:40 crc kubenswrapper[5016]: I1011 10:02:40.530303 5016 scope.go:117] "RemoveContainer" containerID="c9304004cbdb2342b3d520b9bdd9eaf5cb9169e32c42df6b62866e58d193cd02" Oct 11 10:02:40 crc kubenswrapper[5016]: E1011 10:02:40.530874 5016 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"c9304004cbdb2342b3d520b9bdd9eaf5cb9169e32c42df6b62866e58d193cd02\": container with ID starting with c9304004cbdb2342b3d520b9bdd9eaf5cb9169e32c42df6b62866e58d193cd02 not found: ID does not exist" containerID="c9304004cbdb2342b3d520b9bdd9eaf5cb9169e32c42df6b62866e58d193cd02" Oct 11 10:02:40 crc kubenswrapper[5016]: I1011 10:02:40.530912 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9304004cbdb2342b3d520b9bdd9eaf5cb9169e32c42df6b62866e58d193cd02"} err="failed to get container status \"c9304004cbdb2342b3d520b9bdd9eaf5cb9169e32c42df6b62866e58d193cd02\": rpc error: code = NotFound desc = could not find container \"c9304004cbdb2342b3d520b9bdd9eaf5cb9169e32c42df6b62866e58d193cd02\": container with ID starting with c9304004cbdb2342b3d520b9bdd9eaf5cb9169e32c42df6b62866e58d193cd02 not found: ID does not exist" Oct 11 10:02:41 crc kubenswrapper[5016]: I1011 10:02:41.172573 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3257007-998f-4f73-b187-d66186d30926" path="/var/lib/kubelet/pods/a3257007-998f-4f73-b187-d66186d30926/volumes" Oct 11 10:02:46 crc kubenswrapper[5016]: I1011 10:02:46.780247 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-658bdf4b74-5k87v_ce574485-559e-47ce-82d5-df9228ee47e9/manager/0.log" Oct 11 10:02:46 crc kubenswrapper[5016]: I1011 10:02:46.821612 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-658bdf4b74-5k87v_ce574485-559e-47ce-82d5-df9228ee47e9/kube-rbac-proxy/0.log" Oct 11 10:02:47 crc kubenswrapper[5016]: I1011 10:02:47.032902 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4_b22318de-e3c9-4d58-a758-443f2a6f4c9f/util/0.log" Oct 11 10:02:47 crc kubenswrapper[5016]: I1011 10:02:47.316404 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4_b22318de-e3c9-4d58-a758-443f2a6f4c9f/pull/0.log" Oct 11 10:02:47 crc kubenswrapper[5016]: I1011 10:02:47.329449 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4_b22318de-e3c9-4d58-a758-443f2a6f4c9f/pull/0.log" Oct 11 10:02:47 crc kubenswrapper[5016]: I1011 10:02:47.371090 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4_b22318de-e3c9-4d58-a758-443f2a6f4c9f/util/0.log" Oct 11 10:02:47 crc kubenswrapper[5016]: I1011 10:02:47.584282 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4_b22318de-e3c9-4d58-a758-443f2a6f4c9f/pull/0.log" Oct 11 10:02:47 crc kubenswrapper[5016]: I1011 10:02:47.642493 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4_b22318de-e3c9-4d58-a758-443f2a6f4c9f/util/0.log" Oct 11 10:02:47 crc kubenswrapper[5016]: I1011 10:02:47.674815 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bxrnf4_b22318de-e3c9-4d58-a758-443f2a6f4c9f/extract/0.log" Oct 11 10:02:47 crc 
kubenswrapper[5016]: I1011 10:02:47.845136 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7b7fb68549-g5rms_642e4a4e-69f3-4bb7-aa0d-55bb7809203a/kube-rbac-proxy/0.log" Oct 11 10:02:47 crc kubenswrapper[5016]: I1011 10:02:47.936420 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7b7fb68549-g5rms_642e4a4e-69f3-4bb7-aa0d-55bb7809203a/manager/0.log" Oct 11 10:02:47 crc kubenswrapper[5016]: I1011 10:02:47.997110 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-85d5d9dd78-8cjvz_6347c5af-b7ed-4498-be85-e9a818a0e0d4/kube-rbac-proxy/0.log" Oct 11 10:02:48 crc kubenswrapper[5016]: I1011 10:02:48.092897 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-85d5d9dd78-8cjvz_6347c5af-b7ed-4498-be85-e9a818a0e0d4/manager/0.log" Oct 11 10:02:48 crc kubenswrapper[5016]: I1011 10:02:48.243931 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-84b9b84486-jvtkl_683143f6-ebe0-47fb-b6c3-96680e673ff7/kube-rbac-proxy/0.log" Oct 11 10:02:48 crc kubenswrapper[5016]: I1011 10:02:48.333530 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-84b9b84486-jvtkl_683143f6-ebe0-47fb-b6c3-96680e673ff7/manager/0.log" Oct 11 10:02:48 crc kubenswrapper[5016]: I1011 10:02:48.516524 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-858f76bbdd-zbqbd_53845a5f-9403-4fc4-80b0-56a724bf5405/kube-rbac-proxy/0.log" Oct 11 10:02:48 crc kubenswrapper[5016]: I1011 10:02:48.580532 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-858f76bbdd-zbqbd_53845a5f-9403-4fc4-80b0-56a724bf5405/manager/0.log" Oct 11 10:02:48 crc kubenswrapper[5016]: I1011 10:02:48.720120 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7ffbcb7588-hdcq6_1877ae13-d74b-4a7c-9f26-10757d256474/kube-rbac-proxy/0.log" Oct 11 10:02:48 crc kubenswrapper[5016]: I1011 10:02:48.814343 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7ffbcb7588-hdcq6_1877ae13-d74b-4a7c-9f26-10757d256474/manager/0.log" Oct 11 10:02:48 crc kubenswrapper[5016]: I1011 10:02:48.862959 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-656bcbd775-q4rcq_f36d2ba0-eaa2-48d4-8367-3b718a86b54a/kube-rbac-proxy/0.log" Oct 11 10:02:49 crc kubenswrapper[5016]: I1011 10:02:49.103532 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-9c5c78d49-d5p22_b53501aa-b72c-457d-ad20-1f57abd81645/kube-rbac-proxy/0.log" Oct 11 10:02:49 crc kubenswrapper[5016]: I1011 10:02:49.173090 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-656bcbd775-q4rcq_f36d2ba0-eaa2-48d4-8367-3b718a86b54a/manager/0.log" Oct 11 10:02:49 crc kubenswrapper[5016]: I1011 10:02:49.216902 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-9c5c78d49-d5p22_b53501aa-b72c-457d-ad20-1f57abd81645/manager/0.log" Oct 
11 10:02:49 crc kubenswrapper[5016]: I1011 10:02:49.433576 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55b6b7c7b8-lcm96_e2954d6f-57ba-49c4-ac53-7aa4600cf1b2/kube-rbac-proxy/0.log" Oct 11 10:02:49 crc kubenswrapper[5016]: I1011 10:02:49.491501 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55b6b7c7b8-lcm96_e2954d6f-57ba-49c4-ac53-7aa4600cf1b2/manager/0.log" Oct 11 10:02:49 crc kubenswrapper[5016]: I1011 10:02:49.667412 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5f67fbc655-4t8kd_28386d6d-81d2-4b50-8e61-f82bffe1cec5/kube-rbac-proxy/0.log" Oct 11 10:02:49 crc kubenswrapper[5016]: I1011 10:02:49.777943 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-f9fb45f8f-txvhv_c146e268-2093-47e5-aaa1-824de389d97a/kube-rbac-proxy/0.log" Oct 11 10:02:49 crc kubenswrapper[5016]: I1011 10:02:49.824914 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5f67fbc655-4t8kd_28386d6d-81d2-4b50-8e61-f82bffe1cec5/manager/0.log" Oct 11 10:02:49 crc kubenswrapper[5016]: I1011 10:02:49.952975 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-f9fb45f8f-txvhv_c146e268-2093-47e5-aaa1-824de389d97a/manager/0.log" Oct 11 10:02:50 crc kubenswrapper[5016]: I1011 10:02:50.102961 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-79d585cb66-27vlc_3da7b7ce-1358-4f29-851c-a1a95f1d5a6f/kube-rbac-proxy/0.log" Oct 11 10:02:50 crc kubenswrapper[5016]: I1011 10:02:50.134093 5016 scope.go:117] "RemoveContainer" containerID="22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0" Oct 11 10:02:50 crc kubenswrapper[5016]: E1011 10:02:50.134557 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 10:02:50 crc kubenswrapper[5016]: I1011 10:02:50.136791 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-79d585cb66-27vlc_3da7b7ce-1358-4f29-851c-a1a95f1d5a6f/manager/0.log" Oct 11 10:02:50 crc kubenswrapper[5016]: I1011 10:02:50.371664 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5df598886f-2cwzg_81fc8139-b3e6-4aa4-a2a3-3488428fdd67/kube-rbac-proxy/0.log" Oct 11 10:02:50 crc kubenswrapper[5016]: I1011 10:02:50.371869 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69fdcfc5f5-2mll5_a4fb3be7-4bcb-4f6a-a1e8-619b47cbc411/kube-rbac-proxy/0.log" Oct 11 10:02:50 crc kubenswrapper[5016]: I1011 10:02:50.460064 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5df598886f-2cwzg_81fc8139-b3e6-4aa4-a2a3-3488428fdd67/manager/0.log" Oct 11 10:02:50 crc kubenswrapper[5016]: I1011 10:02:50.535022 5016 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69fdcfc5f5-2mll5_a4fb3be7-4bcb-4f6a-a1e8-619b47cbc411/manager/0.log" Oct 11 10:02:50 crc kubenswrapper[5016]: I1011 10:02:50.664053 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5956dffb7br27k2_28800e92-f7fd-4764-ab21-b7ea8bd13c48/kube-rbac-proxy/0.log" Oct 11 10:02:50 crc kubenswrapper[5016]: I1011 10:02:50.751386 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5956dffb7br27k2_28800e92-f7fd-4764-ab21-b7ea8bd13c48/manager/0.log" Oct 11 10:02:50 crc kubenswrapper[5016]: I1011 10:02:50.788408 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5b95c8954b-jgt6v_1415832f-40ef-48e7-ab66-b556c5110bd0/kube-rbac-proxy/0.log" Oct 11 10:02:51 crc kubenswrapper[5016]: I1011 10:02:51.067674 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6dc495b7-bj7dv_7ccff651-e44c-47d6-85fc-7a34a992c1f5/kube-rbac-proxy/0.log" Oct 11 10:02:51 crc kubenswrapper[5016]: I1011 10:02:51.070716 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6dc495b7-bj7dv_7ccff651-e44c-47d6-85fc-7a34a992c1f5/operator/0.log" Oct 11 10:02:51 crc kubenswrapper[5016]: I1011 10:02:51.469985 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-79df5fb58c-pgsds_3ad1a6fa-ff96-40e9-ba42-6173bb1639be/kube-rbac-proxy/0.log" Oct 11 10:02:51 crc kubenswrapper[5016]: I1011 10:02:51.532413 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-79df5fb58c-pgsds_3ad1a6fa-ff96-40e9-ba42-6173bb1639be/manager/0.log" Oct 11 10:02:51 crc kubenswrapper[5016]: I1011 10:02:51.540730 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-l67lz_509e4b22-b583-43ca-9c36-bd2ce2b7e753/registry-server/0.log" Oct 11 10:02:51 crc kubenswrapper[5016]: I1011 10:02:51.725892 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-68b6c87b68-7nrt2_971b1fdb-ddf2-4662-b77f-e3b55ac12de7/kube-rbac-proxy/0.log" Oct 11 10:02:51 crc kubenswrapper[5016]: I1011 10:02:51.833086 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-68b6c87b68-7nrt2_971b1fdb-ddf2-4662-b77f-e3b55ac12de7/manager/0.log" Oct 11 10:02:51 crc kubenswrapper[5016]: I1011 10:02:51.989041 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-pwb65_465a2dcd-76d5-4af4-a791-e98a5dfbd2d4/operator/0.log" Oct 11 10:02:52 crc kubenswrapper[5016]: I1011 10:02:52.125925 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-db6d7f97b-4h6v7_314a9915-c5b2-45c6-ad73-17bcf42d80cc/manager/0.log" Oct 11 10:02:52 crc kubenswrapper[5016]: I1011 10:02:52.152950 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-db6d7f97b-4h6v7_314a9915-c5b2-45c6-ad73-17bcf42d80cc/kube-rbac-proxy/0.log" Oct 11 10:02:52 crc kubenswrapper[5016]: 
I1011 10:02:52.336259 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-67cfc6749b-rdhd7_2f220420-4c7f-4f2b-a295-940d7e2f22da/kube-rbac-proxy/0.log" Oct 11 10:02:52 crc kubenswrapper[5016]: I1011 10:02:52.535230 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-67cfc6749b-rdhd7_2f220420-4c7f-4f2b-a295-940d7e2f22da/manager/0.log" Oct 11 10:02:52 crc kubenswrapper[5016]: I1011 10:02:52.620135 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5b95c8954b-jgt6v_1415832f-40ef-48e7-ab66-b556c5110bd0/manager/0.log" Oct 11 10:02:52 crc kubenswrapper[5016]: I1011 10:02:52.659411 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-556f69b4d6-whv65_fe5ced7a-fe54-4272-8b7a-4d576fc78f63/kube-rbac-proxy/0.log" Oct 11 10:02:52 crc kubenswrapper[5016]: I1011 10:02:52.692801 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-556f69b4d6-whv65_fe5ced7a-fe54-4272-8b7a-4d576fc78f63/manager/0.log" Oct 11 10:02:52 crc kubenswrapper[5016]: I1011 10:02:52.754865 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7f554bff7b-5tl82_c2a90822-b5db-4dcd-9bb0-6e6fdc371a49/kube-rbac-proxy/0.log" Oct 11 10:02:52 crc kubenswrapper[5016]: I1011 10:02:52.850770 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7f554bff7b-5tl82_c2a90822-b5db-4dcd-9bb0-6e6fdc371a49/manager/0.log" Oct 11 10:03:03 crc kubenswrapper[5016]: I1011 10:03:03.146752 5016 scope.go:117] "RemoveContainer" containerID="22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0" Oct 11 10:03:03 crc kubenswrapper[5016]: E1011 10:03:03.147737 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 10:03:11 crc kubenswrapper[5016]: I1011 10:03:11.444099 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-msb9j_6e14139c-7a42-440e-b494-f2a6283a1acd/control-plane-machine-set-operator/0.log" Oct 11 10:03:11 crc kubenswrapper[5016]: I1011 10:03:11.632715 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-m5nhn_63c09395-8cfa-4337-8323-0a90e333579a/kube-rbac-proxy/0.log" Oct 11 10:03:11 crc kubenswrapper[5016]: I1011 10:03:11.680418 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-m5nhn_63c09395-8cfa-4337-8323-0a90e333579a/machine-api-operator/0.log" Oct 11 10:03:16 crc kubenswrapper[5016]: I1011 10:03:16.168402 5016 scope.go:117] "RemoveContainer" containerID="22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0" Oct 11 10:03:16 crc kubenswrapper[5016]: E1011 10:03:16.170186 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 10:03:26 crc kubenswrapper[5016]: I1011 10:03:26.016941 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-5l6fp_f28820b4-4922-4f2a-961b-11049383375a/cert-manager-controller/0.log" Oct 11 10:03:26 crc kubenswrapper[5016]: I1011 10:03:26.293884 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-vx8qq_56428aa7-9fed-488d-af1c-d7a634826bab/cert-manager-cainjector/0.log" Oct 11 10:03:26 crc kubenswrapper[5016]: I1011 10:03:26.308950 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-v5lh6_0fcf1ca9-5b2e-42a5-843a-d9afe721c9dc/cert-manager-webhook/0.log" Oct 11 10:03:30 crc kubenswrapper[5016]: E1011 10:03:30.134325 5016 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Oct 11 10:03:31 crc kubenswrapper[5016]: I1011 10:03:31.134466 5016 scope.go:117] "RemoveContainer" containerID="22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0" Oct 11 10:03:31 crc kubenswrapper[5016]: E1011 10:03:31.134862 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 10:03:40 crc kubenswrapper[5016]: I1011 10:03:40.056431 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6b874cbd85-gtjdp_d5699eaf-722d-46fd-8bca-5f845e0b5b3c/nmstate-console-plugin/0.log" Oct 11 10:03:40 crc kubenswrapper[5016]: I1011 10:03:40.263540 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-kcw7v_33757068-b0e8-4558-8190-b98cd699b641/nmstate-handler/0.log" Oct 11 10:03:40 crc kubenswrapper[5016]: I1011 10:03:40.337504 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-qbbbk_89d255e9-3579-4fc6-a528-c87e21b5f9a4/kube-rbac-proxy/0.log" Oct 11 10:03:40 crc kubenswrapper[5016]: I1011 10:03:40.356034 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-qbbbk_89d255e9-3579-4fc6-a528-c87e21b5f9a4/nmstate-metrics/0.log" Oct 11 10:03:40 crc kubenswrapper[5016]: I1011 10:03:40.506008 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-858ddd8f98-vfsmr_81c50358-e340-44b6-b6d4-8edb8ce9b712/nmstate-operator/0.log" Oct 11 10:03:40 crc kubenswrapper[5016]: I1011 10:03:40.574185 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6cdbc54649-56bkg_3c5ef3cc-6a0f-4187-8c24-db02d2144312/nmstate-webhook/0.log" Oct 11 10:03:44 crc kubenswrapper[5016]: I1011 10:03:44.133598 5016 
scope.go:117] "RemoveContainer" containerID="22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0" Oct 11 10:03:44 crc kubenswrapper[5016]: E1011 10:03:44.134741 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 10:03:49 crc kubenswrapper[5016]: I1011 10:03:49.624038 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2b47n"] Oct 11 10:03:49 crc kubenswrapper[5016]: E1011 10:03:49.627065 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3257007-998f-4f73-b187-d66186d30926" containerName="extract-content" Oct 11 10:03:49 crc kubenswrapper[5016]: I1011 10:03:49.627099 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3257007-998f-4f73-b187-d66186d30926" containerName="extract-content" Oct 11 10:03:49 crc kubenswrapper[5016]: E1011 10:03:49.627130 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3257007-998f-4f73-b187-d66186d30926" containerName="registry-server" Oct 11 10:03:49 crc kubenswrapper[5016]: I1011 10:03:49.627138 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3257007-998f-4f73-b187-d66186d30926" containerName="registry-server" Oct 11 10:03:49 crc kubenswrapper[5016]: E1011 10:03:49.627171 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3257007-998f-4f73-b187-d66186d30926" containerName="extract-utilities" Oct 11 10:03:49 crc kubenswrapper[5016]: I1011 10:03:49.627180 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3257007-998f-4f73-b187-d66186d30926" containerName="extract-utilities" Oct 11 10:03:49 crc kubenswrapper[5016]: I1011 10:03:49.627478 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3257007-998f-4f73-b187-d66186d30926" containerName="registry-server" Oct 11 10:03:49 crc kubenswrapper[5016]: I1011 10:03:49.629558 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2b47n" Oct 11 10:03:49 crc kubenswrapper[5016]: I1011 10:03:49.637056 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2b47n"] Oct 11 10:03:49 crc kubenswrapper[5016]: I1011 10:03:49.716322 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9916e7-fa10-4491-a199-f47bc8c2731f-utilities\") pod \"community-operators-2b47n\" (UID: \"dc9916e7-fa10-4491-a199-f47bc8c2731f\") " pod="openshift-marketplace/community-operators-2b47n" Oct 11 10:03:49 crc kubenswrapper[5016]: I1011 10:03:49.717564 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9916e7-fa10-4491-a199-f47bc8c2731f-catalog-content\") pod \"community-operators-2b47n\" (UID: \"dc9916e7-fa10-4491-a199-f47bc8c2731f\") " pod="openshift-marketplace/community-operators-2b47n" Oct 11 10:03:49 crc kubenswrapper[5016]: I1011 10:03:49.717944 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmcz8\" (UniqueName: \"kubernetes.io/projected/dc9916e7-fa10-4491-a199-f47bc8c2731f-kube-api-access-tmcz8\") pod \"community-operators-2b47n\" (UID: \"dc9916e7-fa10-4491-a199-f47bc8c2731f\") " pod="openshift-marketplace/community-operators-2b47n" Oct 11 10:03:49 crc kubenswrapper[5016]: I1011 10:03:49.820607 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9916e7-fa10-4491-a199-f47bc8c2731f-utilities\") pod \"community-operators-2b47n\" (UID: \"dc9916e7-fa10-4491-a199-f47bc8c2731f\") " pod="openshift-marketplace/community-operators-2b47n" Oct 11 10:03:49 crc kubenswrapper[5016]: I1011 10:03:49.820897 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9916e7-fa10-4491-a199-f47bc8c2731f-catalog-content\") pod \"community-operators-2b47n\" (UID: \"dc9916e7-fa10-4491-a199-f47bc8c2731f\") " pod="openshift-marketplace/community-operators-2b47n" Oct 11 10:03:49 crc kubenswrapper[5016]: I1011 10:03:49.821507 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9916e7-fa10-4491-a199-f47bc8c2731f-utilities\") pod \"community-operators-2b47n\" (UID: \"dc9916e7-fa10-4491-a199-f47bc8c2731f\") " pod="openshift-marketplace/community-operators-2b47n" Oct 11 10:03:49 crc kubenswrapper[5016]: I1011 10:03:49.821600 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9916e7-fa10-4491-a199-f47bc8c2731f-catalog-content\") pod \"community-operators-2b47n\" (UID: \"dc9916e7-fa10-4491-a199-f47bc8c2731f\") " pod="openshift-marketplace/community-operators-2b47n" Oct 11 10:03:49 crc kubenswrapper[5016]: I1011 10:03:49.821887 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmcz8\" (UniqueName: \"kubernetes.io/projected/dc9916e7-fa10-4491-a199-f47bc8c2731f-kube-api-access-tmcz8\") pod \"community-operators-2b47n\" (UID: \"dc9916e7-fa10-4491-a199-f47bc8c2731f\") " pod="openshift-marketplace/community-operators-2b47n" Oct 11 10:03:49 crc kubenswrapper[5016]: I1011 10:03:49.842834 5016 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tmcz8\" (UniqueName: \"kubernetes.io/projected/dc9916e7-fa10-4491-a199-f47bc8c2731f-kube-api-access-tmcz8\") pod \"community-operators-2b47n\" (UID: \"dc9916e7-fa10-4491-a199-f47bc8c2731f\") " pod="openshift-marketplace/community-operators-2b47n" Oct 11 10:03:49 crc kubenswrapper[5016]: I1011 10:03:49.958842 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2b47n" Oct 11 10:03:50 crc kubenswrapper[5016]: I1011 10:03:50.631087 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2b47n"] Oct 11 10:03:51 crc kubenswrapper[5016]: I1011 10:03:51.248755 5016 generic.go:334] "Generic (PLEG): container finished" podID="dc9916e7-fa10-4491-a199-f47bc8c2731f" containerID="2413a258a7350487c0993d0427e452a720c16bc455c5877bf864ea2105f4a82f" exitCode=0 Oct 11 10:03:51 crc kubenswrapper[5016]: I1011 10:03:51.248917 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2b47n" event={"ID":"dc9916e7-fa10-4491-a199-f47bc8c2731f","Type":"ContainerDied","Data":"2413a258a7350487c0993d0427e452a720c16bc455c5877bf864ea2105f4a82f"} Oct 11 10:03:51 crc kubenswrapper[5016]: I1011 10:03:51.249324 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2b47n" event={"ID":"dc9916e7-fa10-4491-a199-f47bc8c2731f","Type":"ContainerStarted","Data":"655936beef8cd0e7c4a81fdb64e79c617873771770e5c342abaf6688da9584e8"} Oct 11 10:03:52 crc kubenswrapper[5016]: I1011 10:03:52.266322 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2b47n" event={"ID":"dc9916e7-fa10-4491-a199-f47bc8c2731f","Type":"ContainerStarted","Data":"1e64fa2e501f0cdf9033319ee411354d7eb419d1d4567607041e496d7bc5333f"} Oct 11 10:03:54 crc kubenswrapper[5016]: I1011 10:03:54.293180 5016 generic.go:334] "Generic (PLEG): container finished" podID="dc9916e7-fa10-4491-a199-f47bc8c2731f" containerID="1e64fa2e501f0cdf9033319ee411354d7eb419d1d4567607041e496d7bc5333f" exitCode=0 Oct 11 10:03:54 crc kubenswrapper[5016]: I1011 10:03:54.293283 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2b47n" event={"ID":"dc9916e7-fa10-4491-a199-f47bc8c2731f","Type":"ContainerDied","Data":"1e64fa2e501f0cdf9033319ee411354d7eb419d1d4567607041e496d7bc5333f"} Oct 11 10:03:55 crc kubenswrapper[5016]: I1011 10:03:55.308608 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2b47n" event={"ID":"dc9916e7-fa10-4491-a199-f47bc8c2731f","Type":"ContainerStarted","Data":"a3c558c70e3bd7b1805155fb295acf3778873d9abc56c4e4a072c457f0f17db6"} Oct 11 10:03:55 crc kubenswrapper[5016]: I1011 10:03:55.342334 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2b47n" podStartSLOduration=2.694941559 podStartE2EDuration="6.34230872s" podCreationTimestamp="2025-10-11 10:03:49 +0000 UTC" firstStartedPulling="2025-10-11 10:03:51.251975274 +0000 UTC m=+8619.152431220" lastFinishedPulling="2025-10-11 10:03:54.899342405 +0000 UTC m=+8622.799798381" observedRunningTime="2025-10-11 10:03:55.330811515 +0000 UTC m=+8623.231267491" watchObservedRunningTime="2025-10-11 10:03:55.34230872 +0000 UTC m=+8623.242764676" Oct 11 10:03:56 crc kubenswrapper[5016]: I1011 10:03:56.961153 5016 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-68d546b9d8-phqqb_cfb3d201-9e6b-47ac-ad15-6101ccb2c6dc/kube-rbac-proxy/0.log" Oct 11 10:03:57 crc kubenswrapper[5016]: I1011 10:03:57.133538 5016 scope.go:117] "RemoveContainer" containerID="22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0" Oct 11 10:03:57 crc kubenswrapper[5016]: E1011 10:03:57.133923 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 10:03:57 crc kubenswrapper[5016]: I1011 10:03:57.144037 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-phqqb_cfb3d201-9e6b-47ac-ad15-6101ccb2c6dc/controller/0.log" Oct 11 10:03:57 crc kubenswrapper[5016]: I1011 10:03:57.235857 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jpstd_1746bd38-b574-4552-b9b2-e5d80ba72acf/cp-frr-files/0.log" Oct 11 10:03:57 crc kubenswrapper[5016]: I1011 10:03:57.497743 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jpstd_1746bd38-b574-4552-b9b2-e5d80ba72acf/cp-frr-files/0.log" Oct 11 10:03:57 crc kubenswrapper[5016]: I1011 10:03:57.535107 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jpstd_1746bd38-b574-4552-b9b2-e5d80ba72acf/cp-metrics/0.log" Oct 11 10:03:57 crc kubenswrapper[5016]: I1011 10:03:57.535538 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jpstd_1746bd38-b574-4552-b9b2-e5d80ba72acf/cp-reloader/0.log" Oct 11 10:03:57 crc kubenswrapper[5016]: I1011 10:03:57.600190 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jpstd_1746bd38-b574-4552-b9b2-e5d80ba72acf/cp-reloader/0.log" Oct 11 10:03:57 crc kubenswrapper[5016]: I1011 10:03:57.760010 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jpstd_1746bd38-b574-4552-b9b2-e5d80ba72acf/cp-reloader/0.log" Oct 11 10:03:57 crc kubenswrapper[5016]: I1011 10:03:57.801048 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jpstd_1746bd38-b574-4552-b9b2-e5d80ba72acf/cp-frr-files/0.log" Oct 11 10:03:57 crc kubenswrapper[5016]: I1011 10:03:57.804192 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jpstd_1746bd38-b574-4552-b9b2-e5d80ba72acf/cp-metrics/0.log" Oct 11 10:03:57 crc kubenswrapper[5016]: I1011 10:03:57.822651 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jpstd_1746bd38-b574-4552-b9b2-e5d80ba72acf/cp-metrics/0.log" Oct 11 10:03:58 crc kubenswrapper[5016]: I1011 10:03:58.037346 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jpstd_1746bd38-b574-4552-b9b2-e5d80ba72acf/cp-reloader/0.log" Oct 11 10:03:58 crc kubenswrapper[5016]: I1011 10:03:58.044306 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jpstd_1746bd38-b574-4552-b9b2-e5d80ba72acf/cp-metrics/0.log" Oct 11 10:03:58 crc kubenswrapper[5016]: I1011 10:03:58.058110 5016 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-jpstd_1746bd38-b574-4552-b9b2-e5d80ba72acf/cp-frr-files/0.log" Oct 11 10:03:58 crc kubenswrapper[5016]: I1011 10:03:58.068682 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jpstd_1746bd38-b574-4552-b9b2-e5d80ba72acf/controller/0.log" Oct 11 10:03:58 crc kubenswrapper[5016]: I1011 10:03:58.224995 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jpstd_1746bd38-b574-4552-b9b2-e5d80ba72acf/frr-metrics/0.log" Oct 11 10:03:58 crc kubenswrapper[5016]: I1011 10:03:58.311564 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jpstd_1746bd38-b574-4552-b9b2-e5d80ba72acf/kube-rbac-proxy-frr/0.log" Oct 11 10:03:58 crc kubenswrapper[5016]: I1011 10:03:58.330733 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jpstd_1746bd38-b574-4552-b9b2-e5d80ba72acf/kube-rbac-proxy/0.log" Oct 11 10:03:58 crc kubenswrapper[5016]: I1011 10:03:58.499362 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jpstd_1746bd38-b574-4552-b9b2-e5d80ba72acf/reloader/0.log" Oct 11 10:03:58 crc kubenswrapper[5016]: I1011 10:03:58.595372 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-64bf5d555-sftjd_8be39f07-b456-4122-b5dd-3f02d8123d0c/frr-k8s-webhook-server/0.log" Oct 11 10:03:58 crc kubenswrapper[5016]: I1011 10:03:58.934187 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-68db858d44-mvnpr_1d7ca76f-2ef2-4f7a-a7ba-be970c5145eb/manager/0.log" Oct 11 10:03:59 crc kubenswrapper[5016]: I1011 10:03:59.001188 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-56f64c4bc6-xrwjc_09c8faf5-28e5-4d11-ab20-ccf047a5433b/webhook-server/0.log" Oct 11 10:03:59 crc kubenswrapper[5016]: I1011 10:03:59.220575 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-zjs4n_bb9e00d1-3bae-477f-b65b-9822fd6a5999/kube-rbac-proxy/0.log" Oct 11 10:03:59 crc kubenswrapper[5016]: I1011 10:03:59.776727 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-zjs4n_bb9e00d1-3bae-477f-b65b-9822fd6a5999/speaker/0.log" Oct 11 10:03:59 crc kubenswrapper[5016]: I1011 10:03:59.959106 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2b47n" Oct 11 10:03:59 crc kubenswrapper[5016]: I1011 10:03:59.960564 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2b47n" Oct 11 10:04:00 crc kubenswrapper[5016]: I1011 10:04:00.019257 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2b47n" Oct 11 10:04:00 crc kubenswrapper[5016]: I1011 10:04:00.407331 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2b47n" Oct 11 10:04:00 crc kubenswrapper[5016]: I1011 10:04:00.464771 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2b47n"] Oct 11 10:04:00 crc kubenswrapper[5016]: I1011 10:04:00.824567 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jpstd_1746bd38-b574-4552-b9b2-e5d80ba72acf/frr/0.log" Oct 11 10:04:02 crc kubenswrapper[5016]: I1011 10:04:02.378757 5016 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2b47n" podUID="dc9916e7-fa10-4491-a199-f47bc8c2731f" containerName="registry-server" containerID="cri-o://a3c558c70e3bd7b1805155fb295acf3778873d9abc56c4e4a072c457f0f17db6" gracePeriod=2 Oct 11 10:04:02 crc kubenswrapper[5016]: I1011 10:04:02.930228 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2b47n" Oct 11 10:04:03 crc kubenswrapper[5016]: I1011 10:04:03.059711 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9916e7-fa10-4491-a199-f47bc8c2731f-utilities\") pod \"dc9916e7-fa10-4491-a199-f47bc8c2731f\" (UID: \"dc9916e7-fa10-4491-a199-f47bc8c2731f\") " Oct 11 10:04:03 crc kubenswrapper[5016]: I1011 10:04:03.059786 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9916e7-fa10-4491-a199-f47bc8c2731f-catalog-content\") pod \"dc9916e7-fa10-4491-a199-f47bc8c2731f\" (UID: \"dc9916e7-fa10-4491-a199-f47bc8c2731f\") " Oct 11 10:04:03 crc kubenswrapper[5016]: I1011 10:04:03.059997 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmcz8\" (UniqueName: \"kubernetes.io/projected/dc9916e7-fa10-4491-a199-f47bc8c2731f-kube-api-access-tmcz8\") pod \"dc9916e7-fa10-4491-a199-f47bc8c2731f\" (UID: \"dc9916e7-fa10-4491-a199-f47bc8c2731f\") " Oct 11 10:04:03 crc kubenswrapper[5016]: I1011 10:04:03.061890 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc9916e7-fa10-4491-a199-f47bc8c2731f-utilities" (OuterVolumeSpecName: "utilities") pod "dc9916e7-fa10-4491-a199-f47bc8c2731f" (UID: "dc9916e7-fa10-4491-a199-f47bc8c2731f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:04:03 crc kubenswrapper[5016]: I1011 10:04:03.092919 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc9916e7-fa10-4491-a199-f47bc8c2731f-kube-api-access-tmcz8" (OuterVolumeSpecName: "kube-api-access-tmcz8") pod "dc9916e7-fa10-4491-a199-f47bc8c2731f" (UID: "dc9916e7-fa10-4491-a199-f47bc8c2731f"). InnerVolumeSpecName "kube-api-access-tmcz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:04:03 crc kubenswrapper[5016]: I1011 10:04:03.170536 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc9916e7-fa10-4491-a199-f47bc8c2731f-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 10:04:03 crc kubenswrapper[5016]: I1011 10:04:03.171254 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmcz8\" (UniqueName: \"kubernetes.io/projected/dc9916e7-fa10-4491-a199-f47bc8c2731f-kube-api-access-tmcz8\") on node \"crc\" DevicePath \"\"" Oct 11 10:04:03 crc kubenswrapper[5016]: I1011 10:04:03.394352 5016 generic.go:334] "Generic (PLEG): container finished" podID="dc9916e7-fa10-4491-a199-f47bc8c2731f" containerID="a3c558c70e3bd7b1805155fb295acf3778873d9abc56c4e4a072c457f0f17db6" exitCode=0 Oct 11 10:04:03 crc kubenswrapper[5016]: I1011 10:04:03.394516 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2b47n" Oct 11 10:04:03 crc kubenswrapper[5016]: I1011 10:04:03.395436 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2b47n" event={"ID":"dc9916e7-fa10-4491-a199-f47bc8c2731f","Type":"ContainerDied","Data":"a3c558c70e3bd7b1805155fb295acf3778873d9abc56c4e4a072c457f0f17db6"} Oct 11 10:04:03 crc kubenswrapper[5016]: I1011 10:04:03.395482 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2b47n" event={"ID":"dc9916e7-fa10-4491-a199-f47bc8c2731f","Type":"ContainerDied","Data":"655936beef8cd0e7c4a81fdb64e79c617873771770e5c342abaf6688da9584e8"} Oct 11 10:04:03 crc kubenswrapper[5016]: I1011 10:04:03.395514 5016 scope.go:117] "RemoveContainer" containerID="a3c558c70e3bd7b1805155fb295acf3778873d9abc56c4e4a072c457f0f17db6" Oct 11 10:04:03 crc kubenswrapper[5016]: I1011 10:04:03.441173 5016 scope.go:117] "RemoveContainer" containerID="1e64fa2e501f0cdf9033319ee411354d7eb419d1d4567607041e496d7bc5333f" Oct 11 10:04:03 crc kubenswrapper[5016]: I1011 10:04:03.469047 5016 scope.go:117] "RemoveContainer" containerID="2413a258a7350487c0993d0427e452a720c16bc455c5877bf864ea2105f4a82f" Oct 11 10:04:03 crc kubenswrapper[5016]: I1011 10:04:03.508060 5016 scope.go:117] "RemoveContainer" containerID="a3c558c70e3bd7b1805155fb295acf3778873d9abc56c4e4a072c457f0f17db6" Oct 11 10:04:03 crc kubenswrapper[5016]: E1011 10:04:03.508864 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3c558c70e3bd7b1805155fb295acf3778873d9abc56c4e4a072c457f0f17db6\": container with ID starting with a3c558c70e3bd7b1805155fb295acf3778873d9abc56c4e4a072c457f0f17db6 not found: ID does not exist" containerID="a3c558c70e3bd7b1805155fb295acf3778873d9abc56c4e4a072c457f0f17db6" Oct 11 10:04:03 crc kubenswrapper[5016]: I1011 10:04:03.508911 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3c558c70e3bd7b1805155fb295acf3778873d9abc56c4e4a072c457f0f17db6"} err="failed to get container status \"a3c558c70e3bd7b1805155fb295acf3778873d9abc56c4e4a072c457f0f17db6\": rpc error: code = NotFound desc = could not find container \"a3c558c70e3bd7b1805155fb295acf3778873d9abc56c4e4a072c457f0f17db6\": container with ID starting with a3c558c70e3bd7b1805155fb295acf3778873d9abc56c4e4a072c457f0f17db6 not found: ID does not exist" Oct 11 10:04:03 crc kubenswrapper[5016]: I1011 10:04:03.508944 5016 scope.go:117] "RemoveContainer" containerID="1e64fa2e501f0cdf9033319ee411354d7eb419d1d4567607041e496d7bc5333f" Oct 11 10:04:03 crc kubenswrapper[5016]: E1011 10:04:03.509553 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e64fa2e501f0cdf9033319ee411354d7eb419d1d4567607041e496d7bc5333f\": container with ID starting with 1e64fa2e501f0cdf9033319ee411354d7eb419d1d4567607041e496d7bc5333f not found: ID does not exist" containerID="1e64fa2e501f0cdf9033319ee411354d7eb419d1d4567607041e496d7bc5333f" Oct 11 10:04:03 crc kubenswrapper[5016]: I1011 10:04:03.509622 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e64fa2e501f0cdf9033319ee411354d7eb419d1d4567607041e496d7bc5333f"} err="failed to get container status \"1e64fa2e501f0cdf9033319ee411354d7eb419d1d4567607041e496d7bc5333f\": rpc error: code = NotFound desc = could not find container 
\"1e64fa2e501f0cdf9033319ee411354d7eb419d1d4567607041e496d7bc5333f\": container with ID starting with 1e64fa2e501f0cdf9033319ee411354d7eb419d1d4567607041e496d7bc5333f not found: ID does not exist" Oct 11 10:04:03 crc kubenswrapper[5016]: I1011 10:04:03.509734 5016 scope.go:117] "RemoveContainer" containerID="2413a258a7350487c0993d0427e452a720c16bc455c5877bf864ea2105f4a82f" Oct 11 10:04:03 crc kubenswrapper[5016]: E1011 10:04:03.510331 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2413a258a7350487c0993d0427e452a720c16bc455c5877bf864ea2105f4a82f\": container with ID starting with 2413a258a7350487c0993d0427e452a720c16bc455c5877bf864ea2105f4a82f not found: ID does not exist" containerID="2413a258a7350487c0993d0427e452a720c16bc455c5877bf864ea2105f4a82f" Oct 11 10:04:03 crc kubenswrapper[5016]: I1011 10:04:03.510368 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2413a258a7350487c0993d0427e452a720c16bc455c5877bf864ea2105f4a82f"} err="failed to get container status \"2413a258a7350487c0993d0427e452a720c16bc455c5877bf864ea2105f4a82f\": rpc error: code = NotFound desc = could not find container \"2413a258a7350487c0993d0427e452a720c16bc455c5877bf864ea2105f4a82f\": container with ID starting with 2413a258a7350487c0993d0427e452a720c16bc455c5877bf864ea2105f4a82f not found: ID does not exist" Oct 11 10:04:03 crc kubenswrapper[5016]: I1011 10:04:03.565079 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc9916e7-fa10-4491-a199-f47bc8c2731f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc9916e7-fa10-4491-a199-f47bc8c2731f" (UID: "dc9916e7-fa10-4491-a199-f47bc8c2731f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:04:03 crc kubenswrapper[5016]: I1011 10:04:03.581598 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc9916e7-fa10-4491-a199-f47bc8c2731f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 10:04:03 crc kubenswrapper[5016]: I1011 10:04:03.734197 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2b47n"] Oct 11 10:04:03 crc kubenswrapper[5016]: I1011 10:04:03.745449 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2b47n"] Oct 11 10:04:05 crc kubenswrapper[5016]: I1011 10:04:05.145200 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc9916e7-fa10-4491-a199-f47bc8c2731f" path="/var/lib/kubelet/pods/dc9916e7-fa10-4491-a199-f47bc8c2731f/volumes" Oct 11 10:04:09 crc kubenswrapper[5016]: I1011 10:04:09.133875 5016 scope.go:117] "RemoveContainer" containerID="22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0" Oct 11 10:04:09 crc kubenswrapper[5016]: E1011 10:04:09.134610 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 10:04:15 crc kubenswrapper[5016]: I1011 10:04:15.651123 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6_ef59f675-6d27-45e7-b50c-5ff68f9d41d2/util/0.log" Oct 11 10:04:15 crc kubenswrapper[5016]: I1011 10:04:15.935305 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6_ef59f675-6d27-45e7-b50c-5ff68f9d41d2/pull/0.log" Oct 11 10:04:15 crc kubenswrapper[5016]: I1011 10:04:15.957266 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6_ef59f675-6d27-45e7-b50c-5ff68f9d41d2/util/0.log" Oct 11 10:04:16 crc kubenswrapper[5016]: I1011 10:04:16.069627 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6_ef59f675-6d27-45e7-b50c-5ff68f9d41d2/pull/0.log" Oct 11 10:04:16 crc kubenswrapper[5016]: I1011 10:04:16.207608 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6_ef59f675-6d27-45e7-b50c-5ff68f9d41d2/util/0.log" Oct 11 10:04:16 crc kubenswrapper[5016]: I1011 10:04:16.246842 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6_ef59f675-6d27-45e7-b50c-5ff68f9d41d2/extract/0.log" Oct 11 10:04:16 crc kubenswrapper[5016]: I1011 10:04:16.292833 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d29btq6_ef59f675-6d27-45e7-b50c-5ff68f9d41d2/pull/0.log" Oct 11 10:04:16 crc kubenswrapper[5016]: I1011 10:04:16.482906 5016 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-b6ccc_bbca8383-9f95-40bc-be54-9954ad04c402/extract-utilities/0.log" Oct 11 10:04:16 crc kubenswrapper[5016]: I1011 10:04:16.723205 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b6ccc_bbca8383-9f95-40bc-be54-9954ad04c402/extract-content/0.log" Oct 11 10:04:16 crc kubenswrapper[5016]: I1011 10:04:16.785700 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b6ccc_bbca8383-9f95-40bc-be54-9954ad04c402/extract-content/0.log" Oct 11 10:04:16 crc kubenswrapper[5016]: I1011 10:04:16.795616 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b6ccc_bbca8383-9f95-40bc-be54-9954ad04c402/extract-utilities/0.log" Oct 11 10:04:16 crc kubenswrapper[5016]: I1011 10:04:16.971590 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b6ccc_bbca8383-9f95-40bc-be54-9954ad04c402/extract-utilities/0.log" Oct 11 10:04:17 crc kubenswrapper[5016]: I1011 10:04:17.030252 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b6ccc_bbca8383-9f95-40bc-be54-9954ad04c402/extract-content/0.log" Oct 11 10:04:17 crc kubenswrapper[5016]: I1011 10:04:17.515183 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rtz55_51f92ef1-71fb-40ad-a7d8-7a2c10420d14/extract-utilities/0.log" Oct 11 10:04:17 crc kubenswrapper[5016]: I1011 10:04:17.739777 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rtz55_51f92ef1-71fb-40ad-a7d8-7a2c10420d14/extract-utilities/0.log" Oct 11 10:04:17 crc kubenswrapper[5016]: I1011 10:04:17.798009 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rtz55_51f92ef1-71fb-40ad-a7d8-7a2c10420d14/extract-content/0.log" Oct 11 10:04:17 crc kubenswrapper[5016]: I1011 10:04:17.847782 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rtz55_51f92ef1-71fb-40ad-a7d8-7a2c10420d14/extract-content/0.log" Oct 11 10:04:18 crc kubenswrapper[5016]: I1011 10:04:18.077929 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rtz55_51f92ef1-71fb-40ad-a7d8-7a2c10420d14/extract-content/0.log" Oct 11 10:04:18 crc kubenswrapper[5016]: I1011 10:04:18.133517 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rtz55_51f92ef1-71fb-40ad-a7d8-7a2c10420d14/extract-utilities/0.log" Oct 11 10:04:18 crc kubenswrapper[5016]: I1011 10:04:18.159602 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b6ccc_bbca8383-9f95-40bc-be54-9954ad04c402/registry-server/0.log" Oct 11 10:04:18 crc kubenswrapper[5016]: I1011 10:04:18.441692 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn_c8611f6d-0bd0-44e7-a594-cabd1aa63bfd/util/0.log" Oct 11 10:04:18 crc kubenswrapper[5016]: I1011 10:04:18.581732 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn_c8611f6d-0bd0-44e7-a594-cabd1aa63bfd/util/0.log" Oct 11 10:04:18 crc kubenswrapper[5016]: I1011 10:04:18.766111 5016 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn_c8611f6d-0bd0-44e7-a594-cabd1aa63bfd/pull/0.log" Oct 11 10:04:18 crc kubenswrapper[5016]: I1011 10:04:18.778048 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn_c8611f6d-0bd0-44e7-a594-cabd1aa63bfd/pull/0.log" Oct 11 10:04:19 crc kubenswrapper[5016]: I1011 10:04:18.999604 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn_c8611f6d-0bd0-44e7-a594-cabd1aa63bfd/util/0.log" Oct 11 10:04:19 crc kubenswrapper[5016]: I1011 10:04:19.002842 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn_c8611f6d-0bd0-44e7-a594-cabd1aa63bfd/pull/0.log" Oct 11 10:04:19 crc kubenswrapper[5016]: I1011 10:04:19.049225 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cvf7mn_c8611f6d-0bd0-44e7-a594-cabd1aa63bfd/extract/0.log" Oct 11 10:04:19 crc kubenswrapper[5016]: I1011 10:04:19.229485 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-2d7px_511b8fec-a727-401a-bfe9-8201786f9bea/marketplace-operator/0.log" Oct 11 10:04:19 crc kubenswrapper[5016]: I1011 10:04:19.415016 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rtz55_51f92ef1-71fb-40ad-a7d8-7a2c10420d14/registry-server/0.log" Oct 11 10:04:19 crc kubenswrapper[5016]: I1011 10:04:19.490167 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9kxx6_97d69861-d719-49f6-92de-8cc49752f215/extract-utilities/0.log" Oct 11 10:04:19 crc kubenswrapper[5016]: I1011 10:04:19.743143 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9kxx6_97d69861-d719-49f6-92de-8cc49752f215/extract-content/0.log" Oct 11 10:04:19 crc kubenswrapper[5016]: I1011 10:04:19.754891 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9kxx6_97d69861-d719-49f6-92de-8cc49752f215/extract-utilities/0.log" Oct 11 10:04:19 crc kubenswrapper[5016]: I1011 10:04:19.764406 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9kxx6_97d69861-d719-49f6-92de-8cc49752f215/extract-content/0.log" Oct 11 10:04:19 crc kubenswrapper[5016]: I1011 10:04:19.961568 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9kxx6_97d69861-d719-49f6-92de-8cc49752f215/extract-utilities/0.log" Oct 11 10:04:19 crc kubenswrapper[5016]: I1011 10:04:19.961953 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9kxx6_97d69861-d719-49f6-92de-8cc49752f215/extract-content/0.log" Oct 11 10:04:20 crc kubenswrapper[5016]: I1011 10:04:20.032016 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-x7c7l_cd1f27a8-8756-4d42-9894-3e7fa9107b44/extract-utilities/0.log" Oct 11 10:04:20 crc kubenswrapper[5016]: I1011 10:04:20.133788 5016 scope.go:117] "RemoveContainer" containerID="22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0" 
Oct 11 10:04:20 crc kubenswrapper[5016]: E1011 10:04:20.134290 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 10:04:20 crc kubenswrapper[5016]: I1011 10:04:20.258429 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-9kxx6_97d69861-d719-49f6-92de-8cc49752f215/registry-server/0.log" Oct 11 10:04:20 crc kubenswrapper[5016]: I1011 10:04:20.268070 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-x7c7l_cd1f27a8-8756-4d42-9894-3e7fa9107b44/extract-content/0.log" Oct 11 10:04:20 crc kubenswrapper[5016]: I1011 10:04:20.286493 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-x7c7l_cd1f27a8-8756-4d42-9894-3e7fa9107b44/extract-content/0.log" Oct 11 10:04:20 crc kubenswrapper[5016]: I1011 10:04:20.297476 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-x7c7l_cd1f27a8-8756-4d42-9894-3e7fa9107b44/extract-utilities/0.log" Oct 11 10:04:20 crc kubenswrapper[5016]: I1011 10:04:20.530235 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-x7c7l_cd1f27a8-8756-4d42-9894-3e7fa9107b44/extract-utilities/0.log" Oct 11 10:04:20 crc kubenswrapper[5016]: I1011 10:04:20.533726 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-x7c7l_cd1f27a8-8756-4d42-9894-3e7fa9107b44/extract-content/0.log" Oct 11 10:04:21 crc kubenswrapper[5016]: I1011 10:04:21.455035 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-x7c7l_cd1f27a8-8756-4d42-9894-3e7fa9107b44/registry-server/0.log" Oct 11 10:04:34 crc kubenswrapper[5016]: I1011 10:04:34.133826 5016 scope.go:117] "RemoveContainer" containerID="22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0" Oct 11 10:04:34 crc kubenswrapper[5016]: E1011 10:04:34.134979 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 10:04:48 crc kubenswrapper[5016]: I1011 10:04:48.133417 5016 scope.go:117] "RemoveContainer" containerID="22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0" Oct 11 10:04:48 crc kubenswrapper[5016]: E1011 10:04:48.134084 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 10:04:58 crc kubenswrapper[5016]: E1011 10:04:58.133686 5016 
kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Oct 11 10:05:01 crc kubenswrapper[5016]: I1011 10:05:01.133385 5016 scope.go:117] "RemoveContainer" containerID="22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0" Oct 11 10:05:01 crc kubenswrapper[5016]: E1011 10:05:01.134232 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 10:05:12 crc kubenswrapper[5016]: I1011 10:05:12.135236 5016 scope.go:117] "RemoveContainer" containerID="22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0" Oct 11 10:05:12 crc kubenswrapper[5016]: E1011 10:05:12.136366 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 10:05:23 crc kubenswrapper[5016]: I1011 10:05:23.144571 5016 scope.go:117] "RemoveContainer" containerID="22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0" Oct 11 10:05:23 crc kubenswrapper[5016]: E1011 10:05:23.145781 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 10:05:35 crc kubenswrapper[5016]: I1011 10:05:35.134555 5016 scope.go:117] "RemoveContainer" containerID="22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0" Oct 11 10:05:35 crc kubenswrapper[5016]: E1011 10:05:35.135965 5016 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-49bvc_openshift-machine-config-operator(0633ed26-7b6a-4a20-92ba-569891d9faff)\"" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" Oct 11 10:05:47 crc kubenswrapper[5016]: I1011 10:05:47.134235 5016 scope.go:117] "RemoveContainer" containerID="22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0" Oct 11 10:05:47 crc kubenswrapper[5016]: I1011 10:05:47.723540 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"c2bf2afed5cb941fe305d49963bd6a2f38161ee64be4ab662d10eb4dfe4f825e"} Oct 11 10:06:24 crc kubenswrapper[5016]: E1011 10:06:24.133368 5016 kubelet_pods.go:538] "Hostname for 
pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Oct 11 10:06:50 crc kubenswrapper[5016]: I1011 10:06:50.561958 5016 generic.go:334] "Generic (PLEG): container finished" podID="b93db13a-8e27-49d3-897e-33e59fe40941" containerID="79ca5e29e806fa1e500c7456e4ddfc77bdc26124b4cb9b36bdc5a5c5b0084ab7" exitCode=0 Oct 11 10:06:50 crc kubenswrapper[5016]: I1011 10:06:50.562073 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cpk4d/must-gather-r7724" event={"ID":"b93db13a-8e27-49d3-897e-33e59fe40941","Type":"ContainerDied","Data":"79ca5e29e806fa1e500c7456e4ddfc77bdc26124b4cb9b36bdc5a5c5b0084ab7"} Oct 11 10:06:50 crc kubenswrapper[5016]: I1011 10:06:50.564354 5016 scope.go:117] "RemoveContainer" containerID="79ca5e29e806fa1e500c7456e4ddfc77bdc26124b4cb9b36bdc5a5c5b0084ab7" Oct 11 10:06:50 crc kubenswrapper[5016]: I1011 10:06:50.955419 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cpk4d_must-gather-r7724_b93db13a-8e27-49d3-897e-33e59fe40941/gather/0.log" Oct 11 10:06:59 crc kubenswrapper[5016]: I1011 10:06:59.639833 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cpk4d/must-gather-r7724"] Oct 11 10:06:59 crc kubenswrapper[5016]: I1011 10:06:59.640645 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-cpk4d/must-gather-r7724" podUID="b93db13a-8e27-49d3-897e-33e59fe40941" containerName="copy" containerID="cri-o://9f1b013594e5ee3ab1c42f5ca4636470d57d64ea93a868501f362c7e43524c1f" gracePeriod=2 Oct 11 10:06:59 crc kubenswrapper[5016]: I1011 10:06:59.651767 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cpk4d/must-gather-r7724"] Oct 11 10:07:00 crc kubenswrapper[5016]: I1011 10:07:00.138353 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cpk4d_must-gather-r7724_b93db13a-8e27-49d3-897e-33e59fe40941/copy/0.log" Oct 11 10:07:00 crc kubenswrapper[5016]: I1011 10:07:00.138968 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cpk4d/must-gather-r7724" Oct 11 10:07:00 crc kubenswrapper[5016]: I1011 10:07:00.311067 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf5w7\" (UniqueName: \"kubernetes.io/projected/b93db13a-8e27-49d3-897e-33e59fe40941-kube-api-access-tf5w7\") pod \"b93db13a-8e27-49d3-897e-33e59fe40941\" (UID: \"b93db13a-8e27-49d3-897e-33e59fe40941\") " Oct 11 10:07:00 crc kubenswrapper[5016]: I1011 10:07:00.311211 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b93db13a-8e27-49d3-897e-33e59fe40941-must-gather-output\") pod \"b93db13a-8e27-49d3-897e-33e59fe40941\" (UID: \"b93db13a-8e27-49d3-897e-33e59fe40941\") " Oct 11 10:07:00 crc kubenswrapper[5016]: I1011 10:07:00.320229 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b93db13a-8e27-49d3-897e-33e59fe40941-kube-api-access-tf5w7" (OuterVolumeSpecName: "kube-api-access-tf5w7") pod "b93db13a-8e27-49d3-897e-33e59fe40941" (UID: "b93db13a-8e27-49d3-897e-33e59fe40941"). InnerVolumeSpecName "kube-api-access-tf5w7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:07:00 crc kubenswrapper[5016]: I1011 10:07:00.413565 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf5w7\" (UniqueName: \"kubernetes.io/projected/b93db13a-8e27-49d3-897e-33e59fe40941-kube-api-access-tf5w7\") on node \"crc\" DevicePath \"\"" Oct 11 10:07:00 crc kubenswrapper[5016]: I1011 10:07:00.522131 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b93db13a-8e27-49d3-897e-33e59fe40941-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "b93db13a-8e27-49d3-897e-33e59fe40941" (UID: "b93db13a-8e27-49d3-897e-33e59fe40941"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:07:00 crc kubenswrapper[5016]: I1011 10:07:00.617600 5016 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b93db13a-8e27-49d3-897e-33e59fe40941-must-gather-output\") on node \"crc\" DevicePath \"\"" Oct 11 10:07:00 crc kubenswrapper[5016]: I1011 10:07:00.695840 5016 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cpk4d_must-gather-r7724_b93db13a-8e27-49d3-897e-33e59fe40941/copy/0.log" Oct 11 10:07:00 crc kubenswrapper[5016]: I1011 10:07:00.696514 5016 generic.go:334] "Generic (PLEG): container finished" podID="b93db13a-8e27-49d3-897e-33e59fe40941" containerID="9f1b013594e5ee3ab1c42f5ca4636470d57d64ea93a868501f362c7e43524c1f" exitCode=143 Oct 11 10:07:00 crc kubenswrapper[5016]: I1011 10:07:00.696592 5016 scope.go:117] "RemoveContainer" containerID="9f1b013594e5ee3ab1c42f5ca4636470d57d64ea93a868501f362c7e43524c1f" Oct 11 10:07:00 crc kubenswrapper[5016]: I1011 10:07:00.696760 5016 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cpk4d/must-gather-r7724" Oct 11 10:07:00 crc kubenswrapper[5016]: I1011 10:07:00.730927 5016 scope.go:117] "RemoveContainer" containerID="79ca5e29e806fa1e500c7456e4ddfc77bdc26124b4cb9b36bdc5a5c5b0084ab7" Oct 11 10:07:00 crc kubenswrapper[5016]: I1011 10:07:00.841164 5016 scope.go:117] "RemoveContainer" containerID="9f1b013594e5ee3ab1c42f5ca4636470d57d64ea93a868501f362c7e43524c1f" Oct 11 10:07:00 crc kubenswrapper[5016]: E1011 10:07:00.841794 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f1b013594e5ee3ab1c42f5ca4636470d57d64ea93a868501f362c7e43524c1f\": container with ID starting with 9f1b013594e5ee3ab1c42f5ca4636470d57d64ea93a868501f362c7e43524c1f not found: ID does not exist" containerID="9f1b013594e5ee3ab1c42f5ca4636470d57d64ea93a868501f362c7e43524c1f" Oct 11 10:07:00 crc kubenswrapper[5016]: I1011 10:07:00.841837 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f1b013594e5ee3ab1c42f5ca4636470d57d64ea93a868501f362c7e43524c1f"} err="failed to get container status \"9f1b013594e5ee3ab1c42f5ca4636470d57d64ea93a868501f362c7e43524c1f\": rpc error: code = NotFound desc = could not find container \"9f1b013594e5ee3ab1c42f5ca4636470d57d64ea93a868501f362c7e43524c1f\": container with ID starting with 9f1b013594e5ee3ab1c42f5ca4636470d57d64ea93a868501f362c7e43524c1f not found: ID does not exist" Oct 11 10:07:00 crc kubenswrapper[5016]: I1011 10:07:00.841865 5016 scope.go:117] "RemoveContainer" containerID="79ca5e29e806fa1e500c7456e4ddfc77bdc26124b4cb9b36bdc5a5c5b0084ab7" Oct 11 10:07:00 crc kubenswrapper[5016]: E1011 10:07:00.842344 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79ca5e29e806fa1e500c7456e4ddfc77bdc26124b4cb9b36bdc5a5c5b0084ab7\": container with ID starting with 79ca5e29e806fa1e500c7456e4ddfc77bdc26124b4cb9b36bdc5a5c5b0084ab7 not found: ID does not exist" containerID="79ca5e29e806fa1e500c7456e4ddfc77bdc26124b4cb9b36bdc5a5c5b0084ab7" Oct 11 10:07:00 crc kubenswrapper[5016]: I1011 10:07:00.842379 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79ca5e29e806fa1e500c7456e4ddfc77bdc26124b4cb9b36bdc5a5c5b0084ab7"} err="failed to get container status \"79ca5e29e806fa1e500c7456e4ddfc77bdc26124b4cb9b36bdc5a5c5b0084ab7\": rpc error: code = NotFound desc = could not find container \"79ca5e29e806fa1e500c7456e4ddfc77bdc26124b4cb9b36bdc5a5c5b0084ab7\": container with ID starting with 79ca5e29e806fa1e500c7456e4ddfc77bdc26124b4cb9b36bdc5a5c5b0084ab7 not found: ID does not exist" Oct 11 10:07:01 crc kubenswrapper[5016]: I1011 10:07:01.145363 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b93db13a-8e27-49d3-897e-33e59fe40941" path="/var/lib/kubelet/pods/b93db13a-8e27-49d3-897e-33e59fe40941/volumes" Oct 11 10:07:18 crc kubenswrapper[5016]: I1011 10:07:18.108514 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qxlxl"] Oct 11 10:07:18 crc kubenswrapper[5016]: E1011 10:07:18.109896 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc9916e7-fa10-4491-a199-f47bc8c2731f" containerName="extract-utilities" Oct 11 10:07:18 crc kubenswrapper[5016]: I1011 10:07:18.109922 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc9916e7-fa10-4491-a199-f47bc8c2731f" containerName="extract-utilities" Oct 
11 10:07:18 crc kubenswrapper[5016]: E1011 10:07:18.109944 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc9916e7-fa10-4491-a199-f47bc8c2731f" containerName="extract-content" Oct 11 10:07:18 crc kubenswrapper[5016]: I1011 10:07:18.109954 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc9916e7-fa10-4491-a199-f47bc8c2731f" containerName="extract-content" Oct 11 10:07:18 crc kubenswrapper[5016]: E1011 10:07:18.109990 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b93db13a-8e27-49d3-897e-33e59fe40941" containerName="copy" Oct 11 10:07:18 crc kubenswrapper[5016]: I1011 10:07:18.110004 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93db13a-8e27-49d3-897e-33e59fe40941" containerName="copy" Oct 11 10:07:18 crc kubenswrapper[5016]: E1011 10:07:18.110024 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b93db13a-8e27-49d3-897e-33e59fe40941" containerName="gather" Oct 11 10:07:18 crc kubenswrapper[5016]: I1011 10:07:18.110033 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93db13a-8e27-49d3-897e-33e59fe40941" containerName="gather" Oct 11 10:07:18 crc kubenswrapper[5016]: E1011 10:07:18.110058 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc9916e7-fa10-4491-a199-f47bc8c2731f" containerName="registry-server" Oct 11 10:07:18 crc kubenswrapper[5016]: I1011 10:07:18.110070 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc9916e7-fa10-4491-a199-f47bc8c2731f" containerName="registry-server" Oct 11 10:07:18 crc kubenswrapper[5016]: I1011 10:07:18.110411 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="b93db13a-8e27-49d3-897e-33e59fe40941" containerName="copy" Oct 11 10:07:18 crc kubenswrapper[5016]: I1011 10:07:18.110465 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc9916e7-fa10-4491-a199-f47bc8c2731f" containerName="registry-server" Oct 11 10:07:18 crc kubenswrapper[5016]: I1011 10:07:18.110493 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="b93db13a-8e27-49d3-897e-33e59fe40941" containerName="gather" Oct 11 10:07:18 crc kubenswrapper[5016]: I1011 10:07:18.112528 5016 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qxlxl" Oct 11 10:07:18 crc kubenswrapper[5016]: I1011 10:07:18.118798 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qxlxl"] Oct 11 10:07:18 crc kubenswrapper[5016]: I1011 10:07:18.225365 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de0649c6-ab04-4b5a-8eb1-64461482c8bb-catalog-content\") pod \"redhat-marketplace-qxlxl\" (UID: \"de0649c6-ab04-4b5a-8eb1-64461482c8bb\") " pod="openshift-marketplace/redhat-marketplace-qxlxl" Oct 11 10:07:18 crc kubenswrapper[5016]: I1011 10:07:18.225493 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w95zz\" (UniqueName: \"kubernetes.io/projected/de0649c6-ab04-4b5a-8eb1-64461482c8bb-kube-api-access-w95zz\") pod \"redhat-marketplace-qxlxl\" (UID: \"de0649c6-ab04-4b5a-8eb1-64461482c8bb\") " pod="openshift-marketplace/redhat-marketplace-qxlxl" Oct 11 10:07:18 crc kubenswrapper[5016]: I1011 10:07:18.225524 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de0649c6-ab04-4b5a-8eb1-64461482c8bb-utilities\") pod \"redhat-marketplace-qxlxl\" (UID: \"de0649c6-ab04-4b5a-8eb1-64461482c8bb\") " pod="openshift-marketplace/redhat-marketplace-qxlxl" Oct 11 10:07:18 crc kubenswrapper[5016]: I1011 10:07:18.327684 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de0649c6-ab04-4b5a-8eb1-64461482c8bb-catalog-content\") pod \"redhat-marketplace-qxlxl\" (UID: \"de0649c6-ab04-4b5a-8eb1-64461482c8bb\") " pod="openshift-marketplace/redhat-marketplace-qxlxl" Oct 11 10:07:18 crc kubenswrapper[5016]: I1011 10:07:18.327931 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w95zz\" (UniqueName: \"kubernetes.io/projected/de0649c6-ab04-4b5a-8eb1-64461482c8bb-kube-api-access-w95zz\") pod \"redhat-marketplace-qxlxl\" (UID: \"de0649c6-ab04-4b5a-8eb1-64461482c8bb\") " pod="openshift-marketplace/redhat-marketplace-qxlxl" Oct 11 10:07:18 crc kubenswrapper[5016]: I1011 10:07:18.327979 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de0649c6-ab04-4b5a-8eb1-64461482c8bb-utilities\") pod \"redhat-marketplace-qxlxl\" (UID: \"de0649c6-ab04-4b5a-8eb1-64461482c8bb\") " pod="openshift-marketplace/redhat-marketplace-qxlxl" Oct 11 10:07:18 crc kubenswrapper[5016]: I1011 10:07:18.328329 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de0649c6-ab04-4b5a-8eb1-64461482c8bb-utilities\") pod \"redhat-marketplace-qxlxl\" (UID: \"de0649c6-ab04-4b5a-8eb1-64461482c8bb\") " pod="openshift-marketplace/redhat-marketplace-qxlxl" Oct 11 10:07:18 crc kubenswrapper[5016]: I1011 10:07:18.328330 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de0649c6-ab04-4b5a-8eb1-64461482c8bb-catalog-content\") pod \"redhat-marketplace-qxlxl\" (UID: \"de0649c6-ab04-4b5a-8eb1-64461482c8bb\") " pod="openshift-marketplace/redhat-marketplace-qxlxl" Oct 11 10:07:18 crc kubenswrapper[5016]: I1011 10:07:18.351509 5016 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-w95zz\" (UniqueName: \"kubernetes.io/projected/de0649c6-ab04-4b5a-8eb1-64461482c8bb-kube-api-access-w95zz\") pod \"redhat-marketplace-qxlxl\" (UID: \"de0649c6-ab04-4b5a-8eb1-64461482c8bb\") " pod="openshift-marketplace/redhat-marketplace-qxlxl" Oct 11 10:07:18 crc kubenswrapper[5016]: I1011 10:07:18.447641 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qxlxl" Oct 11 10:07:18 crc kubenswrapper[5016]: I1011 10:07:18.740902 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qxlxl"] Oct 11 10:07:18 crc kubenswrapper[5016]: I1011 10:07:18.930327 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qxlxl" event={"ID":"de0649c6-ab04-4b5a-8eb1-64461482c8bb","Type":"ContainerStarted","Data":"5e4f0d32314db3d7c3a429239ecabfa83932199102830827bc1574256723f147"} Oct 11 10:07:19 crc kubenswrapper[5016]: I1011 10:07:19.942019 5016 generic.go:334] "Generic (PLEG): container finished" podID="de0649c6-ab04-4b5a-8eb1-64461482c8bb" containerID="03dc8c10278b966a7bbaeacef7a6d5769d64ef561232cdf1297ff3a911d90282" exitCode=0 Oct 11 10:07:19 crc kubenswrapper[5016]: I1011 10:07:19.942081 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qxlxl" event={"ID":"de0649c6-ab04-4b5a-8eb1-64461482c8bb","Type":"ContainerDied","Data":"03dc8c10278b966a7bbaeacef7a6d5769d64ef561232cdf1297ff3a911d90282"} Oct 11 10:07:19 crc kubenswrapper[5016]: I1011 10:07:19.944557 5016 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 11 10:07:20 crc kubenswrapper[5016]: I1011 10:07:20.962291 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qxlxl" event={"ID":"de0649c6-ab04-4b5a-8eb1-64461482c8bb","Type":"ContainerStarted","Data":"5b53826146e71d630e13627d7c5be9fa3f1e231d79a4be167a4ce24d923c0c38"} Oct 11 10:07:21 crc kubenswrapper[5016]: I1011 10:07:21.979392 5016 generic.go:334] "Generic (PLEG): container finished" podID="de0649c6-ab04-4b5a-8eb1-64461482c8bb" containerID="5b53826146e71d630e13627d7c5be9fa3f1e231d79a4be167a4ce24d923c0c38" exitCode=0 Oct 11 10:07:21 crc kubenswrapper[5016]: I1011 10:07:21.979483 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qxlxl" event={"ID":"de0649c6-ab04-4b5a-8eb1-64461482c8bb","Type":"ContainerDied","Data":"5b53826146e71d630e13627d7c5be9fa3f1e231d79a4be167a4ce24d923c0c38"} Oct 11 10:07:22 crc kubenswrapper[5016]: I1011 10:07:22.995638 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qxlxl" event={"ID":"de0649c6-ab04-4b5a-8eb1-64461482c8bb","Type":"ContainerStarted","Data":"a2a2b3c5bfb6bfb746c8a345d30b554aff6263dcf576c0b1f2a4b60fae97dfa5"} Oct 11 10:07:23 crc kubenswrapper[5016]: I1011 10:07:23.034417 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qxlxl" podStartSLOduration=2.594559833 podStartE2EDuration="5.034394745s" podCreationTimestamp="2025-10-11 10:07:18 +0000 UTC" firstStartedPulling="2025-10-11 10:07:19.944208738 +0000 UTC m=+8827.844664694" lastFinishedPulling="2025-10-11 10:07:22.38404361 +0000 UTC m=+8830.284499606" observedRunningTime="2025-10-11 10:07:23.021187404 +0000 UTC m=+8830.921643370" watchObservedRunningTime="2025-10-11 10:07:23.034394745 +0000 UTC 
m=+8830.934850691" Oct 11 10:07:28 crc kubenswrapper[5016]: I1011 10:07:28.448396 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qxlxl" Oct 11 10:07:28 crc kubenswrapper[5016]: I1011 10:07:28.450528 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qxlxl" Oct 11 10:07:28 crc kubenswrapper[5016]: I1011 10:07:28.507757 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qxlxl" Oct 11 10:07:29 crc kubenswrapper[5016]: I1011 10:07:29.131160 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qxlxl" Oct 11 10:07:29 crc kubenswrapper[5016]: I1011 10:07:29.198276 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qxlxl"] Oct 11 10:07:31 crc kubenswrapper[5016]: I1011 10:07:31.088566 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qxlxl" podUID="de0649c6-ab04-4b5a-8eb1-64461482c8bb" containerName="registry-server" containerID="cri-o://a2a2b3c5bfb6bfb746c8a345d30b554aff6263dcf576c0b1f2a4b60fae97dfa5" gracePeriod=2 Oct 11 10:07:31 crc kubenswrapper[5016]: I1011 10:07:31.648207 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qxlxl" Oct 11 10:07:31 crc kubenswrapper[5016]: I1011 10:07:31.829504 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w95zz\" (UniqueName: \"kubernetes.io/projected/de0649c6-ab04-4b5a-8eb1-64461482c8bb-kube-api-access-w95zz\") pod \"de0649c6-ab04-4b5a-8eb1-64461482c8bb\" (UID: \"de0649c6-ab04-4b5a-8eb1-64461482c8bb\") " Oct 11 10:07:31 crc kubenswrapper[5016]: I1011 10:07:31.829685 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de0649c6-ab04-4b5a-8eb1-64461482c8bb-utilities\") pod \"de0649c6-ab04-4b5a-8eb1-64461482c8bb\" (UID: \"de0649c6-ab04-4b5a-8eb1-64461482c8bb\") " Oct 11 10:07:31 crc kubenswrapper[5016]: I1011 10:07:31.829759 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de0649c6-ab04-4b5a-8eb1-64461482c8bb-catalog-content\") pod \"de0649c6-ab04-4b5a-8eb1-64461482c8bb\" (UID: \"de0649c6-ab04-4b5a-8eb1-64461482c8bb\") " Oct 11 10:07:31 crc kubenswrapper[5016]: I1011 10:07:31.831167 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de0649c6-ab04-4b5a-8eb1-64461482c8bb-utilities" (OuterVolumeSpecName: "utilities") pod "de0649c6-ab04-4b5a-8eb1-64461482c8bb" (UID: "de0649c6-ab04-4b5a-8eb1-64461482c8bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:07:31 crc kubenswrapper[5016]: I1011 10:07:31.835929 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de0649c6-ab04-4b5a-8eb1-64461482c8bb-kube-api-access-w95zz" (OuterVolumeSpecName: "kube-api-access-w95zz") pod "de0649c6-ab04-4b5a-8eb1-64461482c8bb" (UID: "de0649c6-ab04-4b5a-8eb1-64461482c8bb"). InnerVolumeSpecName "kube-api-access-w95zz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:07:31 crc kubenswrapper[5016]: I1011 10:07:31.860007 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de0649c6-ab04-4b5a-8eb1-64461482c8bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de0649c6-ab04-4b5a-8eb1-64461482c8bb" (UID: "de0649c6-ab04-4b5a-8eb1-64461482c8bb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:07:31 crc kubenswrapper[5016]: I1011 10:07:31.932420 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w95zz\" (UniqueName: \"kubernetes.io/projected/de0649c6-ab04-4b5a-8eb1-64461482c8bb-kube-api-access-w95zz\") on node \"crc\" DevicePath \"\"" Oct 11 10:07:31 crc kubenswrapper[5016]: I1011 10:07:31.932456 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de0649c6-ab04-4b5a-8eb1-64461482c8bb-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 10:07:31 crc kubenswrapper[5016]: I1011 10:07:31.932467 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de0649c6-ab04-4b5a-8eb1-64461482c8bb-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 10:07:32 crc kubenswrapper[5016]: I1011 10:07:32.106176 5016 generic.go:334] "Generic (PLEG): container finished" podID="de0649c6-ab04-4b5a-8eb1-64461482c8bb" containerID="a2a2b3c5bfb6bfb746c8a345d30b554aff6263dcf576c0b1f2a4b60fae97dfa5" exitCode=0 Oct 11 10:07:32 crc kubenswrapper[5016]: I1011 10:07:32.106265 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qxlxl" Oct 11 10:07:32 crc kubenswrapper[5016]: I1011 10:07:32.106266 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qxlxl" event={"ID":"de0649c6-ab04-4b5a-8eb1-64461482c8bb","Type":"ContainerDied","Data":"a2a2b3c5bfb6bfb746c8a345d30b554aff6263dcf576c0b1f2a4b60fae97dfa5"} Oct 11 10:07:32 crc kubenswrapper[5016]: I1011 10:07:32.106382 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qxlxl" event={"ID":"de0649c6-ab04-4b5a-8eb1-64461482c8bb","Type":"ContainerDied","Data":"5e4f0d32314db3d7c3a429239ecabfa83932199102830827bc1574256723f147"} Oct 11 10:07:32 crc kubenswrapper[5016]: I1011 10:07:32.106415 5016 scope.go:117] "RemoveContainer" containerID="a2a2b3c5bfb6bfb746c8a345d30b554aff6263dcf576c0b1f2a4b60fae97dfa5" Oct 11 10:07:32 crc kubenswrapper[5016]: I1011 10:07:32.133357 5016 scope.go:117] "RemoveContainer" containerID="5b53826146e71d630e13627d7c5be9fa3f1e231d79a4be167a4ce24d923c0c38" Oct 11 10:07:32 crc kubenswrapper[5016]: I1011 10:07:32.154918 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qxlxl"] Oct 11 10:07:32 crc kubenswrapper[5016]: I1011 10:07:32.171545 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qxlxl"] Oct 11 10:07:32 crc kubenswrapper[5016]: I1011 10:07:32.186423 5016 scope.go:117] "RemoveContainer" containerID="03dc8c10278b966a7bbaeacef7a6d5769d64ef561232cdf1297ff3a911d90282" Oct 11 10:07:32 crc kubenswrapper[5016]: I1011 10:07:32.255422 5016 scope.go:117] "RemoveContainer" containerID="a2a2b3c5bfb6bfb746c8a345d30b554aff6263dcf576c0b1f2a4b60fae97dfa5" Oct 11 10:07:32 crc kubenswrapper[5016]: E1011 10:07:32.256479 5016 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2a2b3c5bfb6bfb746c8a345d30b554aff6263dcf576c0b1f2a4b60fae97dfa5\": container with ID starting with a2a2b3c5bfb6bfb746c8a345d30b554aff6263dcf576c0b1f2a4b60fae97dfa5 not found: ID does not exist" containerID="a2a2b3c5bfb6bfb746c8a345d30b554aff6263dcf576c0b1f2a4b60fae97dfa5" Oct 11 10:07:32 crc kubenswrapper[5016]: I1011 10:07:32.256719 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2a2b3c5bfb6bfb746c8a345d30b554aff6263dcf576c0b1f2a4b60fae97dfa5"} err="failed to get container status \"a2a2b3c5bfb6bfb746c8a345d30b554aff6263dcf576c0b1f2a4b60fae97dfa5\": rpc error: code = NotFound desc = could not find container \"a2a2b3c5bfb6bfb746c8a345d30b554aff6263dcf576c0b1f2a4b60fae97dfa5\": container with ID starting with a2a2b3c5bfb6bfb746c8a345d30b554aff6263dcf576c0b1f2a4b60fae97dfa5 not found: ID does not exist" Oct 11 10:07:32 crc kubenswrapper[5016]: I1011 10:07:32.256874 5016 scope.go:117] "RemoveContainer" containerID="5b53826146e71d630e13627d7c5be9fa3f1e231d79a4be167a4ce24d923c0c38" Oct 11 10:07:32 crc kubenswrapper[5016]: E1011 10:07:32.257451 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b53826146e71d630e13627d7c5be9fa3f1e231d79a4be167a4ce24d923c0c38\": container with ID starting with 5b53826146e71d630e13627d7c5be9fa3f1e231d79a4be167a4ce24d923c0c38 not found: ID does not exist" containerID="5b53826146e71d630e13627d7c5be9fa3f1e231d79a4be167a4ce24d923c0c38" Oct 11 10:07:32 crc kubenswrapper[5016]: I1011 10:07:32.257517 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b53826146e71d630e13627d7c5be9fa3f1e231d79a4be167a4ce24d923c0c38"} err="failed to get container status \"5b53826146e71d630e13627d7c5be9fa3f1e231d79a4be167a4ce24d923c0c38\": rpc error: code = NotFound desc = could not find container \"5b53826146e71d630e13627d7c5be9fa3f1e231d79a4be167a4ce24d923c0c38\": container with ID starting with 5b53826146e71d630e13627d7c5be9fa3f1e231d79a4be167a4ce24d923c0c38 not found: ID does not exist" Oct 11 10:07:32 crc kubenswrapper[5016]: I1011 10:07:32.257558 5016 scope.go:117] "RemoveContainer" containerID="03dc8c10278b966a7bbaeacef7a6d5769d64ef561232cdf1297ff3a911d90282" Oct 11 10:07:32 crc kubenswrapper[5016]: E1011 10:07:32.257987 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03dc8c10278b966a7bbaeacef7a6d5769d64ef561232cdf1297ff3a911d90282\": container with ID starting with 03dc8c10278b966a7bbaeacef7a6d5769d64ef561232cdf1297ff3a911d90282 not found: ID does not exist" containerID="03dc8c10278b966a7bbaeacef7a6d5769d64ef561232cdf1297ff3a911d90282" Oct 11 10:07:32 crc kubenswrapper[5016]: I1011 10:07:32.258022 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03dc8c10278b966a7bbaeacef7a6d5769d64ef561232cdf1297ff3a911d90282"} err="failed to get container status \"03dc8c10278b966a7bbaeacef7a6d5769d64ef561232cdf1297ff3a911d90282\": rpc error: code = NotFound desc = could not find container \"03dc8c10278b966a7bbaeacef7a6d5769d64ef561232cdf1297ff3a911d90282\": container with ID starting with 03dc8c10278b966a7bbaeacef7a6d5769d64ef561232cdf1297ff3a911d90282 not found: ID does not exist" Oct 11 10:07:33 crc kubenswrapper[5016]: I1011 10:07:33.149726 5016 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="de0649c6-ab04-4b5a-8eb1-64461482c8bb" path="/var/lib/kubelet/pods/de0649c6-ab04-4b5a-8eb1-64461482c8bb/volumes" Oct 11 10:07:38 crc kubenswrapper[5016]: I1011 10:07:38.634186 5016 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x8tsz"] Oct 11 10:07:38 crc kubenswrapper[5016]: E1011 10:07:38.635542 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de0649c6-ab04-4b5a-8eb1-64461482c8bb" containerName="extract-utilities" Oct 11 10:07:38 crc kubenswrapper[5016]: I1011 10:07:38.635566 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="de0649c6-ab04-4b5a-8eb1-64461482c8bb" containerName="extract-utilities" Oct 11 10:07:38 crc kubenswrapper[5016]: E1011 10:07:38.635612 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de0649c6-ab04-4b5a-8eb1-64461482c8bb" containerName="extract-content" Oct 11 10:07:38 crc kubenswrapper[5016]: I1011 10:07:38.635626 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="de0649c6-ab04-4b5a-8eb1-64461482c8bb" containerName="extract-content" Oct 11 10:07:38 crc kubenswrapper[5016]: E1011 10:07:38.635712 5016 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de0649c6-ab04-4b5a-8eb1-64461482c8bb" containerName="registry-server" Oct 11 10:07:38 crc kubenswrapper[5016]: I1011 10:07:38.635728 5016 state_mem.go:107] "Deleted CPUSet assignment" podUID="de0649c6-ab04-4b5a-8eb1-64461482c8bb" containerName="registry-server" Oct 11 10:07:38 crc kubenswrapper[5016]: I1011 10:07:38.636135 5016 memory_manager.go:354] "RemoveStaleState removing state" podUID="de0649c6-ab04-4b5a-8eb1-64461482c8bb" containerName="registry-server" Oct 11 10:07:38 crc kubenswrapper[5016]: I1011 10:07:38.638752 5016 util.go:30] "No sandbox for pod can be found. 
Oct 11 10:07:38 crc kubenswrapper[5016]: I1011 10:07:38.638752 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x8tsz"
Oct 11 10:07:38 crc kubenswrapper[5016]: I1011 10:07:38.671757 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x8tsz"]
Oct 11 10:07:38 crc kubenswrapper[5016]: I1011 10:07:38.818376 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmwbk\" (UniqueName: \"kubernetes.io/projected/d3356a7e-d170-443b-b65b-06ff6cbc6033-kube-api-access-xmwbk\") pod \"certified-operators-x8tsz\" (UID: \"d3356a7e-d170-443b-b65b-06ff6cbc6033\") " pod="openshift-marketplace/certified-operators-x8tsz"
Oct 11 10:07:38 crc kubenswrapper[5016]: I1011 10:07:38.818465 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3356a7e-d170-443b-b65b-06ff6cbc6033-catalog-content\") pod \"certified-operators-x8tsz\" (UID: \"d3356a7e-d170-443b-b65b-06ff6cbc6033\") " pod="openshift-marketplace/certified-operators-x8tsz"
Oct 11 10:07:38 crc kubenswrapper[5016]: I1011 10:07:38.818631 5016 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3356a7e-d170-443b-b65b-06ff6cbc6033-utilities\") pod \"certified-operators-x8tsz\" (UID: \"d3356a7e-d170-443b-b65b-06ff6cbc6033\") " pod="openshift-marketplace/certified-operators-x8tsz"
Oct 11 10:07:38 crc kubenswrapper[5016]: I1011 10:07:38.920572 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmwbk\" (UniqueName: \"kubernetes.io/projected/d3356a7e-d170-443b-b65b-06ff6cbc6033-kube-api-access-xmwbk\") pod \"certified-operators-x8tsz\" (UID: \"d3356a7e-d170-443b-b65b-06ff6cbc6033\") " pod="openshift-marketplace/certified-operators-x8tsz"
Oct 11 10:07:38 crc kubenswrapper[5016]: I1011 10:07:38.920732 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3356a7e-d170-443b-b65b-06ff6cbc6033-catalog-content\") pod \"certified-operators-x8tsz\" (UID: \"d3356a7e-d170-443b-b65b-06ff6cbc6033\") " pod="openshift-marketplace/certified-operators-x8tsz"
Oct 11 10:07:38 crc kubenswrapper[5016]: I1011 10:07:38.920843 5016 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3356a7e-d170-443b-b65b-06ff6cbc6033-utilities\") pod \"certified-operators-x8tsz\" (UID: \"d3356a7e-d170-443b-b65b-06ff6cbc6033\") " pod="openshift-marketplace/certified-operators-x8tsz"
Oct 11 10:07:38 crc kubenswrapper[5016]: I1011 10:07:38.921490 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3356a7e-d170-443b-b65b-06ff6cbc6033-utilities\") pod \"certified-operators-x8tsz\" (UID: \"d3356a7e-d170-443b-b65b-06ff6cbc6033\") " pod="openshift-marketplace/certified-operators-x8tsz"
Oct 11 10:07:38 crc kubenswrapper[5016]: I1011 10:07:38.921584 5016 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3356a7e-d170-443b-b65b-06ff6cbc6033-catalog-content\") pod \"certified-operators-x8tsz\" (UID: \"d3356a7e-d170-443b-b65b-06ff6cbc6033\") " pod="openshift-marketplace/certified-operators-x8tsz"
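These reconciler lines trace the kubelet's desired-state/actual-state volume loop: each of the pod's three volumes (a projected service-account token plus two emptyDirs) is verified as attached, then mounted. A compressed sketch of that loop, with names simplified far beyond the real implementation:

    # Compressed sketch of the volume reconcile loop; real kubelet code
    # tracks much more state. Volume names and plugins come from the log.
    desired = {
        "kube-api-access-xmwbk": "kubernetes.io/projected",
        "catalog-content": "kubernetes.io/empty-dir",
        "utilities": "kubernetes.io/empty-dir",
    }
    mounted = set()

    for name, plugin in desired.items():
        if name not in mounted:
            # VerifyControllerAttachedVolume, then MountVolume.SetUp;
            # success is logged as "MountVolume.SetUp succeeded".
            mounted.add(name)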
"MountVolume.SetUp succeeded for volume \"kube-api-access-xmwbk\" (UniqueName: \"kubernetes.io/projected/d3356a7e-d170-443b-b65b-06ff6cbc6033-kube-api-access-xmwbk\") pod \"certified-operators-x8tsz\" (UID: \"d3356a7e-d170-443b-b65b-06ff6cbc6033\") " pod="openshift-marketplace/certified-operators-x8tsz" Oct 11 10:07:38 crc kubenswrapper[5016]: I1011 10:07:38.981628 5016 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x8tsz" Oct 11 10:07:39 crc kubenswrapper[5016]: I1011 10:07:39.516665 5016 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x8tsz"] Oct 11 10:07:40 crc kubenswrapper[5016]: I1011 10:07:40.229381 5016 generic.go:334] "Generic (PLEG): container finished" podID="d3356a7e-d170-443b-b65b-06ff6cbc6033" containerID="473d07f1639eeb0e645db617b8b263ee6a7adf7137885bb2eb83b0fe9f324da4" exitCode=0 Oct 11 10:07:40 crc kubenswrapper[5016]: I1011 10:07:40.229508 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8tsz" event={"ID":"d3356a7e-d170-443b-b65b-06ff6cbc6033","Type":"ContainerDied","Data":"473d07f1639eeb0e645db617b8b263ee6a7adf7137885bb2eb83b0fe9f324da4"} Oct 11 10:07:40 crc kubenswrapper[5016]: I1011 10:07:40.229895 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8tsz" event={"ID":"d3356a7e-d170-443b-b65b-06ff6cbc6033","Type":"ContainerStarted","Data":"c626ef840616f4c1342d7995d96bb081a893b5c42649bf167fa147a889ecc174"} Oct 11 10:07:41 crc kubenswrapper[5016]: I1011 10:07:41.245089 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8tsz" event={"ID":"d3356a7e-d170-443b-b65b-06ff6cbc6033","Type":"ContainerStarted","Data":"fc0a6015e9e44500772c7acd42e6b43d1e54892bbb4e56cf17a40ed684c2d759"} Oct 11 10:07:43 crc kubenswrapper[5016]: I1011 10:07:43.285019 5016 generic.go:334] "Generic (PLEG): container finished" podID="d3356a7e-d170-443b-b65b-06ff6cbc6033" containerID="fc0a6015e9e44500772c7acd42e6b43d1e54892bbb4e56cf17a40ed684c2d759" exitCode=0 Oct 11 10:07:43 crc kubenswrapper[5016]: I1011 10:07:43.285095 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8tsz" event={"ID":"d3356a7e-d170-443b-b65b-06ff6cbc6033","Type":"ContainerDied","Data":"fc0a6015e9e44500772c7acd42e6b43d1e54892bbb4e56cf17a40ed684c2d759"} Oct 11 10:07:44 crc kubenswrapper[5016]: I1011 10:07:44.297576 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8tsz" event={"ID":"d3356a7e-d170-443b-b65b-06ff6cbc6033","Type":"ContainerStarted","Data":"53de43c2d0a10651f9fe3b78a9f32624816b9975bc69d24e3e89afc611770f40"} Oct 11 10:07:44 crc kubenswrapper[5016]: I1011 10:07:44.330271 5016 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x8tsz" podStartSLOduration=2.8565238219999998 podStartE2EDuration="6.330244573s" podCreationTimestamp="2025-10-11 10:07:38 +0000 UTC" firstStartedPulling="2025-10-11 10:07:40.232487528 +0000 UTC m=+8848.132943504" lastFinishedPulling="2025-10-11 10:07:43.706208299 +0000 UTC m=+8851.606664255" observedRunningTime="2025-10-11 10:07:44.322910438 +0000 UTC m=+8852.223366424" watchObservedRunningTime="2025-10-11 10:07:44.330244573 +0000 UTC m=+8852.230700519" Oct 11 10:07:48 crc kubenswrapper[5016]: E1011 10:07:48.135028 5016 kubelet_pods.go:538] "Hostname for pod was too 
long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Oct 11 10:07:48 crc kubenswrapper[5016]: I1011 10:07:48.983193 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x8tsz" Oct 11 10:07:48 crc kubenswrapper[5016]: I1011 10:07:48.983354 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x8tsz" Oct 11 10:07:49 crc kubenswrapper[5016]: I1011 10:07:49.047254 5016 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x8tsz" Oct 11 10:07:49 crc kubenswrapper[5016]: I1011 10:07:49.429051 5016 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x8tsz" Oct 11 10:07:49 crc kubenswrapper[5016]: I1011 10:07:49.499128 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x8tsz"] Oct 11 10:07:51 crc kubenswrapper[5016]: I1011 10:07:51.412113 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x8tsz" podUID="d3356a7e-d170-443b-b65b-06ff6cbc6033" containerName="registry-server" containerID="cri-o://53de43c2d0a10651f9fe3b78a9f32624816b9975bc69d24e3e89afc611770f40" gracePeriod=2 Oct 11 10:07:52 crc kubenswrapper[5016]: I1011 10:07:52.038362 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x8tsz" Oct 11 10:07:52 crc kubenswrapper[5016]: I1011 10:07:52.150459 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3356a7e-d170-443b-b65b-06ff6cbc6033-catalog-content\") pod \"d3356a7e-d170-443b-b65b-06ff6cbc6033\" (UID: \"d3356a7e-d170-443b-b65b-06ff6cbc6033\") " Oct 11 10:07:52 crc kubenswrapper[5016]: I1011 10:07:52.151008 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3356a7e-d170-443b-b65b-06ff6cbc6033-utilities\") pod \"d3356a7e-d170-443b-b65b-06ff6cbc6033\" (UID: \"d3356a7e-d170-443b-b65b-06ff6cbc6033\") " Oct 11 10:07:52 crc kubenswrapper[5016]: I1011 10:07:52.151256 5016 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmwbk\" (UniqueName: \"kubernetes.io/projected/d3356a7e-d170-443b-b65b-06ff6cbc6033-kube-api-access-xmwbk\") pod \"d3356a7e-d170-443b-b65b-06ff6cbc6033\" (UID: \"d3356a7e-d170-443b-b65b-06ff6cbc6033\") " Oct 11 10:07:52 crc kubenswrapper[5016]: I1011 10:07:52.153726 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3356a7e-d170-443b-b65b-06ff6cbc6033-utilities" (OuterVolumeSpecName: "utilities") pod "d3356a7e-d170-443b-b65b-06ff6cbc6033" (UID: "d3356a7e-d170-443b-b65b-06ff6cbc6033"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:07:52 crc kubenswrapper[5016]: I1011 10:07:52.163256 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3356a7e-d170-443b-b65b-06ff6cbc6033-kube-api-access-xmwbk" (OuterVolumeSpecName: "kube-api-access-xmwbk") pod "d3356a7e-d170-443b-b65b-06ff6cbc6033" (UID: "d3356a7e-d170-443b-b65b-06ff6cbc6033"). InnerVolumeSpecName "kube-api-access-xmwbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 11 10:07:52 crc kubenswrapper[5016]: I1011 10:07:52.257367 5016 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmwbk\" (UniqueName: \"kubernetes.io/projected/d3356a7e-d170-443b-b65b-06ff6cbc6033-kube-api-access-xmwbk\") on node \"crc\" DevicePath \"\"" Oct 11 10:07:52 crc kubenswrapper[5016]: I1011 10:07:52.257413 5016 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3356a7e-d170-443b-b65b-06ff6cbc6033-utilities\") on node \"crc\" DevicePath \"\"" Oct 11 10:07:52 crc kubenswrapper[5016]: I1011 10:07:52.431326 5016 generic.go:334] "Generic (PLEG): container finished" podID="d3356a7e-d170-443b-b65b-06ff6cbc6033" containerID="53de43c2d0a10651f9fe3b78a9f32624816b9975bc69d24e3e89afc611770f40" exitCode=0 Oct 11 10:07:52 crc kubenswrapper[5016]: I1011 10:07:52.431443 5016 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x8tsz" Oct 11 10:07:52 crc kubenswrapper[5016]: I1011 10:07:52.431449 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8tsz" event={"ID":"d3356a7e-d170-443b-b65b-06ff6cbc6033","Type":"ContainerDied","Data":"53de43c2d0a10651f9fe3b78a9f32624816b9975bc69d24e3e89afc611770f40"} Oct 11 10:07:52 crc kubenswrapper[5016]: I1011 10:07:52.431749 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8tsz" event={"ID":"d3356a7e-d170-443b-b65b-06ff6cbc6033","Type":"ContainerDied","Data":"c626ef840616f4c1342d7995d96bb081a893b5c42649bf167fa147a889ecc174"} Oct 11 10:07:52 crc kubenswrapper[5016]: I1011 10:07:52.431837 5016 scope.go:117] "RemoveContainer" containerID="53de43c2d0a10651f9fe3b78a9f32624816b9975bc69d24e3e89afc611770f40" Oct 11 10:07:52 crc kubenswrapper[5016]: I1011 10:07:52.468689 5016 scope.go:117] "RemoveContainer" containerID="fc0a6015e9e44500772c7acd42e6b43d1e54892bbb4e56cf17a40ed684c2d759" Oct 11 10:07:52 crc kubenswrapper[5016]: I1011 10:07:52.516952 5016 scope.go:117] "RemoveContainer" containerID="473d07f1639eeb0e645db617b8b263ee6a7adf7137885bb2eb83b0fe9f324da4" Oct 11 10:07:52 crc kubenswrapper[5016]: I1011 10:07:52.576524 5016 scope.go:117] "RemoveContainer" containerID="53de43c2d0a10651f9fe3b78a9f32624816b9975bc69d24e3e89afc611770f40" Oct 11 10:07:52 crc kubenswrapper[5016]: E1011 10:07:52.577216 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53de43c2d0a10651f9fe3b78a9f32624816b9975bc69d24e3e89afc611770f40\": container with ID starting with 53de43c2d0a10651f9fe3b78a9f32624816b9975bc69d24e3e89afc611770f40 not found: ID does not exist" containerID="53de43c2d0a10651f9fe3b78a9f32624816b9975bc69d24e3e89afc611770f40" Oct 11 10:07:52 crc kubenswrapper[5016]: I1011 10:07:52.577292 5016 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"53de43c2d0a10651f9fe3b78a9f32624816b9975bc69d24e3e89afc611770f40"} err="failed to get container status \"53de43c2d0a10651f9fe3b78a9f32624816b9975bc69d24e3e89afc611770f40\": rpc error: code = NotFound desc = could not find container \"53de43c2d0a10651f9fe3b78a9f32624816b9975bc69d24e3e89afc611770f40\": container with ID starting with 53de43c2d0a10651f9fe3b78a9f32624816b9975bc69d24e3e89afc611770f40 not found: ID does not exist" Oct 11 10:07:52 crc kubenswrapper[5016]: I1011 10:07:52.577336 5016 scope.go:117] "RemoveContainer" containerID="fc0a6015e9e44500772c7acd42e6b43d1e54892bbb4e56cf17a40ed684c2d759" Oct 11 10:07:52 crc kubenswrapper[5016]: E1011 10:07:52.577993 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc0a6015e9e44500772c7acd42e6b43d1e54892bbb4e56cf17a40ed684c2d759\": container with ID starting with fc0a6015e9e44500772c7acd42e6b43d1e54892bbb4e56cf17a40ed684c2d759 not found: ID does not exist" containerID="fc0a6015e9e44500772c7acd42e6b43d1e54892bbb4e56cf17a40ed684c2d759" Oct 11 10:07:52 crc kubenswrapper[5016]: I1011 10:07:52.578082 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc0a6015e9e44500772c7acd42e6b43d1e54892bbb4e56cf17a40ed684c2d759"} err="failed to get container status \"fc0a6015e9e44500772c7acd42e6b43d1e54892bbb4e56cf17a40ed684c2d759\": rpc error: code = NotFound desc = could not find container \"fc0a6015e9e44500772c7acd42e6b43d1e54892bbb4e56cf17a40ed684c2d759\": container with ID starting with fc0a6015e9e44500772c7acd42e6b43d1e54892bbb4e56cf17a40ed684c2d759 not found: ID does not exist" Oct 11 10:07:52 crc kubenswrapper[5016]: I1011 10:07:52.578152 5016 scope.go:117] "RemoveContainer" containerID="473d07f1639eeb0e645db617b8b263ee6a7adf7137885bb2eb83b0fe9f324da4" Oct 11 10:07:52 crc kubenswrapper[5016]: E1011 10:07:52.578731 5016 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"473d07f1639eeb0e645db617b8b263ee6a7adf7137885bb2eb83b0fe9f324da4\": container with ID starting with 473d07f1639eeb0e645db617b8b263ee6a7adf7137885bb2eb83b0fe9f324da4 not found: ID does not exist" containerID="473d07f1639eeb0e645db617b8b263ee6a7adf7137885bb2eb83b0fe9f324da4" Oct 11 10:07:52 crc kubenswrapper[5016]: I1011 10:07:52.578779 5016 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"473d07f1639eeb0e645db617b8b263ee6a7adf7137885bb2eb83b0fe9f324da4"} err="failed to get container status \"473d07f1639eeb0e645db617b8b263ee6a7adf7137885bb2eb83b0fe9f324da4\": rpc error: code = NotFound desc = could not find container \"473d07f1639eeb0e645db617b8b263ee6a7adf7137885bb2eb83b0fe9f324da4\": container with ID starting with 473d07f1639eeb0e645db617b8b263ee6a7adf7137885bb2eb83b0fe9f324da4 not found: ID does not exist" Oct 11 10:07:52 crc kubenswrapper[5016]: I1011 10:07:52.646308 5016 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3356a7e-d170-443b-b65b-06ff6cbc6033-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3356a7e-d170-443b-b65b-06ff6cbc6033" (UID: "d3356a7e-d170-443b-b65b-06ff6cbc6033"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 11 10:07:52 crc kubenswrapper[5016]: I1011 10:07:52.668058 5016 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3356a7e-d170-443b-b65b-06ff6cbc6033-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 11 10:07:52 crc kubenswrapper[5016]: I1011 10:07:52.777915 5016 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x8tsz"] Oct 11 10:07:52 crc kubenswrapper[5016]: I1011 10:07:52.784953 5016 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x8tsz"] Oct 11 10:07:53 crc kubenswrapper[5016]: I1011 10:07:53.155100 5016 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3356a7e-d170-443b-b65b-06ff6cbc6033" path="/var/lib/kubelet/pods/d3356a7e-d170-443b-b65b-06ff6cbc6033/volumes" Oct 11 10:08:07 crc kubenswrapper[5016]: I1011 10:08:07.122753 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 10:08:07 crc kubenswrapper[5016]: I1011 10:08:07.124355 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 10:08:37 crc kubenswrapper[5016]: I1011 10:08:37.122785 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 10:08:37 crc kubenswrapper[5016]: I1011 10:08:37.123746 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 10:08:45 crc kubenswrapper[5016]: I1011 10:08:45.701360 5016 scope.go:117] "RemoveContainer" containerID="8af89672b025201b9aaacf07456c6b0a22f71cf2b4912f1cc5e0c87b9ed90402" Oct 11 10:08:54 crc kubenswrapper[5016]: E1011 10:08:54.134247 5016 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Oct 11 10:09:07 crc kubenswrapper[5016]: I1011 10:09:07.122718 5016 patch_prober.go:28] interesting pod/machine-config-daemon-49bvc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 11 10:09:07 crc kubenswrapper[5016]: I1011 10:09:07.123688 5016 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 11 10:09:07 crc kubenswrapper[5016]: I1011 10:09:07.123761 5016 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" Oct 11 10:09:07 crc kubenswrapper[5016]: I1011 10:09:07.125030 5016 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c2bf2afed5cb941fe305d49963bd6a2f38161ee64be4ab662d10eb4dfe4f825e"} pod="openshift-machine-config-operator/machine-config-daemon-49bvc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 11 10:09:07 crc kubenswrapper[5016]: I1011 10:09:07.125138 5016 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" podUID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerName="machine-config-daemon" containerID="cri-o://c2bf2afed5cb941fe305d49963bd6a2f38161ee64be4ab662d10eb4dfe4f825e" gracePeriod=600 Oct 11 10:09:07 crc kubenswrapper[5016]: I1011 10:09:07.375066 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerDied","Data":"c2bf2afed5cb941fe305d49963bd6a2f38161ee64be4ab662d10eb4dfe4f825e"} Oct 11 10:09:07 crc kubenswrapper[5016]: I1011 10:09:07.375705 5016 scope.go:117] "RemoveContainer" containerID="22c9a9509bed25f42973f16f6fc5f23c794ea6f431687a6145f44aa3150aaea0" Oct 11 10:09:07 crc kubenswrapper[5016]: I1011 10:09:07.375011 5016 generic.go:334] "Generic (PLEG): container finished" podID="0633ed26-7b6a-4a20-92ba-569891d9faff" containerID="c2bf2afed5cb941fe305d49963bd6a2f38161ee64be4ab662d10eb4dfe4f825e" exitCode=0 Oct 11 10:09:08 crc kubenswrapper[5016]: I1011 10:09:08.393350 5016 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-49bvc" event={"ID":"0633ed26-7b6a-4a20-92ba-569891d9faff","Type":"ContainerStarted","Data":"1d75e0ff9402c98333f8264acf50308a06ec285e0a3a57c5bf7865a84b3e2d4b"}